January 31, 2020
 

Command-line parameter reference for the two n2n node types (supernode and edge)

[root@host1 ~]# /usr/local/n2n/sbin/supernode --help
Welcome to n2n v.2.5.1.r244.46aaa86 for x86_64-unknown-linux-gnu
Built on Jan 31 2020 06:48:19
Copyright 2007-19 - ntop.org and contributors

supernode <config file> (see supernode.conf)
or
supernode -l <lport> -c <path> [-v]

-l <lport> Set UDP main listen port to <lport>
-c <path> File containing the allowed communities.
-v Increase verbosity. Can be used multiple times.
-h This help message.

[root@host1 ~]#

 

[root@host1 ~]# /usr/local/n2n/sbin/edge --help
Welcome to n2n v.2.5.1.r244.46aaa86 for x86_64-unknown-linux-gnu
Built on Jan 31 2020 06:48:19
Copyright 2007-19 - ntop.org and contributors

edge <config file> (see edge.conf)
or
edge -d <tun device> -a [static:|dhcp:]<tun IP address> -c <community> [-k <encrypt key>]
[-s <netmask>] [-u <uid> -g <gid>][-f][-T <tos>][-m <MAC address>] -l <supernode host:port>
[-p <local port>] [-M <mtu>] [-D] [-r] [-E] [-v] [-i <reg_interval>] [-L <reg_ttl>] [-t <mgmt port>] [-A] [-h]

-d <tun device> | tun device name
-a <mode:address> | Set interface address. For DHCP use '-r -a dhcp:0.0.0.0'
-c <community> | n2n community name the edge belongs to.
-k <encrypt key> | Encryption key (ASCII) - also N2N_KEY=<encrypt key>.
-s <netmask> | Edge interface netmask in dotted decimal notation (255.255.255.0).
-l <supernode host:port> | Supernode IP:port
-i <reg_interval> | Registration interval, for NAT hole punching (default 20 seconds)
-L <reg_ttl> | TTL for registration packet when UDP NAT hole punching through supernode (default 0 for not set )
-p <local port> | Fixed local UDP port.
-u <UID> | User ID (numeric) to use when privileges are dropped.
-g <GID> | Group ID (numeric) to use when privileges are dropped.
-f | Do not fork and run as a daemon; rather run in foreground.
-m <MAC address> | Fix MAC address for the TAP interface (otherwise it may be random)
| eg. -m 01:02:03:04:05:06
-M <mtu> | Specify n2n MTU of edge interface (default 1290).
-D | Enable PMTU discovery. PMTU discovery can reduce fragmentation but
| causes connections stall when not properly supported.
-r | Enable packet forwarding through n2n community.
-E | Accept multicast MAC addresses (default=drop).
-S | Do not connect P2P. Always use the supernode.
-T <tos> | TOS for packets (e.g. 0x48 for SSH like priority)
-v | Make more verbose. Repeat as required.
-t <port> | Management UDP Port (for multiple edges on a machine).

Environment variables:
N2N_KEY | Encryption key (ASCII). Not with -k.
[root@host1 ~]#
January 31, 2020
 

Install and configure n2n on each of the three hosts

host1:supernode/edge(192.168.172.1)
host2:edge(192.168.172.2)
host3:edge(192.168.172.3)
Community:linuxcache
Pre-Shared Key:5tgb6yhn7ujm

Disable the firewall

[root@host1 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
[root@host1 ~]# systemctl stop firewalld
[root@host1 ~]#

Install the build tools

[root@host1 ~]# yum -y install git gcc automake

Download the source code

https://github.com/ntop/n2n.git

[root@host1 ~]# git clone https://github.com/ntop/n2n.git
Cloning into 'n2n'...
remote: Enumerating objects: 1572, done.
remote: Total 1572 (delta 0), reused 0 (delta 0), pack-reused 1572
Receiving objects: 100% (1572/1572), 970.16 KiB | 0 bytes/s, done.
Resolving deltas: 100% (858/858), done.
[root@host1 ~]#

Compile and install

[root@host1 ~]# cd n2n/
[root@host1 n2n]# ./autogen.sh
[root@host1 n2n]# ./configure
[root@host1 n2n]# make
[root@host1 n2n]# make PREFIX=/usr/local/n2n/ install
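A quick sanity check after installation (a sketch, assuming the PREFIX used above): confirm both binaries are in place and, optionally, add them to the PATH of the current shell.

ls /usr/local/n2n/sbin/                  # should list edge and supernode
export PATH=$PATH:/usr/local/n2n/sbin    # optional, current shell only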

Configure the supernode service unit on host1

[root@host1 ~]# vi /usr/lib/systemd/system/n2n_supernode.service
[Unit]
Description=n2n supernode
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/n2n/sbin/supernode -l 1200

[Install]
WantedBy=multi-user.target

Enable and start the service

[root@host1 ~]# systemctl enable n2n_supernode
Created symlink from /etc/systemd/system/multi-user.target.wants/n2n_supernode.service to /usr/lib/systemd/system/n2n_supernode.service.
[root@host1 ~]# systemctl start n2n_supernode
[root@host1 ~]#
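Optionally, verify that the supernode is running and listening on UDP 1200; a sketch (output omitted):

systemctl status n2n_supernode --no-pager
ss -lnup | grep 1200                     # the supernode's UDP listen port
journalctl -u n2n_supernode -n 20 --no-pager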

Configure the edge service unit on host1

[root@host1 ~]# vi /usr/lib/systemd/system/n2n_edge.service
[Unit]
Description=n2n edge
Wants=network-online.target
After=network-online.target n2n_supernode.service

[Service]
ExecStart=/usr/local/n2n/sbin/edge -l localhost:1200 -c linuxcache -a 192.168.172.1 -k 5tgb6yhn7ujm -f

[Install]
WantedBy=multi-user.target
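As the --help output above notes, the encryption key can also be supplied through the N2N_KEY environment variable instead of -k (the two must not be combined). A hedged alternative [Service] section that keeps the key off the edge command line:

[Service]
Environment=N2N_KEY=5tgb6yhn7ujm
ExecStart=/usr/local/n2n/sbin/edge -l localhost:1200 -c linuxcache -a 192.168.172.1 -f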

Enable and start the service

[root@host1 ~]# systemctl enable n2n_edge
Created symlink from /etc/systemd/system/multi-user.target.wants/n2n_edge.service to /usr/lib/systemd/system/n2n_edge.service.
[root@host1 ~]# systemctl start n2n_edge
[root@host1 ~]#

Check the interface information on host1

[root@host1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 56:00:02:83:72:1e brd ff:ff:ff:ff:ff:ff
inet 144.202.116.133/23 brd 144.202.117.255 scope global dynamic eth0
valid_lft 84884sec preferred_lft 84884sec
inet6 fe80::5400:2ff:fe83:721e/64 scope link
valid_lft forever preferred_lft forever
3: edge0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1290 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether de:23:7d:c9:85:e0 brd ff:ff:ff:ff:ff:ff
inet 192.168.172.1/24 brd 192.168.172.255 scope global edge0
valid_lft forever preferred_lft forever
inet6 fe80::dc23:7dff:fec9:85e0/64 scope link
valid_lft forever preferred_lft forever
[root@host1 ~]#

Configure the edge service unit on host2

[root@host2 ~]# vi /usr/lib/systemd/system/n2n_edge.service
[Unit]
Description=n2n edge
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/n2n/sbin/edge -l 144.202.116.133:1200 -c linuxcache -a 192.168.172.2 -k 5tgb6yhn7ujm -f

[Install]
WantedBy=multi-user.target

Enable and start the service

[root@host2 ~]# systemctl enable n2n_edge
Created symlink from /etc/systemd/system/multi-user.target.wants/n2n_edge.service to /usr/lib/systemd/system/n2n_edge.service.
[root@host2 ~]# systemctl start n2n_edge
[root@host2 ~]#

Check the interface information on host2

[root@host2 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 56:00:02:83:72:1f brd ff:ff:ff:ff:ff:ff
inet 149.28.93.246/23 brd 149.28.93.255 scope global dynamic eth0
valid_lft 78885sec preferred_lft 78885sec
inet6 fe80::5400:2ff:fe83:721f/64 scope link
valid_lft forever preferred_lft forever
3: edge0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1290 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether ae:04:8c:77:da:be brd ff:ff:ff:ff:ff:ff
inet 192.168.172.2/24 brd 192.168.172.255 scope global edge0
valid_lft forever preferred_lft forever
inet6 fe80::ac04:8cff:fe77:dabe/64 scope link
valid_lft forever preferred_lft forever
[root@host2 ~]#

Test node connectivity (from host2)

[root@host2 ~]# ping -c 4 192.168.172.1
PING 192.168.172.1 (192.168.172.1) 56(84) bytes of data.
64 bytes from 192.168.172.1: icmp_seq=1 ttl=64 time=0.877 ms
64 bytes from 192.168.172.1: icmp_seq=2 ttl=64 time=0.733 ms
64 bytes from 192.168.172.1: icmp_seq=3 ttl=64 time=0.844 ms
64 bytes from 192.168.172.1: icmp_seq=4 ttl=64 time=0.958 ms

--- 192.168.172.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.733/0.853/0.958/0.080 ms
[root@host2 ~]#

Configure the edge service unit on host3

[root@host3 ~]# vi /usr/lib/systemd/system/n2n_edge.service
[Unit]
Description=n2n edge
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/n2n/sbin/edge -l 144.202.116.133:1200 -c linuxcache -a 192.168.172.3 -k 5tgb6yhn7ujm -f

[Install]
WantedBy=multi-user.target

Enable and start the service

[root@host3 n2n]# systemctl enable n2n_edge
Created symlink from /etc/systemd/system/multi-user.target.wants/n2n_edge.service to /usr/lib/systemd/system/n2n_edge.service.
[root@host3 n2n]# systemctl start n2n_edge
[root@host3 n2n]#

Check the interface information on host3

[root@host3 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 56:00:02:83:80:03 brd ff:ff:ff:ff:ff:ff
inet 45.32.224.80/22 brd 45.32.227.255 scope global dynamic eth0
valid_lft 78416sec preferred_lft 78416sec
inet6 fe80::5400:2ff:fe83:8003/64 scope link
valid_lft forever preferred_lft forever
3: edge0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1290 qdisc pfifo_fast state UNKNOWN group default qlen 1000
link/ether d2:31:7d:96:46:41 brd ff:ff:ff:ff:ff:ff
inet 192.168.172.3/24 brd 192.168.172.255 scope global edge0
valid_lft forever preferred_lft forever
inet6 fe80::d031:7dff:fe96:4641/64 scope link
valid_lft forever preferred_lft forever
[root@host3 ~]#

Test node connectivity (from host3)

[root@host3 ~]# ping -c 4 192.168.172.1
PING 192.168.172.1 (192.168.172.1) 56(84) bytes of data.
64 bytes from 192.168.172.1: icmp_seq=1 ttl=64 time=59.0 ms
64 bytes from 192.168.172.1: icmp_seq=2 ttl=64 time=25.9 ms
64 bytes from 192.168.172.1: icmp_seq=3 ttl=64 time=26.0 ms
64 bytes from 192.168.172.1: icmp_seq=4 ttl=64 time=27.2 ms

--- 192.168.172.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 25.963/34.564/59.058/14.150 ms
[root@host3 ~]# ping -c 4 192.168.172.2
PING 192.168.172.2 (192.168.172.2) 56(84) bytes of data.
64 bytes from 192.168.172.2: icmp_seq=1 ttl=64 time=52.1 ms
64 bytes from 192.168.172.2: icmp_seq=2 ttl=64 time=26.0 ms
64 bytes from 192.168.172.2: icmp_seq=3 ttl=64 time=26.0 ms
64 bytes from 192.168.172.2: icmp_seq=4 ttl=64 time=25.9 ms

--- 192.168.172.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 25.963/32.542/52.115/11.301 ms
[root@host3 ~]#

Test node connectivity (from host1)

[root@host1 ~]# ping -c 4 192.168.172.2
PING 192.168.172.2 (192.168.172.2) 56(84) bytes of data.
64 bytes from 192.168.172.2: icmp_seq=1 ttl=64 time=1.43 ms
64 bytes from 192.168.172.2: icmp_seq=2 ttl=64 time=0.666 ms
64 bytes from 192.168.172.2: icmp_seq=3 ttl=64 time=0.840 ms
64 bytes from 192.168.172.2: icmp_seq=4 ttl=64 time=0.921 ms

--- 192.168.172.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.666/0.964/1.432/0.287 ms
[root@host1 ~]# ping -c 4 192.168.172.3
PING 192.168.172.3 (192.168.172.3) 56(84) bytes of data.
64 bytes from 192.168.172.3: icmp_seq=1 ttl=64 time=33.4 ms
64 bytes from 192.168.172.3: icmp_seq=2 ttl=64 time=26.0 ms
64 bytes from 192.168.172.3: icmp_seq=3 ttl=64 time=26.3 ms
64 bytes from 192.168.172.3: icmp_seq=4 ttl=64 time=25.8 ms

--- 192.168.172.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 25.859/27.923/33.456/3.202 ms
[root@host1 ~]#
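The edge interfaces use an MTU of 1290, so the largest ICMP payload that fits without fragmentation is 1290 - 28 = 1262 bytes (20-byte IP header plus 8-byte ICMP header). A quick sketch to confirm the overlay MTU, run from host1:

ping -M do -s 1262 -c 4 192.168.172.2    # DF bit set; expected to succeed
ping -M do -s 1263 -c 4 192.168.172.2    # one byte too large; expected to fail with 'message too long'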
January 30, 2020
 

Install the Cisco AnyConnect Secure Mobility Client

Prepare to import the user certificate file (including the private key) issued by the self-signed CA

Certificate Import Wizard (store location: Current User)

Confirm the path of the certificate file to import

Enter the password that was set for the certificate file

Use the default certificate store

Confirm the import details and click Finish

Certificate imported successfully

View the imported user certificate in the certificate manager (user certificates)
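For reference, the same PKCS#12 file can be imported from a command prompt instead of the wizard; a hedged sketch using Windows certutil, where <password> stands for the password set when user.p12 was exported:

certutil -user -p <password> -importPFX user.p12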

In the AnyConnect client, enter the server address and click Connect

Connection being established

Connection established; the client reports it is connected to the server

Client in the connected state

January 29, 2020
 

A point-to-point VPN solution based on OpenVPN with pre-shared key encryption

Install the EPEL repository and the net-tools utilities

[root@host1 ~]# yum -y install epel-release.noarch net-tools

[root@host2 ~]# yum -y install epel-release.noarch net-tools

Install the openvpn package

[root@host1 ~]# yum -y install openvpn

[root@host2 ~]# yum -y install openvpn

Configure the firewall: open UDP port 8443 on both hosts as the dedicated tunnel port

[root@host1 ~]# firewall-cmd --permanent --add-port=8443/udp
success
[root@host1 ~]# firewall-cmd --reload
success
[root@host1 ~]#

[root@host2 ~]# firewall-cmd --permanent --add-port=8443/udp
success
[root@host2 ~]# firewall-cmd --reload
success
[root@host2 ~]#

Create the host1 configuration file

[root@host1 ~]# vi /etc/openvpn/host1.conf
proto udp
mode p2p
remote 149.28.93.246
rport 8443
local 0.0.0.0
lport 8443
dev-type tun
tun-ipv6
resolv-retry infinite
dev tun0
comp-lzo
persist-key
persist-tun
cipher aes-256-cbc
ifconfig 172.16.100.1 172.16.100.2
secret /etc/openvpn/p2p.key

Generate the pre-shared key file and copy it to the corresponding directory on host2

[root@host1 ~]# openvpn --genkey --secret /etc/openvpn/p2p.key
[root@host1 ~]# cat /etc/openvpn/p2p.key
#
# 2048 bit OpenVPN static key
#
-----BEGIN OpenVPN Static key V1-----
cb55061878ae55a026f04826c8c49669
efaa6a77d5077b0bff0d27eb7b0611de
849125952cfaea36f556c52b1a5725d2
69a79ec24526c363d636d64b9f9591e1
b64b5b20147d08e419c8e37b72320e52
4be7d1b23b0c76c21f950e611fafa25f
a3811c610be55334b19f801cab1c31f3
f4bc5e5ff213b407b5c8321c0a619358
09e8dfb93561efebeff7f656d2dc7d7a
5c3ad585ccc81755fc711bcf7c702053
3a23335cdc3a2c372a0bdf18fb75cdd2
935ff0fe927e6f77e854cfb1547876d3
bc9df044f2a0cf9c88ba61b2b2731a04
16b1ad259d25f53d583cbcd0ed8a3c66
2c2b0ceb9115351760dfc42e1f2670d6
be49d22101387b08f9b54c0e23c11823
-----END OpenVPN Static key V1-----
[root@host1 ~]#
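Instead of pasting the key by hand (as done on host2 below), the file could be copied over SSH and its permissions tightened; note the OpenVPN warning later in the host2 log about the key file being group/other readable. A sketch, assuming root SSH access to host2 (149.28.93.246):

scp /etc/openvpn/p2p.key root@149.28.93.246:/etc/openvpn/p2p.key
chmod 600 /etc/openvpn/p2p.key           # run on both hosts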

Create the host2 configuration file

[root@host2 ~]# vi /etc/openvpn/hosts2.conf
proto udp
mode p2p
remote 144.202.116.133
rport 8443
local 0.0.0.0
lport 8443
dev-type tun
tun-ipv6
resolv-retry infinite
dev tun0
comp-lzo
persist-key
persist-tun
cipher aes-256-cbc
ifconfig 172.16.100.2 172.16.100.1
secret /etc/openvpn/p2p.key

[root@host2 ~]# vi /etc/openvpn/p2p.key
#
# 2048 bit OpenVPN static key
#
-----BEGIN OpenVPN Static key V1-----
cb55061878ae55a026f04826c8c49669
efaa6a77d5077b0bff0d27eb7b0611de
849125952cfaea36f556c52b1a5725d2
69a79ec24526c363d636d64b9f9591e1
b64b5b20147d08e419c8e37b72320e52
4be7d1b23b0c76c21f950e611fafa25f
a3811c610be55334b19f801cab1c31f3
f4bc5e5ff213b407b5c8321c0a619358
09e8dfb93561efebeff7f656d2dc7d7a
5c3ad585ccc81755fc711bcf7c702053
3a23335cdc3a2c372a0bdf18fb75cdd2
935ff0fe927e6f77e854cfb1547876d3
bc9df044f2a0cf9c88ba61b2b2731a04
16b1ad259d25f53d583cbcd0ed8a3c66
2c2b0ceb9115351760dfc42e1f2670d6
be49d22101387b08f9b54c0e23c11823
-----END OpenVPN Static key V1-----

Start OpenVPN on host1 with the specified configuration file

[root@host1 ~]# nohup openvpn --config /etc/openvpn/host1.conf &
[1] 1913
[root@host1 ~]# nohup: ignoring input and appending output to ‘nohup.out’

[root@host1 ~]# cat nohup.out
Fri Jan 31 03:25:47 2020 Note: option tun-ipv6 is ignored because modern operating systems do not need special IPv6 tun handling anymore.
Fri Jan 31 03:25:47 2020 disabling NCP mode (--ncp-disable) because not in P2MP client or server mode
Fri Jan 31 03:25:47 2020 OpenVPN 2.4.8 x86_64-redhat-linux-gnu [Fedora EPEL patched] [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Nov 1 2019
Fri Jan 31 03:25:47 2020 library versions: OpenSSL 1.0.2k-fips 26 Jan 2017, LZO 2.06
Fri Jan 31 03:25:47 2020 TUN/TAP device tun0 opened
Fri Jan 31 03:25:47 2020 /sbin/ip link set dev tun0 up mtu 1500
Fri Jan 31 03:25:47 2020 /sbin/ip addr add dev tun0 local 172.16.100.1 peer 172.16.100.2
Fri Jan 31 03:25:47 2020 TCP/UDP: Preserving recently used remote address: [AF_INET]149.28.93.246:8443
Fri Jan 31 03:25:47 2020 UDP link local (bound): [AF_INET][undef]:8443
Fri Jan 31 03:25:47 2020 UDP link remote: [AF_INET]149.28.93.246:8443
[root@host1 ~]#

Check the interface and listening-port information on host1
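Roughly equivalent command-line checks on either host (a sketch; netstat comes from the net-tools package installed earlier):

ip addr show tun0                        # 172.16.100.1 on host1, 172.16.100.2 on host2
netstat -lnup | grep 8443                # OpenVPN listening on UDP 8443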

Start OpenVPN on host2 with the specified configuration file

[root@host2 ~]# nohup openvpn --config /etc/openvpn/hosts2.conf &
[1] 1741
[root@host2 ~]# nohup: ignoring input and appending output to ‘nohup.out’

[root@host2 ~]# cat nohup.out
Fri Jan 31 03:28:03 2020 Note: option tun-ipv6 is ignored because modern operating systems do not need special IPv6 tun handling anymore.
Fri Jan 31 03:28:03 2020 disabling NCP mode (--ncp-disable) because not in P2MP client or server mode
Fri Jan 31 03:28:03 2020 WARNING: file '/etc/openvpn/p2p.key' is group or others accessible
Fri Jan 31 03:28:03 2020 OpenVPN 2.4.8 x86_64-redhat-linux-gnu [Fedora EPEL patched] [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on Nov 1 2019
Fri Jan 31 03:28:03 2020 library versions: OpenSSL 1.0.2k-fips 26 Jan 2017, LZO 2.06
Fri Jan 31 03:28:03 2020 TUN/TAP device tun0 opened
Fri Jan 31 03:28:03 2020 /sbin/ip link set dev tun0 up mtu 1500
Fri Jan 31 03:28:03 2020 /sbin/ip addr add dev tun0 local 172.16.100.2 peer 172.16.100.1
Fri Jan 31 03:28:03 2020 TCP/UDP: Preserving recently used remote address: [AF_INET]144.202.116.133:8443
Fri Jan 31 03:28:03 2020 UDP link local (bound): [AF_INET][undef]:8443
Fri Jan 31 03:28:03 2020 UDP link remote: [AF_INET]144.202.116.133:8443
[root@host2 ~]#

Check the interface and listening-port information on host2


Ping the peer tunnel IP address from each host
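A sketch of the connectivity checks (output omitted):

# on host1
ping -c 4 172.16.100.2
# on host2
ping -c 4 172.16.100.1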

January 22, 2020
 

Use self-signed certificates for user authentication, with a Let's Encrypt certificate as the server certificate

Enable kernel packet forwarding

[root@ocserv ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@ocserv ~]# sysctl -p
net.ipv6.conf.all.accept_ra = 2
net.ipv6.conf.eth0.accept_ra = 2
net.ipv4.ip_forward = 1
[root@ocserv ~]#

Open the firewall ports and enable masquerading

[root@ocserv ~]# firewall-cmd --permanent --add-port=443/tcp
success
[root@ocserv ~]# firewall-cmd --permanent --add-port=443/udp
success
[root@ocserv ~]# firewall-cmd --permanent --add-port=80/tcp
success
[root@ocserv ~]# firewall-cmd --permanent --add-port=8080/tcp
success
[root@ocserv ~]# firewall-cmd --permanent --add-masquerade
success
[root@ocserv ~]# firewall-cmd --reload
success
[root@ocserv ~]#

Install the ocserv package and the Let's Encrypt client (certbot)

[root@ocserv ~]# yum -y install ocserv certbot

Paths of the server certificate and key generated by certbot

/etc/letsencrypt/live/ocserv.bcoc.site/fullchain.pem
/etc/letsencrypt/live/ocserv.bcoc.site/privkey.pem
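These paths are produced by certbot. One common way to obtain the certificate, assuming the DNS record for ocserv.bcoc.site already points at this host and port 80 is reachable (it was opened above), is certbot's standalone mode, run while port 80 is free:

certbot certonly --standalone -d ocserv.bcoc.site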

Modify the ocserv server configuration

[root@ocserv ~]# vi /etc/ocserv/ocserv.conf

Change the authentication type to certificate authentication

auth = "certificate"

Update the server certificate settings

server-cert = /etc/letsencrypt/live/ocserv.bcoc.site/fullchain.pem
server-key = /etc/letsencrypt/live/ocserv.bcoc.site/privkey.pem

Change the OID used to identify the user in client certificates

#cert-user-oid = 0.9.2342.19200300.100.1.1
cert-user-oid = 2.5.4.3

Enable compression

compression = true
no-compress-limit = 256

Set the client IPv4 address pool

ipv4-network = 192.168.172.0
ipv4-netmask = 255.255.255.0

Set the DNS servers

dns = 8.8.8.8
dns = 8.8.4.4

Start the service

[root@ocserv ~]# systemctl start ocserv

Install the Apache server to serve the user certificate for download

[root@ocserv ~]# yum -y install httpd

Modify the main Apache configuration file and start the service

[root@ocserv ~]# vi /etc/httpd/conf/httpd.conf

Set the server name

ServerName ocserv.bcoc.site

Change the listening port

#Listen 80
Listen 8080

Check the configuration and start the service

[root@ocserv ~]# apachectl -t
Syntax OK
[root@ocserv ~]# systemctl enable httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@ocserv ~]# systemctl start httpd
[root@ocserv ~]#

Check the listening ports
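A rough command-line equivalent (a sketch):

ss -lntu | grep -E ':443|:8080'          # ocserv on 443 (tcp/udp), httpd on 8080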

Access the server port in a browser to confirm the certificate status

Generate a self-signed CA certificate

[root@ocserv ~]# certtool --generate-privkey --outfile ca-key.pem
Generating a 2048 bit RSA private key...
[root@ocserv ~]# cat << _EOF_ >ca.tmpl
> cn = "BCOC CA"
> organization = "BCOC"
> serial = 1
> expiration_days = -1
> ca
> signing_key
> cert_signing_key
> crl_signing_key
> _EOF_
[root@ocserv ~]# certtool --generate-self-signed --load-privkey ca-key.pem \
> --template ca.tmpl --outfile ca-cert.pem

Generate a self-signed user certificate

[root@ocserv ~]# certtool --generate-privkey --outfile user-key.pem
Generating a 2048 bit RSA private key...
[root@ocserv ~]# cat << _EOF_ >user.tmpl
> cn = "harveymei"
> unit = "standard"
> expiration_days = 365
> signing_key
> tls_www_client
> _EOF_
[root@ocserv ~]# certtool --generate-certificate --load-privkey user-key.pem \
> --load-ca-certificate ca-cert.pem --load-ca-privkey ca-key.pem \
> --template user.tmpl --outfile user-cert.pem

Export to PKCS#12 format and set a password for the bundle (required when importing the certificate)

[root@ocserv ~]# certtool --to-p12 --load-privkey user-key.pem \
> --pkcs-cipher 3des-pkcs12 \
> --load-certificate user-cert.pem \
> --outfile user.p12 --outder
Generating a PKCS #12 structure...
Loading private key list...
Loaded 1 private keys.
Enter a name for the key: harveymei
Enter password:
Confirm password:
[root@ocserv ~]# ls
ca-cert.pem ca.tmpl user-key.pem user.tmpl
ca-key.pem user-cert.pem user.p12
[root@ocserv ~]#

Copy the user certificate to the web server document root so it can be downloaded

[root@ocserv ~]# cp user.p12 /var/www/html/

Overwrite the initial ocserv CA certificate with the self-signed CA certificate

[root@ocserv ~]# cp ca-cert.pem /etc/pki/ocserv/cacerts/ca.crt
cp: overwrite ‘/etc/pki/ocserv/cacerts/ca.crt’? y

Restart the ocserv service after overwriting the CA certificate

[root@ocserv ~]# systemctl restart ocserv

 

Create a new connection on the client and import the user certificate

Client certificate download URL (the password is required when importing the certificate on the client)
http://ocserv.bcoc.site:8080/user.p12

OpenConnect GUI settings on Windows 10

January 21, 2020
 

http://ocserv.gitlab.io/www/manual.html

Generate the CA certificate

$ certtool --generate-privkey --outfile ca-key.pem
$ cat << _EOF_ >ca.tmpl
cn = "VPN CA"
organization = "Big Corp"
serial = 1
expiration_days = -1
ca
signing_key
cert_signing_key
crl_signing_key
_EOF_

$ certtool --generate-self-signed --load-privkey ca-key.pem \
--template ca.tmpl --outfile ca-cert.pem

Generate the server certificate

$ certtool --generate-privkey --outfile server-key.pem
$ cat << _EOF_ >server.tmpl
cn = "VPN server"
dns_name = "www.example.com"
dns_name = "vpn1.example.com"
#ip_address = "1.2.3.4"
organization = "MyCompany"
expiration_days = -1
signing_key
encryption_key #only if the generated key is an RSA one
tls_www_server
_EOF_

$ certtool --generate-certificate --load-privkey server-key.pem \
--load-ca-certificate ca-cert.pem --load-ca-privkey ca-key.pem \
--template server.tmpl --outfile server-cert.pem

Generate the client certificate

$ certtool --generate-privkey --outfile user-key.pem
$ cat << _EOF_ >user.tmpl
cn = "user"
unit = "admins"
expiration_days = 365
signing_key
tls_www_client
_EOF_
$ certtool --generate-certificate --load-privkey user-key.pem \
--load-ca-certificate ca-cert.pem --load-ca-privkey ca-key.pem \
--template user.tmpl --outfile user-cert.pem

$ certtool --to-p12 --load-privkey user-key.pem \
--pkcs-cipher 3des-pkcs12 \
--load-certificate user-cert.pem \
--outfile user.p12 --outder

Revoke a client certificate

$ cat << _EOF_ >crl.tmpl
crl_next_update = 365
crl_number = 1
_EOF_
$ cat user-cert.pem >>revoked.pem
$ certtool --generate-crl --load-ca-privkey ca-key.pem \
--load-ca-certificate ca-cert.pem --load-certificate revoked.pem \
--template crl.tmpl --outfile crl.pem

Generate an empty certificate revocation list

$ certtool --generate-crl --load-ca-privkey ca-key.pem \
--load-ca-certificate ca-cert.pem \
--template crl.tmpl --outfile crl.pem
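To have ocserv enforce the generated revocation list, it is typically referenced from ocserv.conf and the service restarted; a hedged sketch (confirm the exact option name against the ocserv.conf shipped with your version):

# in /etc/ocserv/ocserv.conf
crl = /etc/ocserv/crl.pem
# then restart the service
systemctl restart ocserv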
January 21, 2020
 

Confirm the firewall status

[root@ocserv ~]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: dhcpv6-client ssh
ports:
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:

[root@ocserv ~]#

Enable kernel packet forwarding

[root@ocserv ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
[root@ocserv ~]# sysctl -p
net.ipv6.conf.all.accept_ra = 2
net.ipv6.conf.eth0.accept_ra = 2
net.ipv4.ip_forward = 1
[root@ocserv ~]#

Open the firewall ports and enable masquerading

[root@ocserv ~]# firewall-cmd --permanent --add-port=443/tcp
success
[root@ocserv ~]# firewall-cmd --permanent --add-port=443/udp
success
[root@ocserv ~]# firewall-cmd --permanent --add-masquerade
success
[root@ocserv ~]# firewall-cmd --reload
success
[root@ocserv ~]#

Check the firewall status

[root@ocserv ~]# firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eth0
sources:
services: dhcpv6-client ssh
ports: 443/tcp 443/udp
protocols:
masquerade: yes
forward-ports:
source-ports:
icmp-blocks:
rich rules:

[root@ocserv ~]#

Install the EPEL repository and net-tools, then refresh the cache

[root@ocserv ~]# yum -y install epel-release.noarch net-tools
[root@ocserv ~]# yum makecache

Install the ocserv package and its dependencies

[root@ocserv ~]# yum install -y ocserv

Files and directories installed by the ocserv package

[root@ocserv ~]# rpm -lq ocserv
/etc/ocserv
/etc/ocserv/ocserv.conf
/etc/pam.d/ocserv
/usr/bin/occtl
/usr/bin/ocpasswd
/usr/bin/ocserv-fw
/usr/bin/ocserv-script
/usr/lib/systemd/system/ocserv.service
/usr/sbin/ocserv
/usr/sbin/ocserv-genkey
/usr/share/doc/ocserv-0.12.6
/usr/share/doc/ocserv-0.12.6/AUTHORS
/usr/share/doc/ocserv-0.12.6/BSD-MIT
/usr/share/doc/ocserv-0.12.6/CC0
/usr/share/doc/ocserv-0.12.6/COPYING
/usr/share/doc/ocserv-0.12.6/ChangeLog
/usr/share/doc/ocserv-0.12.6/LGPL-2.1
/usr/share/doc/ocserv-0.12.6/LICENSE
/usr/share/doc/ocserv-0.12.6/NEWS
/usr/share/doc/ocserv-0.12.6/PACKAGE-LICENSING
/usr/share/doc/ocserv-0.12.6/README.md
/usr/share/doc/ocserv-0.12.6/TODO
/usr/share/man/man8/occtl.8.gz
/usr/share/man/man8/ocpasswd.8.gz
/usr/share/man/man8/ocserv.8.gz
/var/lib/ocserv
/var/lib/ocserv/profile.xml
[root@ocserv ~]#

View the default configuration file (excluding commented lines)

[root@ocserv ~]# cat /etc/ocserv/ocserv.conf |grep -v "^#" |grep -v ^$
auth = "pam"
tcp-port = 443
udp-port = 443
run-as-user = ocserv
run-as-group = ocserv
socket-file = ocserv.sock
chroot-dir = /var/lib/ocserv
isolate-workers = true
max-clients = 16
max-same-clients = 2
keepalive = 32400
dpd = 90
mobile-dpd = 1800
switch-to-tcp-timeout = 25
try-mtu-discovery = false
server-cert = /etc/pki/ocserv/public/server.crt
server-key = /etc/pki/ocserv/private/server.key
ca-cert = /etc/pki/ocserv/cacerts/ca.crt
cert-user-oid = 0.9.2342.19200300.100.1.1
tls-priorities = "NORMAL:%SERVER_PRECEDENCE:%COMPAT:-VERS-SSL3.0"
auth-timeout = 240
min-reauth-time = 300
max-ban-score = 50
ban-reset-time = 300
cookie-timeout = 300
deny-roaming = false
rekey-time = 172800
rekey-method = ssl
use-occtl = true
pid-file = /var/run/ocserv.pid
device = vpns
predictable-ips = true
default-domain = example.com
ping-leases = false
cisco-client-compat = true
dtls-legacy = true
user-profile = profile.xml
[root@ocserv ~]#

Modify the configuration file

[root@ocserv ~]# vi /etc/ocserv/ocserv.conf

Change the authentication method

#auth = "pam"
auth = "plain[passwd=/etc/ocserv/ocpasswd]"

Enable compression

# Uncomment this to enable compression negotiation (LZS, LZ4).
compression = true

Specify the client network configuration

#ipv4-network = 192.168.1.0
ipv4-network = 172.16.192.0
#ipv4-netmask = 255.255.255.0
ipv4-netmask = 255.255.255.0

Specify the client DNS configuration

#dns = 192.168.1.2
dns = 8.8.8.8
dns = 8.8.4.4

View the modified configuration file

[root@ocserv ~]# cat /etc/ocserv/ocserv.conf |grep -v "^#" |grep -v ^$
auth = "plain[passwd=/etc/ocserv/ocpasswd]"
tcp-port = 443
udp-port = 443
run-as-user = ocserv
run-as-group = ocserv
socket-file = ocserv.sock
chroot-dir = /var/lib/ocserv
isolate-workers = true
max-clients = 16
max-same-clients = 2
keepalive = 32400
dpd = 90
mobile-dpd = 1800
switch-to-tcp-timeout = 25
try-mtu-discovery = false
server-cert = /etc/pki/ocserv/public/server.crt
server-key = /etc/pki/ocserv/private/server.key
ca-cert = /etc/pki/ocserv/cacerts/ca.crt
cert-user-oid = 0.9.2342.19200300.100.1.1
compression = true
tls-priorities = "NORMAL:%SERVER_PRECEDENCE:%COMPAT:-VERS-SSL3.0"
auth-timeout = 240
min-reauth-time = 300
max-ban-score = 50
ban-reset-time = 300
cookie-timeout = 300
deny-roaming = false
rekey-time = 172800
rekey-method = ssl
use-occtl = true
pid-file = /var/run/ocserv.pid
device = vpns
predictable-ips = true
default-domain = example.com
ipv4-network = 172.16.192.0
ipv4-netmask = 255.255.255.0
dns = 8.8.8.8
dns = 8.8.4.4
ping-leases = false
cisco-client-compat = true
dtls-legacy = true
user-profile = profile.xml
[root@ocserv ~]#

Enable and start the service

[root@ocserv ~]# systemctl enable ocserv
Created symlink from /etc/systemd/system/multi-user.target.wants/ocserv.service to /usr/lib/systemd/system/ocserv.service.
[root@ocserv ~]# systemctl start ocserv
[root@ocserv ~]#

Check the listening ports

[root@ocserv ~]# netstat -lntu
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp6 0 0 :::443 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
udp 0 0 127.0.0.1:323 0.0.0.0:*
udp 0 0 0.0.0.0:443 0.0.0.0:*
udp 0 0 0.0.0.0:68 0.0.0.0:*
udp6 0 0 ::1:323 :::*
udp6 0 0 :::443 :::*
[root@ocserv ~]#

Create the plain-text account file and add a new user with a password

[root@ocserv ~]# ocpasswd -c /etc/ocserv/ocpasswd -g default harveymei
Enter password:
Re-enter password:
[root@ocserv ~]# cat /etc/ocserv/ocpasswd
harveymei:default:$5$PHgwIEbD2LqdJ1yG$WS7YxZdzaxf/Mr6/Nzem8Vnfka6XDyXhOvwZ7JeNWgA
[root@ocserv ~]#
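The same ocpasswd tool also manages existing entries in this file; a hedged sketch of the common operations (flags as documented in the ocpasswd man page):

ocpasswd -c /etc/ocserv/ocpasswd -l harveymei    # lock the account
ocpasswd -c /etc/ocserv/ocpasswd -u harveymei    # unlock it
ocpasswd -c /etc/ocserv/ocpasswd -d harveymei    # delete it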

Access https://66.42.98.17 in a browser to confirm the service is available

Configure the Cisco AnyConnect client on an iPhone and connect

January 20, 2020
 

Installation and configuration commands for CentOS 7

sudo tee /etc/yum.repos.d/mongodb-org-4.0.repo << EOF
[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/7/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc
EOF

sudo tee /etc/yum.repos.d/pritunl.repo << EOF
[pritunl]
name=Pritunl Repository
baseurl=https://repo.pritunl.com/stable/yum/centos/7/
gpgcheck=1
enabled=1
EOF

sudo rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 7568D9BB55FF9E5287D586017AE645C0CF8E292A
gpg --armor --export 7568D9BB55FF9E5287D586017AE645C0CF8E292A > key.tmp; sudo rpm --import key.tmp; rm -f key.tmp
sudo yum -y install pritunl mongodb-org
sudo systemctl start mongod pritunl
sudo systemctl enable mongod pritunl

Adjust the open-file (nofile) limits

sudo sh -c 'echo "* hard nofile 64000" >> /etc/security/limits.conf'
sudo sh -c 'echo "* soft nofile 64000" >> /etc/security/limits.conf'
sudo sh -c 'echo "root hard nofile 64000" >> /etc/security/limits.conf'
sudo sh -c 'echo "root soft nofile 64000" >> /etc/security/limits.conf'

host1 private network IP configuration

# Private network: net5e1571b604398
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.25.96.3
NETMASK=255.255.240.0
MTU=1450

host2 private network IP configuration

# Private network: net5e1571b604398
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.25.96.4
NETMASK=255.255.240.0
MTU=1450

Check private network connectivity
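A sketch of the check between the two private addresses (output omitted):

# on host1
ping -c 4 10.25.96.4
# on host2
ping -c 4 10.25.96.3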


Generate the setup key

[root@host1 ~]# pritunl setup-key
3c1cd1325ff34ae8ab1b1443c6706efb
[root@host1 ~]#

Access the web console in a browser and enter the setup key

Generate the initial login credentials as prompted

Retrieve the initial login credentials

[root@host1 ~]# sudo pritunl default-password
[undefined][2020-01-20 02:08:04,859][INFO] Getting default administrator password
Administrator default password:
username: "pritunl"
password: "DB2aRfaKxLmt"
[root@host1 ~]#

Change the console username and password; confirm the server address, console port, and certificate (Let's Encrypt) settings

Settings saved successfully

Add a server

Set the server name; confirm the listening port, DNS servers, and internal network range

Server added successfully; delete the default 0.0.0.0/0 route

Confirm deletion of the route entry

Add a route; this entry (10.25.96.0/20) is pushed from the server to clients, so clients do not need to configure it manually

Specify the route entry details

Route entry added successfully

Add an organization

Set the organization name

Add a user to the organization

User details

Attach the server to the organization

Confirm the attachment details

Server successfully attached to the organization

Start the server

Console status after the server starts

Download the client configuration profile

Unzip and inspect the client configuration profile



Import the client profile with the Pritunl client

Import the profile

Client interface after importing the profile

Click Connect; the client successfully obtains an internal IP address from the VPN server

Check the local IPv4 routing table; a route to the 10.25.96.0/20 network has been added

Verification: from the local machine, reach host2 (internal IP 10.25.96.4) over the Remote Access VPN established through host1 (ping)

C:\Users\harveymei>ping 10.25.96.4

Pinging 10.25.96.4 with 32 bytes of data:
Reply from 10.25.96.4: bytes=32 time=169ms TTL=63
Reply from 10.25.96.4: bytes=32 time=174ms TTL=63
Reply from 10.25.96.4: bytes=32 time=176ms TTL=63
Reply from 10.25.96.4: bytes=32 time=165ms TTL=63

Ping statistics for 10.25.96.4:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 165ms, Maximum = 176ms, Average = 171ms

C:\Users\harveymei>
January 8, 2020
 

Scenario: traveling employees remotely accessing the company's internal network

Remote Access / PC to Site

host1 public IP address

45.32.55.126

host1 private network IP configuration

/etc/sysconfig/network-scripts/ifcfg-eth1
# Private network: net5e1571b604398
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.25.96.3
NETMASK=255.255.240.0
MTU=1450

host2 private network IP configuration

/etc/sysconfig/network-scripts/ifcfg-eth1
# Private network: net5e1571b604398
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.25.96.4
NETMASK=255.255.240.0
MTU=1450

Check private network connectivity
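A sketch of the check, using the private addresses configured above (output omitted):

# from host1 (10.25.96.3)
ping -c 4 10.25.96.4
# from host2 (10.25.96.4)
ping -c 4 10.25.96.3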