Re: PF Outbound traffic Load Balancing over multiple tun/openvpn interfaces/tunnels
Hi. Sorry for the extremely slow reply!

Did you add the return routes for your internal subnets into each of the per-tun rdomains?

To test that your tunnels are set up correctly: once you have the external interface in rdomain 0 and each VPN instance's tun interface bound to a different rdomain, you can check that a tunnel works within its rdomain with "ping -V1 1.1.1.1" (which originates the ping within rdomain 1, for example). If the ping works but packets get lost when routed through the interface pair (https://man.openbsd.org/pair), then check the routing table in rdomain 1 with "route -T1 show". Your tunnel will be the default gateway within that rdomain, but you will still need routes in the rdomain to get the return packets back to your internal networks. For this, in my /etc/hostname.pair1 (the pair interface that sits in rdomain 1) I add the line "!/sbin/route -T1 add 172.16.0.0/12 192.168.251.2" (where 192.168.251.2 is the IP of the peer pair interface that sits in my internal rdomain).

On Wed, May 8, 2019 at 12:09 AM mike42 wrote:
> Trying to replicate the same setup, with pairs and different rdomains for
> each tun and also the external interface, but after a packet goes through
> the pair interfaces it just disappears.
>
> Any ideas?
>
> Routing in the rdomain is set like:
>
> route -T add default tun
> route -T add
>
> --
> Sent from: http://openbsd-archive.7691.n7.nabble.com/openbsd-user-misc-f3.html
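As a concrete sketch of that layout (the addresses and the 172.16.0.0/12 return route are from the example above; the pair11 interface name and the /30 mask are assumptions):

```shell
# /etc/hostname.pair1 -- pair member inside VPN rdomain 1
rdomain 1
inet 192.168.251.1 255.255.255.252
patch pair11
up
# return route: reach the internal nets via the peer pair member in rdomain 0
!/sbin/route -T1 add 172.16.0.0/12 192.168.251.2

# /etc/hostname.pair11 -- peer pair member in the internal rdomain (0)
inet 192.168.251.2 255.255.255.252
up
```

With something like this in place, "route -T1 show" should list the 172.16.0.0/12 route, and replies to traffic injected into rdomain 1 can find their way back to the internal networks.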
Re: PF Outbound traffic Load Balancing over multiple tun/openvpn interfaces/tunnels
Trying to replicate the same setup, with pairs and different rdomains for each tun and also the external interface, but after a packet goes through the pair interfaces it just disappears.

Any ideas?

Routing in the rdomain is set like:

route -T add default tun
route -T add

--
Sent from: http://openbsd-archive.7691.n7.nabble.com/openbsd-user-misc-f3.html
Re: PF Outbound traffic Load Balancing over multiple tun/openvpn interfaces/tunnels
Hi,

So for completeness, I did some more testing with your suggestions.

First I tried using different next-hops in each of the interface/next-hop pairs in the route-to pool (as the next hop doesn't really matter with p2p interfaces). And it did start to work! :) But after some more testing it still wasn't behaving very well, and was nearly always using the first tunX defined (I didn't do much more testing). If I ran a bunch of downloads to load-test it, they all ended up on tun1.

I had never heard of the 'pair' interfaces before, but as soon as I understood what they were I realised what a great idea they are! :) So I created multiple pairs of 'pair' interfaces: one pair member in the default rdomain along with the internal interface, and the other pair member in the relevant VPN rdomain along with the tun interface, i.e. creating an rdomain tunnel between rdomain 0 and each VPN rdomain. This allowed me to configure unique, non-overlapping subnets on each of the 'pair'-based p2p tunnels.

That of course simplified the 'route-to' statement to just a list of different next-hops: one for each of the 'pair' interfaces in the default rdomain, for each VPN (without needing to name the interfaces, as they are all in rdomain 0).

So without requiring PF to do any rdomain jumping/tunnelling (leaving rdomain tunnelling to the 'pair' interfaces), VPN load balancing is now working really very well. I can now utilise all the CPU cores on my router where I couldn't before :) I have four real cores, so I'm running four OpenVPN tunnels :D

This appears to confirm that 'route-to' only works well for rdomain tunnelling when routing towards a single rdomain per pf.conf line. I guess Henning would need to pass his wisdom on this and decide if it's a bug or just not supported yet.
Thanks guys :)

> On 28 Nov 2018, at 06:04, Tom Smyth wrote:
>
> Sorry, the "here" I was referring to earlier was "here" as shown below:
> https://lab.rickauer.com/post/2017/07/16/OpenBSD-rtables-and-rdomains
>
>> Howdy...
>> Starting OpenVPN in different rdomains works pretty well for us.
>>
>> A crude way of doing that is to add the following line to the
>> bottom of your tun interface file
>> (starting OpenVPN in rdomain 2):
>>
>> !/sbin/route -T 2 exec /usr/local/sbin/openvpn --config
>> /etc/openvpn2.conf & /usr/bin/false
>>
>> We were using the L2 tunnels (not L3), but this worked pretty well for us.
>>
>> I think you can use rcctl and set rtable also, as described very well
>> here.
>>
>> Using symbolic links in /etc/rc.d/ you can create multiple openvpn
>> services, each with their own settings.
>> I hope this helps.
>>
>>> On Wed, 28 Nov 2018 at 02:59, Philip Higgins wrote:
>>>
>>> At a guess, route-to is confused by the same IP, but I haven't looked at
>>> the internals.
>>>
>>> Maybe try adding pair interfaces (with different addresses) to each
>>> rdomain, and you can use route-to to select between them.
>>> You already have a default route set in each rdomain, so it will find
>>> its way from there.
>>>
>>> e.g.
>>>
>>> # /etc/hostname.pair1
>>> group pinternal
>>> rdomain 1
>>> inet 10.255.1.1 255.255.255.0
>>> !/sbin/route -T1 add 10.255.1.2
>>>
>>> # /etc/hostname.pair11
>>> group pinternal
>>> inet 10.255.1.2 255.255.255.0
>>> patch pair1
>>>
>>> # /etc/hostname.pair2
>>> group pinternal
>>> rdomain 2
>>> inet 10.255.2.1 255.255.255.0
>>> !/sbin/route -T2 add 10.255.2.2
>>>
>>> # /etc/hostname.pair12
>>> group pinternal
>>> inet 10.255.2.2 255.255.255.0
>>> patch pair2
>>>
>>> # /etc/pf.conf
>>> ...
>>> pass on pinternal
>>> ...
>>> pass in quick on { $if_int } to any route-to { 10.255.1.1, 10.255.2.1 } \
>>>     round-robin set prio (3,6)
>>>
>>> Have not tested exactly this, but similar to my current setup.
>>> Might not need the static routes, if the right pf magic is happening.
>>>
>>> -Phil
>>>
>>>> On 28/11/18 8:18 am, Andrew Lemin wrote:
>>>>
>>>> Hi,
>>>>
>>>> So using the information Stuart and Andreas provided, I have been
>>>> testing this (load balancing across multiple VPN servers to improve
>>>> bandwidth). And I have multiple VPNs working properly within their own
>>>> rdomains.
>>>>
>>>> * However 'route-to' is not load balancing with rdomains :(
>>>>
>>>> I have not been able to use the simpler solution you highlighted,
>>>> Stuart (basic multipath routing), as the tunnel subnets overlap.
>>>> So I think this is a potential bug, but I need your wisdom to verify my
>>>> working first :)
>>>>
>>>> Re: Load balancing SSL VPNs using OpenBSD 6.4, with VPN tunX interfaces
>>>> in unique rdomains (overlapping tunnel subnets)
>>>>
>>>> Configure sysctls
>>>> # Ensure '/etc/sysctl.conf' contains;
>>>> net.inet.ip.forwarding=1  # Permit forwarding (routing) of packets
>>>> net.inet.ip.multipath=1   # 1=Enable IP multipath routing
>>>>
>>>> # Activate sysctls now, without a reboot
>>>> sysctl net.inet.ip.forwarding=1
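Putting the pieces of the thread together, the pf.conf side of the final working design reduces to something like the following (a sketch only: the 10.255.x.1 next-hop addresses follow Phil's earlier pair addressing scheme, and $if_int and the four-tunnel count are assumptions):

```shell
# All next-hops are the far ends of 'pair' interfaces; each pair is patched
# into one VPN rdomain, so PF itself never has to jump rdomains and the
# route-to pool is just a flat list of addresses reachable from rdomain 0.
pass in quick on $if_int to any \
    route-to { 10.255.1.1, 10.255.2.1, 10.255.3.1, 10.255.4.1 } \
    round-robin set prio (3,6)
```

Because each address lives on a distinct, non-overlapping pair subnet, round-robin can distribute states across all four tunnels instead of collapsing onto the first one.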
Re: PF Outbound traffic Load Balancing over multiple tun/openvpn interfaces/tunnels
Sorry, the "here" I was referring to earlier was "here" as shown below:
https://lab.rickauer.com/post/2017/07/16/OpenBSD-rtables-and-rdomains

> Howdy...
> Starting OpenVPN in different rdomains works pretty well for us.
>
> A crude way of doing that is to add the following line to the
> bottom of your tun interface file
> (starting OpenVPN in rdomain 2):
>
> !/sbin/route -T 2 exec /usr/local/sbin/openvpn --config
> /etc/openvpn2.conf & /usr/bin/false
>
> We were using the L2 tunnels (not L3), but this worked pretty well for us.
>
> I think you can use rcctl and set rtable also, as described very well
> here.
>
> Using symbolic links in /etc/rc.d/ you can create multiple openvpn
> services, each with their own settings.
> I hope this helps.
>
> > On Wed, 28 Nov 2018 at 02:59, Philip Higgins wrote:
> >
> > At a guess, route-to is confused by the same IP, but I haven't looked at
> > the internals.
> >
> > Maybe try adding pair interfaces (with different addresses) to each
> > rdomain, and you can use route-to to select between them.
> > You already have a default route set in each rdomain, so it will find
> > its way from there.
> >
> > e.g.
> >
> > # /etc/hostname.pair1
> > group pinternal
> > rdomain 1
> > inet 10.255.1.1 255.255.255.0
> > !/sbin/route -T1 add 10.255.1.2
> >
> > # /etc/hostname.pair11
> > group pinternal
> > inet 10.255.1.2 255.255.255.0
> > patch pair1
> >
> > # /etc/hostname.pair2
> > group pinternal
> > rdomain 2
> > inet 10.255.2.1 255.255.255.0
> > !/sbin/route -T2 add 10.255.2.2
> >
> > # /etc/hostname.pair12
> > group pinternal
> > inet 10.255.2.2 255.255.255.0
> > patch pair2
> >
> > # /etc/pf.conf
> > ...
> > pass on pinternal
> > ...
> > pass in quick on { $if_int } to any route-to { 10.255.1.1, 10.255.2.1 } \
> >     round-robin set prio (3,6)
> >
> > Have not tested exactly this, but similar to my current setup.
> > Might not need the static routes, if the right pf magic is happening.
> > -Phil
> >
> > > On 28/11/18 8:18 am, Andrew Lemin wrote:
> > >
> > > Hi,
> > >
> > > So using the information Stuart and Andreas provided, I have been
> > > testing this (load balancing across multiple VPN servers to improve
> > > bandwidth). And I have multiple VPNs working properly within their own
> > > rdomains.
> > >
> > > * However 'route-to' is not load balancing with rdomains :(
> > >
> > > I have not been able to use the simpler solution you highlighted,
> > > Stuart (basic multipath routing), as the tunnel subnets overlap.
> > > So I think this is a potential bug, but I need your wisdom to verify my
> > > working first :)
> > >
> > > Re: Load balancing SSL VPNs using OpenBSD 6.4, with VPN tunX interfaces
> > > in unique rdomains (overlapping tunnel subnets)
> > >
> > > Configure sysctls
> > > # Ensure '/etc/sysctl.conf' contains;
> > > net.inet.ip.forwarding=1  # Permit forwarding (routing) of packets
> > > net.inet.ip.multipath=1   # 1=Enable IP multipath routing
> > >
> > > # Activate sysctls now, without a reboot
> > > sysctl net.inet.ip.forwarding=1
> > > sysctl net.inet.ip.multipath=1
> > >
> > > Pre-create tunX interfaces (in their respective rdomains)
> > > # Ensure '/etc/hostname.tun1' contains;
> > > up
> > > rdomain 1
> > >
> > > # Ensure '/etc/hostname.tun2' contains;
> > > up
> > > rdomain 2
> > >
> > > # Bring up the new tunX interfaces
> > > sh /etc/netstart
> > >
> > > fw1# ifconfig tun1
> > > tun1: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 1 mtu 1500
> > >         index 8 priority 0 llprio 3
> > >         groups: tun
> > >         status: down
> > > fw1# ifconfig tun2
> > > tun2: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 2 mtu 1500
> > >         index 9 priority 0 llprio 3
> > >         groups: tun
> > >         status: down
> > >
> > > # Start all SSL VPN tunnels (in unique VRF/rdomains)
> > > /usr/local/sbin/openvpn --config ./ch70.nordvpn.com.udp.ovpn --writepid
> > > /var/run/openvpn.tun1.pid --dev tun1 &
> > > /usr/local/sbin/openvpn --config ./ch71.nordvpn.com.udp.ovpn --writepid
> > > /var/run/openvpn.tun2.pid --dev tun2 &
> > > ('auth-user-pass' updated in config files)
> > >
> > > Each openvpn tunnel should start using 'rtable 0' for the VPN's outer
> > > connection itself, but with each virtual tunnel tunX interface being
> > > placed into a unique routing domain.
> > >
> > > This results in the following tunX interface and rtable updates;
> > > fw1# ifconfig tun1
> > > tun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 1 mtu 1500
> > >         index 6 priority 0 llprio 3
> > >         groups: tun
> > >         status: active
> > >         inet 10.8.8.128 --> 10.8.8.1 netmask 0xffffff00
> > > fw1# ifconfig tun2
> > > tun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 2 mtu 1500
> > >         index 7 priority 0 llprio 3
> > >         groups: tun
> > >         status: active
> > >         inet 10.8.8.129 --> 10.8.8.1 netmask 0xffffff00
> > > fw1# route -T 1 show
> > > Routing tables
> > > Internet:
> > > Destination        Gateway            Flags   Refs      Use   Mtu  Prio  Iface
Re: PF Outbound traffic Load Balancing over multiple tun/openvpn interfaces/tunnels
Howdy...

Starting OpenVPN in different rdomains works pretty well for us.

A crude way of doing that is to add the following line to the bottom of your tun interface file (starting OpenVPN in rdomain 2):

!/sbin/route -T 2 exec /usr/local/sbin/openvpn --config /etc/openvpn2.conf & /usr/bin/false

We were using the L2 tunnels (not L3), but this worked pretty well for us.

I think you can use rcctl and set rtable also, as described very well here.

Using symbolic links in /etc/rc.d/ you can create multiple openvpn services, each with their own settings.
I hope this helps.

On Wed, 28 Nov 2018 at 02:59, Philip Higgins wrote:
>
> At a guess, route-to is confused by the same IP, but I haven't looked at
> the internals.
>
> Maybe try adding pair interfaces (with different addresses) to each
> rdomain, and you can use route-to to select between them.
> You already have a default route set in each rdomain, so it will find its
> way from there.
>
> e.g.
>
> # /etc/hostname.pair1
> group pinternal
> rdomain 1
> inet 10.255.1.1 255.255.255.0
> !/sbin/route -T1 add 10.255.1.2
>
> # /etc/hostname.pair11
> group pinternal
> inet 10.255.1.2 255.255.255.0
> patch pair1
>
> # /etc/hostname.pair2
> group pinternal
> rdomain 2
> inet 10.255.2.1 255.255.255.0
> !/sbin/route -T2 add 10.255.2.2
>
> # /etc/hostname.pair12
> group pinternal
> inet 10.255.2.2 255.255.255.0
> patch pair2
>
> # /etc/pf.conf
> ...
> pass on pinternal
> ...
> pass in quick on { $if_int } to any route-to { 10.255.1.1, 10.255.2.1 } \
>     round-robin set prio (3,6)
>
> Have not tested exactly this, but similar to my current setup.
> Might not need the static routes, if the right pf magic is happening.
>
> -Phil
>
> > On 28/11/18 8:18 am, Andrew Lemin wrote:
> >
> > Hi,
> >
> > So using the information Stuart and Andreas provided, I have been testing
> > this (load balancing across multiple VPN servers to improve bandwidth).
> > And I have multiple VPNs working properly within their own rdomains.
> >
> > * However 'route-to' is not load balancing with rdomains :(
> >
> > I have not been able to use the simpler solution you highlighted, Stuart
> > (basic multipath routing), as the tunnel subnets overlap.
> > So I think this is a potential bug, but I need your wisdom to verify my
> > working first :)
> >
> > Re: Load balancing SSL VPNs using OpenBSD 6.4, with VPN tunX interfaces
> > in unique rdomains (overlapping tunnel subnets)
> >
> > Configure sysctls
> > # Ensure '/etc/sysctl.conf' contains;
> > net.inet.ip.forwarding=1  # Permit forwarding (routing) of packets
> > net.inet.ip.multipath=1   # 1=Enable IP multipath routing
> >
> > # Activate sysctls now, without a reboot
> > sysctl net.inet.ip.forwarding=1
> > sysctl net.inet.ip.multipath=1
> >
> > Pre-create tunX interfaces (in their respective rdomains)
> > # Ensure '/etc/hostname.tun1' contains;
> > up
> > rdomain 1
> >
> > # Ensure '/etc/hostname.tun2' contains;
> > up
> > rdomain 2
> >
> > # Bring up the new tunX interfaces
> > sh /etc/netstart
> >
> > fw1# ifconfig tun1
> > tun1: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 1 mtu 1500
> >         index 8 priority 0 llprio 3
> >         groups: tun
> >         status: down
> > fw1# ifconfig tun2
> > tun2: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 2 mtu 1500
> >         index 9 priority 0 llprio 3
> >         groups: tun
> >         status: down
> >
> > # Start all SSL VPN tunnels (in unique VRF/rdomains)
> > /usr/local/sbin/openvpn --config ./ch70.nordvpn.com.udp.ovpn --writepid
> > /var/run/openvpn.tun1.pid --dev tun1 &
> > /usr/local/sbin/openvpn --config ./ch71.nordvpn.com.udp.ovpn --writepid
> > /var/run/openvpn.tun2.pid --dev tun2 &
> > ('auth-user-pass' updated in config files)
> >
> > Each openvpn tunnel should start using 'rtable 0' for the VPN's outer
> > connection itself, but with each virtual tunnel tunX interface being
> > placed into a unique routing domain.
> >
> > This results in the following tunX interface and rtable updates;
> > fw1# ifconfig tun1
> > tun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 1 mtu 1500
> >         index 6 priority 0 llprio 3
> >         groups: tun
> >         status: active
> >         inet 10.8.8.128 --> 10.8.8.1 netmask 0xffffff00
> > fw1# ifconfig tun2
> > tun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 2 mtu 1500
> >         index 7 priority 0 llprio 3
> >         groups: tun
> >         status: active
> >         inet 10.8.8.129 --> 10.8.8.1 netmask 0xffffff00
> > fw1# route -T 1 show
> > Routing tables
> > Internet:
> > Destination        Gateway            Flags   Refs      Use   Mtu  Prio  Iface
> > 10.8.8.1           10.8.8.128         UH         0        0     -     8  tun1
> > 10.8.8.128         10.8.8.128         UHl        0        0     -     1  tun1
> > localhost          localhost          UHl        0        0  32768     1  lo1
> > fw1# route -T 2 show
> > Routing tables
> > Internet:
> > Destination        Gateway            Flags   Refs      Use   Mtu  Prio  Iface
> > 10.8.8.1           10.8.8.129         UH         0        0     -     8  tun2
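Tom's rcctl/rc.d suggestion quoted above might look roughly like this (a sketch; the openvpn_1 service name and config path are assumptions, and the OpenVPN package may not ship an rc.d script, in which case you would write one first):

```shell
# Create a second openvpn service as a symlink to the existing rc.d script
ln -s /etc/rc.d/openvpn /etc/rc.d/openvpn_1

# Run that instance with its routing socket in rtable 1,
# pointing it at its own config file
rcctl enable openvpn_1
rcctl set openvpn_1 rtable 1
rcctl set openvpn_1 flags "--config /etc/openvpn/vpn1.conf --daemon"
rcctl start openvpn_1
```

Repeating this per tunnel (openvpn_2 in rtable 2, and so on) gives one supervised service per VPN instead of backgrounded shell commands in hostname.if files.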
Re: PF Outbound traffic Load Balancing over multiple tun/openvpn interfaces/tunnels
At a guess, route-to is confused by the same IP, but I haven't looked at the internals.

Maybe try adding pair interfaces (with different addresses) to each rdomain, and you can use route-to to select between them. You already have a default route set in each rdomain, so it will find its way from there.

e.g.

# /etc/hostname.pair1
group pinternal
rdomain 1
inet 10.255.1.1 255.255.255.0
!/sbin/route -T1 add 10.255.1.2

# /etc/hostname.pair11
group pinternal
inet 10.255.1.2 255.255.255.0
patch pair1

# /etc/hostname.pair2
group pinternal
rdomain 2
inet 10.255.2.1 255.255.255.0
!/sbin/route -T2 add 10.255.2.2

# /etc/hostname.pair12
group pinternal
inet 10.255.2.2 255.255.255.0
patch pair2

# /etc/pf.conf
...
pass on pinternal
...
pass in quick on { $if_int } to any route-to { 10.255.1.1, 10.255.2.1 } \
    round-robin set prio (3,6)

Have not tested exactly this, but similar to my current setup. Might not need the static routes, if the right pf magic is happening.

-Phil

On 28/11/18 8:18 am, Andrew Lemin wrote:
> Hi,
>
> So using the information Stuart and Andreas provided, I have been testing
> this (load balancing across multiple VPN servers to improve bandwidth).
> And I have multiple VPNs working properly within their own rdomains.
>
> * However 'route-to' is not load balancing with rdomains :(
>
> I have not been able to use the simpler solution you highlighted, Stuart
> (basic multipath routing), as the tunnel subnets overlap.
> So I think this is a potential bug, but I need your wisdom to verify my
> working first :)
>
> Re: Load balancing SSL VPNs using OpenBSD 6.4, with VPN tunX interfaces in
> unique rdomains (overlapping tunnel subnets)
>
> Configure sysctls
> # Ensure '/etc/sysctl.conf' contains;
> net.inet.ip.forwarding=1  # Permit forwarding (routing) of packets
> net.inet.ip.multipath=1   # 1=Enable IP multipath routing
>
> # Activate sysctls now, without a reboot
> sysctl net.inet.ip.forwarding=1
> sysctl net.inet.ip.multipath=1
>
> Pre-create tunX interfaces (in their respective rdomains)
> # Ensure '/etc/hostname.tun1' contains;
> up
> rdomain 1
>
> # Ensure '/etc/hostname.tun2' contains;
> up
> rdomain 2
>
> # Bring up the new tunX interfaces
> sh /etc/netstart
>
> fw1# ifconfig tun1
> tun1: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 1 mtu 1500
>         index 8 priority 0 llprio 3
>         groups: tun
>         status: down
> fw1# ifconfig tun2
> tun2: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 2 mtu 1500
>         index 9 priority 0 llprio 3
>         groups: tun
>         status: down
>
> # Start all SSL VPN tunnels (in unique VRF/rdomains)
> /usr/local/sbin/openvpn --config ./ch70.nordvpn.com.udp.ovpn --writepid
> /var/run/openvpn.tun1.pid --dev tun1 &
> /usr/local/sbin/openvpn --config ./ch71.nordvpn.com.udp.ovpn --writepid
> /var/run/openvpn.tun2.pid --dev tun2 &
> ('auth-user-pass' updated in config files)
>
> Each openvpn tunnel should start using 'rtable 0' for the VPN's outer
> connection itself, but with each virtual tunnel tunX interface being placed
> into a unique routing domain.
>
> This results in the following tunX interface and rtable updates;
> fw1# ifconfig tun1
> tun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 1 mtu 1500
>         index 6 priority 0 llprio 3
>         groups: tun
>         status: active
>         inet 10.8.8.128 --> 10.8.8.1 netmask 0xffffff00
> fw1# ifconfig tun2
> tun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 2 mtu 1500
>         index 7 priority 0 llprio 3
>         groups: tun
>         status: active
>         inet 10.8.8.129 --> 10.8.8.1 netmask 0xffffff00
> fw1# route -T 1 show
> Routing tables
> Internet:
> Destination        Gateway            Flags   Refs      Use   Mtu  Prio  Iface
> 10.8.8.1           10.8.8.128         UH         0        0     -     8  tun1
> 10.8.8.128         10.8.8.128         UHl        0        0     -     1  tun1
> localhost          localhost          UHl        0        0  32768     1  lo1
> fw1# route -T 2 show
> Routing tables
> Internet:
> Destination        Gateway            Flags   Refs      Use   Mtu  Prio  Iface
> 10.8.8.1           10.8.8.129         UH         0        0     -     8  tun2
> 10.8.8.129         10.8.8.129         UHl        0        0     -     1  tun2
> localhost          localhost          UHl        0        0  32768     1  lo2
>
> # Test each tunnel - ping the remote connected VPN peer within each rdomain
> ping -V 1 10.8.8.1
> ping -V 2 10.8.8.1
> Shows both VPN tunnels are working independently with the overlapping
> addressing :)
>
> # To test each tunnel beyond the peer IP, add default routes to the rdomains;
> route -T 1 -n add default 10.8.8.1
> route -T 2 -n add default 10.8.8.1
>
> # Test each tunnel - ping beyond the connected peer
> ping -V 1 8.8.8.8
> ping -V 2 8.8.8.8
> Shows both VPN tunnels are definitely working independently with the
> overlapping addressing :)
>
> # Reverse routing - I have read in various places that PF's 'route-to' can
> be used for jumping rdomains in the forward path of a session, but the
> reply packets need a matching route in the remote rdomain for the reply
> destination (the matching route ensures the reply packet is passed through
> the routing table and gets into the PF processing, where PF can manage the
> return back to the default rdomain etc.)
Re: PF Outbound traffic Load Balancing over multiple tun/openvpn interfaces/tunnels
Hi,

So using the information Stuart and Andreas provided, I have been testing this (load balancing across multiple VPN servers to improve bandwidth). And I have multiple VPNs working properly within their own rdomains.

* However 'route-to' is not load balancing with rdomains :(

I have not been able to use the simpler solution you highlighted, Stuart (basic multipath routing), as the tunnel subnets overlap. So I think this is a potential bug, but I need your wisdom to verify my working first :)

Re: Load balancing SSL VPNs using OpenBSD 6.4, with VPN tunX interfaces in unique rdomains (overlapping tunnel subnets)

Configure sysctls
# Ensure '/etc/sysctl.conf' contains;
net.inet.ip.forwarding=1  # Permit forwarding (routing) of packets
net.inet.ip.multipath=1   # 1=Enable IP multipath routing

# Activate sysctls now, without a reboot
sysctl net.inet.ip.forwarding=1
sysctl net.inet.ip.multipath=1

Pre-create tunX interfaces (in their respective rdomains)
# Ensure '/etc/hostname.tun1' contains;
up
rdomain 1

# Ensure '/etc/hostname.tun2' contains;
up
rdomain 2

# Bring up the new tunX interfaces
sh /etc/netstart

fw1# ifconfig tun1
tun1: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 1 mtu 1500
        index 8 priority 0 llprio 3
        groups: tun
        status: down
fw1# ifconfig tun2
tun2: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 2 mtu 1500
        index 9 priority 0 llprio 3
        groups: tun
        status: down

# Start all SSL VPN tunnels (in unique VRF/rdomains)
/usr/local/sbin/openvpn --config ./ch70.nordvpn.com.udp.ovpn --writepid /var/run/openvpn.tun1.pid --dev tun1 &
/usr/local/sbin/openvpn --config ./ch71.nordvpn.com.udp.ovpn --writepid /var/run/openvpn.tun2.pid --dev tun2 &
('auth-user-pass' updated in config files)

Each openvpn tunnel should start using 'rtable 0' for the VPN's outer connection itself, but with each virtual tunnel tunX interface being placed into a unique routing domain.
This results in the following tunX interface and rtable updates;

fw1# ifconfig tun1
tun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 1 mtu 1500
        index 6 priority 0 llprio 3
        groups: tun
        status: active
        inet 10.8.8.128 --> 10.8.8.1 netmask 0xffffff00
fw1# ifconfig tun2
tun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 2 mtu 1500
        index 7 priority 0 llprio 3
        groups: tun
        status: active
        inet 10.8.8.129 --> 10.8.8.1 netmask 0xffffff00
fw1# route -T 1 show
Routing tables
Internet:
Destination        Gateway            Flags   Refs      Use   Mtu  Prio  Iface
10.8.8.1           10.8.8.128         UH         0        0     -     8  tun1
10.8.8.128         10.8.8.128         UHl        0        0     -     1  tun1
localhost          localhost          UHl        0        0  32768     1  lo1
fw1# route -T 2 show
Routing tables
Internet:
Destination        Gateway            Flags   Refs      Use   Mtu  Prio  Iface
10.8.8.1           10.8.8.129         UH         0        0     -     8  tun2
10.8.8.129         10.8.8.129         UHl        0        0     -     1  tun2
localhost          localhost          UHl        0        0  32768     1  lo2

# Test each tunnel - ping the remote connected VPN peer within each rdomain
ping -V 1 10.8.8.1
ping -V 2 10.8.8.1
Shows both VPN tunnels are working independently with the overlapping addressing :)

# To test each tunnel beyond the peer IP, add default routes to the rdomains;
route -T 1 -n add default 10.8.8.1
route -T 2 -n add default 10.8.8.1

# Test each tunnel - ping beyond the connected peer
ping -V 1 8.8.8.8
ping -V 2 8.8.8.8
Shows both VPN tunnels are definitely working independently with the overlapping addressing :)

# Reverse routing - I have read in various places that PF's 'route-to' can be used for jumping rdomains in the forward path of a session, but the reply packets need a matching route in the remote rdomain for the reply destination (the matching route ensures the reply packet is passed through the routing table and gets into the PF processing, where PF can manage the return back to the default rdomain etc.). But as I am using outbound NATing on the tunX interfaces, there is always a matching route for the reply traffic, and so a route for the internal subnet is not needed within rdomains 1 and 2.
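The per-rdomain checks above can be wrapped in a small loop for re-testing after config changes (a sketch; it assumes the two tunnels in rdomains 1 and 2 as configured above, and must run as root on the router):

```shell
# Verify each tunnel independently, despite the overlapping 10.8.8.0/24 space
for T in 1 2; do
    echo "== rdomain $T =="
    route -T $T -n show -inet     # per-rdomain routing table
    ping -V $T -c 3 10.8.8.1      # the connected VPN peer
    ping -V $T -c 3 8.8.8.8       # beyond the peer, via that rdomain's default route
done
```

If the peer ping succeeds but the 8.8.8.8 ping fails, the rdomain is missing its default route; if both fail, the tunnel itself is down.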
# Finally, ensure '/etc/pf.conf' contains something like;
if_ext = "em0"
if_int = "em1"

# CDR = 80 Down / 20 Up
queue out_ext on $if_ext flows 1024 bandwidth 18M max 19M qlimit 1024 default
queue out_tun1 on tun1 flows 1024 bandwidth 17M max 18M qlimit 1024 default
queue out_tun2 on tun2 flows 1024 bandwidth 17M max 18M qlimit 1024 default
queue out_int on $if_int flows 1024 bandwidth 74M max 78M qlimit 1024 default

# MTU = 1500
match proto tcp all scrub (no-df max-mss 1460) set prio (2,5)
match proto udp all scrub (no-df max-mss 1472) set prio (2,5)
match proto icmp all scrub (no-df max-mss 1472) set prio 7

# NAT all outbound traffic
match out on $if_ext from any to any nat-to ($if_ext)
match out on tun1 from any to any nat-to (tun1) rtable 1
match out on tun
Re: PF Outbound traffic Load Balancing over multiple tun/openvpn interfaces/tunnels
On 2018-09-11, Andrew Lemin wrote:
> Hi list,
>
> I use an OpenVPN-based internet access service (like NordVPN, AirVPN, etc.).
>
> The issue with these public VPN services is that the VPN servers are always
> congested. The most I'll get is maybe 10 Mbit/s through one server.
>
> The local connection is a few hundred Mbit/s.
>
> So I had the idea of running multiple OpenVPN tunnels to different servers,
> and load balancing outbound traffic across the tunnels.
>
> Sounds simple enough...
>
> However, every VPN tunnel uses the same subnet and next-hop gateway. This
> of course won't work with normal routing.

rtables/rdomains with OpenVPN might be a bit complex; I think it may need persist-tun, and you may have to create the tun device in advance with the wanted rdomain (you need the VPN to be in one rdomain, but the UDP/TCP connection in another).

Assuming you are using tun (and so point-to-point connections) rather than tap, try one or other of these:

- PF route-to and 'probability'. IIRC it works to just use a junk address as long as the interface is correct ("route-to 10.10.10.10@tun0", "route-to 10.10.10.10@tun1").

- ECMP (net.inet.ip.multipath=1) and multiple route entries with the same priority. Use -ifp to set the interface ("route add default -priority 8 -ifp $interface $dest"). The "destination address" isn't really very relevant for routing on point-to-point interfaces (though current versions of OpenBSD do require that it matches the destination address on the interface, otherwise they won't allow the route to be added).
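Stuart's two options, spelled out as they might look in the OpenBSD 6.4-era syntax (a sketch; $if_int is an assumption, 10.10.10.10 is deliberately a junk next-hop, and 10.8.8.1 is the tunnel destination shown elsewhere in the thread):

```shell
# Option 1: pf.conf - route-to with 'probability'; on p2p interfaces the
# next-hop address is ignored as long as the interface part is correct
pass in on $if_int to any route-to 10.10.10.10@tun0 probability 50%
pass in on $if_int to any route-to 10.10.10.10@tun1

# Option 2: ECMP - equal-priority default routes pinned to each interface
sysctl net.inet.ip.multipath=1
route add default -priority 8 -ifp tun0 10.8.8.1
route add default -priority 8 -ifp tun1 10.8.8.1
```

Note the probability rules must not be 'quick', so that a packet missing the 50% rule falls through to the tun1 rule below it.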
Re: PF Outbound traffic Load Balancing over multiple tun/openvpn interfaces/tunnels
Hi Andreas,

Thanks for your reply. Sorry, I should have been clearer. I know that rdomains are the correct method with overlapping addressing. The challenge is that I cannot figure out how to get OpenVPN to initialise its resulting tunX interface directly into the correct rdomain.

You normally move an interface into an rdomain with:

ifconfig em1 rdomain 1

However, is there a way I can get OpenVPN to do this at the time it sets up the interface? The problem is that you cannot just create the tunnel and then move it over to an rdomain afterwards if there is already another conflicting tunnel in the default rdomain (the tunnel won't come up because of the address conflict).

I realise I could redesign things so that there is never a tunX in the default rdomain, so that tunnels can be set up there and then moved over. But this feels rather flawed/restrictive and not the proper way of doing things. I would like to script the management of these tunnels, so if there were a way of setting up each tunnel directly in its own rdomain, that would be a lot more robust :)

Thanks for your time. Andy.

Sent from a teeny tiny keyboard, so please excuse typos.

> On 11 Sep 2018, at 21:59, Andreas Krüger wrote:
>
> Maybe rdomains?
>
>> On 11 Sep 2018 at 15:59, Andrew Lemin wrote:
>>
>> Hi list,
>>
>> I use an OpenVPN-based internet access service (like NordVPN, AirVPN, etc.).
>>
>> The issue with these public VPN services is that the VPN servers are
>> always congested. The most I'll get is maybe 10 Mbit/s through one server.
>>
>> The local connection is a few hundred Mbit/s.
>>
>> So I had the idea of running multiple OpenVPN tunnels to different
>> servers, and load balancing outbound traffic across the tunnels.
>>
>> Sounds simple enough...
>>
>> However, every VPN tunnel uses the same subnet and next-hop gateway. This
>> of course won't work with normal routing.
>>
>> So my question:
>> How can I use rdomains or rtables with OpenVPN clients, so that each VPN
>> is started in its own logical VRF?
>>
>> And is it then a case of just using PF to push the outbound packets into
>> the various rdomains/rtables randomly (of course maintaining state)? The
>> LAN interface would be in the default rdomain/rtable.
>>
>> My confusion is that an interface needs to be bound to the logical VRF,
>> but the tunX interfaces are created dynamically by OpenVPN.
>>
>> So I am not sure how to configure this within hostname.tunX etc., or
>> whether I'm even approaching this correctly?
>>
>> Thanks, Andy.
Re: PF Outbound traffic Load Balancing over multiple tun/openvpn interfaces/tunnels
Maybe rdomains?

> On 11 Sep 2018 at 15:59, Andrew Lemin wrote:
>
> Hi list,
>
> I use an OpenVPN-based internet access service (like NordVPN, AirVPN, etc.).
>
> The issue with these public VPN services is that the VPN servers are always
> congested. The most I'll get is maybe 10 Mbit/s through one server.
>
> The local connection is a few hundred Mbit/s.
>
> So I had the idea of running multiple OpenVPN tunnels to different servers,
> and load balancing outbound traffic across the tunnels.
>
> Sounds simple enough...
>
> However, every VPN tunnel uses the same subnet and next-hop gateway. This
> of course won't work with normal routing.
>
> So my question:
> How can I use rdomains or rtables with OpenVPN clients, so that each VPN is
> started in its own logical VRF?
>
> And is it then a case of just using PF to push the outbound packets into
> the various rdomains/rtables randomly (of course maintaining state)? The
> LAN interface would be in the default rdomain/rtable.
>
> My confusion is that an interface needs to be bound to the logical VRF,
> but the tunX interfaces are created dynamically by OpenVPN.
>
> So I am not sure how to configure this within hostname.tunX etc., or
> whether I'm even approaching this correctly?
>
> Thanks, Andy.