Hi,

So for completeness, I did some more testing with your suggestions.

First I tried using different next hops in each of the interface-nexthop pairs 
in the route-to pool (as the next hop doesn't really matter with p2p 
interfaces). And it did start to work! :)

But after some more testing it still wasn't behaving very well, and was nearly 
always using the first tunX defined (I didn't do much more testing). And when I 
ran a bunch of downloads as a load test, they all ended up on tun1.


I had never heard of 'pair' interfaces before, but as soon as I realised what 
they were, I realised what a great idea they are! :)

So I created multiple pairs of 'pair' interfaces: one pair member in the 
default rdomain along with the internal interface, and the other pair member in 
the relevant VPN rdomain along with the tun interface, i.e. creating an rdomain 
tunnel between rdomain 0 and each VPN rdomain.

This allowed me to configure unique, non-overlapping subnets on each of the 
'pair'-based p2p tunnels.
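
For illustration, each rdomain tunnel ended up looking something like this 
(interface names and the 10.255.x.0/24 addressing are examples, following 
Phil's sketch quoted below; see his fuller example for the optional static 
routes):

# /etc/hostname.pair1  (VPN side, in rdomain 1 alongside tun1)
rdomain 1
inet 10.255.1.1 255.255.255.0

# /etc/hostname.pair11  (default rdomain side, alongside the internal interface)
inet 10.255.1.2 255.255.255.0
patch pair1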

And that of course simplified the 'route-to' statement to just a list of 
different next-hops: one for each of the 'pair' interfaces in the default 
rdomain, one per VPN (without needing to name the interfaces, as they are all 
in rdomain 0).
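
With the example addressing above, the balancing rule reduces to something 
like this (illustrative only; one next-hop per VPN rdomain, here assuming 
four):

pass in quick on { $if_int } to any \
    route-to { 10.255.1.1, 10.255.2.1, 10.255.3.1, 10.255.4.1 } \
    round-robin set prio (3,6)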

So without requiring PF to do any rdomain jumping/tunnelling (leaving the 
rdomain tunnelling to the 'pair' interfaces), VPN load balancing is now 
working very well.

I can now utilise all the CPU cores on my router where I couldn't before :) I 
have four real cores, so I'm running four OpenVPN tunnels :D


This appears to confirm that 'route-to' only works well for rdomain tunnelling 
when routing towards a single rdomain per PF configuration line.

I guess Henning would need to pass his wisdom on this and decide if it’s a bug 
or just not supported yet.

Thanks guys :)


> On 28 Nov 2018, at 06:04, Tom Smyth <tom.sm...@wirelessconnect.eu> wrote:
> 
> Sorry, the "here" I was referring to earlier was "here" as shown below:
> https://lab.rickauer.com/post/2017/07/16/OpenBSD-rtables-and-rdomains
> 
> 
>> Howdy...
>> starting OpenVPN in different rdomains works pretty well for us
>> 
>> a crude way of doing that is to add the following line to the
>> bottom of your tun interface's hostname file
>> (starting openvpn in rdomain 2):
>> 
>> !/sbin/route -T 2 exec /usr/local/sbin/openvpn --config
>> /etc/openvpn2.conf & /usr/bin/false
>> 
>> we were using L2 tunnels (not L3), but this worked pretty well for us
>> 
>> I think you can also use rcctl to set the rtable, as described very well
>> here:
>> 
>> Using symbolic links in /etc/rc.d/ you can create multiple openvpn
>> services, each with their own settings, as shown in the sketch below.
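>> 
>> A minimal sketch of that approach (the service name, rtable id and config
>> path here are just examples):
>> 
>> ln -s /etc/rc.d/openvpn /etc/rc.d/openvpn_tun1
>> rcctl set openvpn_tun1 rtable 1
>> rcctl set openvpn_tun1 flags "--config /etc/openvpn/tun1.conf --dev tun1"
>> rcctl enable openvpn_tun1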
>> I hope this helps
>> 
>> 
>> 
>>> On Wed, 28 Nov 2018 at 02:59, Philip Higgins <p...@pjh.id.au> wrote:
>>> 
>>> At a guess, route-to is confused by the same IP, but I haven't looked at 
>>> the internals.
>>> 
>>> Maybe try adding pair interfaces (with different addresses) to each rdomain,
>>> and you can use route-to to select between them.
>>> You already have a default route set in each rdomain, so traffic will find 
>>> its way from there.
>>> 
>>> eg.
>>> 
>>> # /etc/hostname.pair1
>>> group pinternal
>>> rdomain 1
>>> inet 10.255.1.1 255.255.255.0
>>> !/sbin/route -T1 add <your internal subnet(s)> 10.255.1.2
>>> 
>>> # /etc/hostname.pair11
>>> group pinternal
>>> inet 10.255.1.2 255.255.255.0
>>> patch pair1
>>> 
>>> # /etc/hostname.pair2
>>> group pinternal
>>> rdomain 2
>>> inet 10.255.2.1 255.255.255.0
>>> !/sbin/route -T2 add <your internal subnet(s)> 10.255.2.2
>>> 
>>> # /etc/hostname.pair12
>>> group pinternal
>>> inet 10.255.2.2 255.255.255.0
>>> patch pair2
>>> 
>>> # /etc/pf.conf
>>> ...
>>> pass on pinternal
>>> ...
>>> pass in quick on { $if_int } to any route-to { 10.255.1.1, 10.255.2.1 } \
>>>  round-robin set prio (3,6)
>>> 
>>> Have not tested exactly this, but similar to my current setup.
>>> Might not need the static routes, if the right pf magic is happening.
>>> 
>>> 
>>> -Phil
>>> 
>>>> On 28/11/18 8:18 am, Andrew Lemin wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> So using the information Stuart and Andreas provided, I have been testing
>>>> this (load balancing across multiple VPN servers to improve bandwidth),
>>>> and I have multiple VPNs working properly within their own rdomains.
>>>> 
>>>> * However 'route-to' is not load balancing with rdomains :(
>>>> 
>>>> I have not been able to use the simpler solution you highlighted, Stuart
>>>> (basic multipath routing), as the tunnel subnets overlap.
>>>> So I think this is a potential bug, but I need your wisdom to verify my
>>>> working first :)
>>>> 
>>>> Re: Load balancing SSL VPNs using OpenBSD 6.4, with VPN tunX interfaces in
>>>> unique rdomains (overlapping tunnel subnets)
>>>> 
>>>> Configure sysctls
>>>> # Ensure '/etc/sysctl.conf' contains;
>>>> net.inet.ip.forwarding=1        # Permit forwarding (routing) of packets
>>>> net.inet.ip.multipath=1         # 1=Enable IP multipath routing
>>>> 
>>>> # Activate the sysctls now, without a reboot
>>>> sysctl net.inet.ip.forwarding=1
>>>> sysctl net.inet.ip.multipath=1
>>>> 
>>>> Pre-create tunX interfaces (in their respective rdomains)
>>>> # Ensure '/etc/hostname.tun1' contains;
>>>> up
>>>> rdomain 1
>>>> 
>>>> # Ensure '/etc/hostname.tun2' contains;
>>>> up
>>>> rdomain 2
>>>> 
>>>> # Bring up the new tunX interfaces
>>>> sh /etc/netstart
>>>> 
>>>> fw1# ifconfig tun1
>>>> tun1: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 1 mtu 1500
>>>>         index 8 priority 0 llprio 3
>>>>         groups: tun
>>>>         status: down
>>>> fw1# ifconfig tun2
>>>> tun2: flags=8011<UP,POINTOPOINT,MULTICAST> rdomain 2 mtu 1500
>>>>         index 9 priority 0 llprio 3
>>>>         groups: tun
>>>>         status: down
>>>> 
>>>> # Start all SSL VPN tunnels (each in a unique VRF/rdomain)
>>>> /usr/local/sbin/openvpn --config ./ch70.nordvpn.com.udp.ovpn --writepid
>>>> /var/run/openvpn.tun1.pid --dev tun1 &
>>>> /usr/local/sbin/openvpn --config ./ch71.nordvpn.com.udp.ovpn --writepid
>>>> /var/run/openvpn.tun2.pid --dev tun2 &
>>>> ('auth-user-pass' updated in config files)
>>>> 
>>>> Each openvpn tunnel should start using 'rtable 0' for the VPN's outer
>>>> connection itself, but with each virtual tunnel TunX interface being placed
>>>> into a unique routing domain.
>>>> 
>>>> This results in the following tunX interface and rtable updates;
>>>> fw1# ifconfig tun1
>>>> tun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 1 mtu 1500
>>>>         index 6 priority 0 llprio 3
>>>>         groups: tun
>>>>         status: active
>>>>         inet 10.8.8.128 --> 10.8.8.1 netmask 0xffffff00
>>>> fw1# ifconfig tun2
>>>> tun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> rdomain 2 mtu 1500
>>>>         index 7 priority 0 llprio 3
>>>>         groups: tun
>>>>         status: active
>>>>         inet 10.8.8.129 --> 10.8.8.1 netmask 0xffffff00
>>>> fw1# route -T 1 show
>>>> Routing tables
>>>> Internet:
>>>> Destination        Gateway            Flags   Refs      Use   Mtu  Prio Iface
>>>> 10.8.8.1           10.8.8.128         UH         0        0     -     8 tun1
>>>> 10.8.8.128         10.8.8.128         UHl        0        0     -     1 tun1
>>>> localhost          localhost          UHl        0        0 32768     1 lo1
>>>> fw1# route -T 2 show
>>>> Routing tables
>>>> Internet:
>>>> Destination        Gateway            Flags   Refs      Use   Mtu  Prio Iface
>>>> 10.8.8.1           10.8.8.129         UH         0        0     -     8 tun2
>>>> 10.8.8.129         10.8.8.129         UHl        0        0     -     1 tun2
>>>> localhost          localhost          UHl        0        0 32768     1 lo2
>>>> 
>>>> # Test each tunnel - Ping the remote connected vpn peer within each rdomain
>>>> ping -V 1 10.8.8.1
>>>> ping -V 2 10.8.8.1
>>>> 
>>>> Shows both VPN tunnels are working independently with the overlapping
>>>> addressing :)
>>>> 
>>>> # To be able to test each tunnel beyond the peer IP, add default routes
>>>> to the rdomains:
>>>> route -T 1 -n add default 10.8.8.1
>>>> route -T 2 -n add default 10.8.8.1
>>>> 
>>>> # Test each tunnel - Ping beyond the connected peer
>>>> ping -V 1 8.8.8.8
>>>> ping -V 2 8.8.8.8
>>>> 
>>>> Shows both VPN tunnels are definitely working independently with the
>>>> overlapping addressing :)
>>>> 
>>>> # Reverse routing - I have read in various places that PF's 'route-to' can
>>>> be used for jumping rdomains in the forward path of a session, but the
>>>> reply packets need a matching route in the remote rdomain for the reply
>>>> destination. (The matching route ensures the reply packet is passed
>>>> through the routing table and gets into PF processing, where PF can
>>>> manage the return back to the default rdomain, etc.)
>>>> 
>>>> But as I am using outbound NAT on the tunX interfaces, there is always a
>>>> matching route for the reply traffic, and so a route for the internal
>>>> subnet is not needed within rdomains 1 and 2.
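>>>> 
>>>> (If outbound NAT were not in use, a matching route would have to be added
>>>> by hand. Purely illustrative, assuming an internal subnet of
>>>> 192.168.1.0/24 and using the tunnel peer as a dummy next-hop, since PF's
>>>> state should handle the actual return path:
>>>> 
>>>> route -T 1 add -net 192.168.1.0/24 10.8.8.1
>>>> route -T 2 add -net 192.168.1.0/24 10.8.8.1 )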
>>>> 
>>>> 
>>>> # Finally ensure '/etc/pf.conf' contains something like;
>>>> if_ext = "em0"
>>>> if_int = "em1"
>>>> 
>>>> #CDR = 80M down / 20M up
>>>> queue out_ext on $if_ext flows 1024 bandwidth 18M max 19M qlimit 1024 default
>>>> queue out_tun1 on tun1 flows 1024 bandwidth 17M max 18M qlimit 1024 default
>>>> queue out_tun2 on tun2 flows 1024 bandwidth 17M max 18M qlimit 1024 default
>>>> queue out_int on $if_int flows 1024 bandwidth 74M max 78M qlimit 1024 default
>>>> 
>>>> #MTU = 1500
>>>> match proto tcp all scrub (no-df max-mss 1460) set prio (2,5)
>>>> match proto udp all scrub (no-df max-mss 1472) set prio (2,5)
>>>> match proto icmp all scrub (no-df max-mss 1472) set prio 7
>>>> 
>>>> #NAT all outbound traffic
>>>> match out on $if_ext from any to any nat-to ($if_ext)
>>>> match out on tun1 from any to any nat-to (tun1) rtable 1
>>>> match out on tun2 from any to any nat-to (tun2) rtable 2
>>>> 
>>>> #Allow outbound traffic on egress for vpn tunnel setup etc
>>>> pass out quick on { $if_ext } from self to any set prio (3,6)
>>>> 
>>>> # Load balance outbound traffic from the internal network across tun1 and
>>>> # tun2 - THIS IS NOT WORKING - IT ONLY USES THE FIRST TUNNEL
>>>> pass in quick on { $if_int } to any \
>>>>     route-to { (tun1 10.8.8.1), (tun2 10.8.8.1) } round-robin set prio (3,6)
>>>> 
>>>> #Allow outbound traffic over vpn tunnels
>>>> pass out quick on tun1 to any set prio (3,6)
>>>> pass out quick on tun2 to any set prio (3,6)
>>>> 
>>>> 
>>>> # Verify which tunnels are being used
>>>> systat ifstat
>>>> 
>>>> *This command shows that all the traffic is flowing over the first tun1
>>>> interface only; the second tun2 is never used.*
>>>> 
>>>> 
>>>> # NB: I have tried with and without 'set state-policy if-bound'.
>>>> 
>>>> I have tried all the load-balancing policies: round-robin, random,
>>>> least-states and source-hash.
>>>> 
>>>> If I change the 'route-to' pool to "{ (tun2 10.8.8.1), (tun1 10.8.8.1) }",
>>>> then only tun2 is used instead. :(
>>>> 
>>>> So 'route-to' seems to only use the first tunnel in the pool.
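>>>> 
>>>> To confirm which rule and states are involved, the standard pfctl rule
>>>> counters and state listing can help (ordinary pfctl usage; the grep
>>>> patterns are just illustrative):
>>>> 
>>>> pfctl -vsr | grep -A 2 route-to   # per-rule evaluation/packet counters
>>>> pfctl -ss | grep tun              # which tun interface each state is bound to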
>>>> 
>>>> Any advice on what is going wrong here? I am wondering if I am falling
>>>> victim to some processing-order issue within PF, or if this is a real bug?
>>>> 
>>>> Thanks, Andy.
> 
