On Wed, Apr 14, 2021 at 08:24:11PM -0700, David Ahern wrote:
> On 4/14/21 12:33 AM, Pavel Balaev wrote:
> >>
> >> This should work the same for IPv6.
> > I wanted to add IPv6 support after the IPv4 part is approved;
> > anyway, no problem, I will add IPv6 in the next version.
> >> And please add test cases un
On 4/14/21 12:33 AM, Pavel Balaev wrote:
>>
>> This should work the same for IPv6.
> I wanted to add IPv6 support after the IPv4 part is approved;
> anyway, no problem, I will add IPv6 in the next version.
>> And please add test cases under tools/testing/selftests/net.
> This feature cannot be tested within
On Tue, Apr 13, 2021 at 04:15:21PM -0700, David Miller wrote:
> From: Balaev Pavel
> Date: Tue, 13 Apr 2021 14:55:04 +0300
>
> > @@ -222,6 +230,9 @@ struct netns_ipv4 {
> > #ifdef CONFIG_IP_ROUTE_MULTIPATH
> > u8 sysctl_fib_multipath_use_neigh;
> > u8 sysctl_fib_multipath_hash_policy;
>
On Tue, Apr 13, 2021 at 08:28:52PM -0700, David Ahern wrote:
> On 4/13/21 4:55 AM, Balaev Pavel wrote:
> > Add the ability for a user to assign a seed value to multipath route hashes.
> > Currently the kernel uses a random seed value to prevent hash-flooding DoS
> > attacks; however, this disables some use cases, e.g.:
> >
On 4/13/21 4:55 AM, Balaev Pavel wrote:
> Add the ability for a user to assign a seed value to multipath route hashes.
> Currently the kernel uses a random seed value to prevent hash-flooding DoS
> attacks; however, this disables some use cases, e.g.:
>
> [truncated ASCII topology diagram: host reaching firewall FW0 via eth0]
From: Balaev Pavel
Date: Tue, 13 Apr 2021 14:55:04 +0300
> @@ -222,6 +230,9 @@ struct netns_ipv4 {
> #ifdef CONFIG_IP_ROUTE_MULTIPATH
> u8 sysctl_fib_multipath_use_neigh;
> u8 sysctl_fib_multipath_hash_policy;
> + int sysctl_fib_multipath_hash_seed;
> + struct multipath_seed_
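A minimal standalone sketch of what such a field could enable, assuming (the
names and the zero-means-random convention are illustrative, not the patch's
actual code) that a zero seed keeps the kernel's random per-boot value while
any nonzero seed is used verbatim, making the hash reproducible across
reboots and across machines:

#include <stdint.h>
#include <stdio.h>

/* Stand-in for the kernel's random per-boot seed. */
static uint32_t random_boot_seed = 0xdeadbeefu;

/* Toy flow hash; the kernel's real hash (jhash over the flow key) is more
 * involved, the point is only that a user-set seed replaces the random one. */
static uint32_t multipath_hash(uint32_t saddr, uint32_t daddr,
                               uint32_t user_seed)
{
        uint32_t seed = user_seed ? user_seed : random_boot_seed;
        uint32_t h = (saddr ^ (daddr * 0x9e3779b9u)) ^ seed;

        h ^= h >> 16;
        return h * 0x85ebca6bu;
}

int main(void)
{
        /* With the same nonzero seed, two boxes compute the same hash. */
        printf("seeded: %08x, random: %08x\n",
               multipath_hash(0xc0a80001u, 0xcb007101u, 42),
               multipath_hash(0xc0a80001u, 0xcb007101u, 0));
        return 0;
}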
Add the ability for a user to assign a seed value to multipath route hashes.
Currently the kernel uses a random seed value to prevent hash-flooding DoS
attacks; however, this disables some use cases, e.g.:
[truncated ASCII topology diagram: host connected via eth0 to firewall FW0, which connects onward via eth0]
Add the ability for a user to set a seed value for multipath routing hashes.
Currently the kernel uses a random seed value: this is done to prevent
hash-flooding DoS attacks, but it breaks some scenarios, e.g.:
[truncated ASCII topology diagram: host connected via eth0 to firewall FW0]
On Fri, 9 Apr 2021 16:52:05 +0300 Balaev Pavel wrote:
> Hello, this patch adds ability for user to set seed value for
nit: please drop the 'Hello' and use imperative form to describe
the commit.
> multipath routing hashes. Now kernel uses random seed value:
> this is d
Hello, this patch adds ability for user to set seed value for
multipath routing hashes. Now kernel uses random seed value:
this is done to prevent hash-flooding DoS attacks,
but it breaks some scenarios, e.g.:
[truncated ASCII topology diagram: host connected via eth0 to firewall FW0]
On 9/1/20 4:40 AM, mastertheknife wrote:
>
> P.S: while reading the relevant code in the kernel, I think I spotted
> a mistake in net/ipv4/route.c, in the function "update_or_create_fnhe".
> It looks like it loops over all the exceptions for the nexthop entry,
> but always overwrites the first (an
Hello David,
A quick correction: the issue is not solved; it was a mistake in my
testing. The issue is still there.
Kfir
On Tue, Sep 1, 2020 at 1:40 PM mastertheknife wrote:
>
> Hello David.
>
> I was able to solve it while troubleshooting some fragmentation issue.
> The VTI interfaces had MTU
Hello David.
I was able to solve it while troubleshooting a fragmentation issue.
The VTI interfaces had an MTU of 1480 by default. I reduced them to
the real PMTU (1366), and now it's all working just fine.
I am not sure how it's related and why, but it seems like it solved the issue.
P.S: while re
Hello David,
It's on a production system; vmbr2 is a bridge with an eth.X VLAN
interface inside for connectivity on that 252.0/24 network. vmbr2
has address 192.168.252.5 in that case.
192.168.252.250 and 192.168.252.252 are CentOS8 LXCs on another host,
with libreswan inside for any/any IPSECs wi
On 8/12/20 6:37 AM, mastertheknife wrote:
> Hello David,
>
> I tried, and it seems I can reproduce it:
>
> # Create test NS
> root@host:~# ip netns add testns
> # Create veth pair, veth0 in host, veth1 in NS
> root@host:~# ip link add veth0 type veth peer name veth1
> root@host:~# ip link set veth
Hello David,
I tried, and it seems I can reproduce it:
# Create test NS
root@host:~# ip netns add testns
# Create veth pair, veth0 in host, veth1 in NS
root@host:~# ip link add veth0 type veth peer name veth1
root@host:~# ip link set veth1 netns testns
# Configure veth1 (NS)
root@host:~# ip netns
On 8/3/20 12:39 PM, mastertheknife wrote:
> In summary: it seems that it doesn't matter who the nexthop is. If the
> ICMP response isn't from the nexthop, it'll be rejected.
> As for why I couldn't reproduce this outside LXC, I don't know yet, but
> I will keep trying to figure this out.
do you have
Hi David,
I found something that can shed some light on the issue.
The issue only happens if the ICMP response doesn't come from the first nexthop.
In my case, both nexthops are Linux routers, and they are the ones
generating the ICMP (because of IPSEC next). This is what I meant
earlier,
that the
On 8/3/20 8:24 AM, mastertheknife wrote:
> Hi David,
>
> In this case, both paths are in the same layer2 network, there is no
> symmetric multi-path routing.
> If original message takes path 1, ICMP response will come from path 1
> If original message takes path 2, ICMP response will come from pat
Kfir
On Mon, Aug 3, 2020 at 4:32 PM David Ahern wrote:
>
> On 8/3/20 5:14 AM, mastertheknife wrote:
> > What seems to be happening, is that when multipath routing is used
> > inside LXC (or any network namespace), the kernel doesn't generate a
> > routing exception
On 8/3/20 5:14 AM, mastertheknife wrote:
> What seems to be happening, is that when multipath routing is used
> inside LXC (or any network namespace), the kernel doesn't generate a
> routing exception to force the lower MTU.
> I believe this is a bug inside the kernel.
>
Kno
Hi,
I have observed that PMTUD (Path MTU discovery) is broken using
multipath routing inside a network namespace. This breaks TCP, because
it keeps trying to send oversized packets.
Observed on kernel 5.4.44; other kernels weren't tested. However, I
went through net/ipv4/route.c and ha
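As an aside for reports like this one: the path MTU the kernel currently
believes in can be read from user space with the IP_MTU socket option on a
connected socket. A small sketch (the 192.0.2.1 destination is a placeholder):

#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/in.h>   /* IP_MTU, IP_MTU_DISCOVER, IP_PMTUDISC_DO */
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        int pmtud = IP_PMTUDISC_DO;     /* set DF so PMTUD is exercised */
        struct sockaddr_in dst = { .sin_family = AF_INET };
        int mtu;
        socklen_t len = sizeof(mtu);

        dst.sin_port = htons(9);        /* discard port; any port works */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);
        setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &pmtud, sizeof(pmtud));
        connect(fd, (struct sockaddr *)&dst, sizeof(dst));
        if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len) == 0)
                printf("path MTU toward 192.0.2.1: %d\n", mtu);
        close(fd);
        return 0;
}

If the bug described above is present, this value would be expected to stay
at the link MTU instead of dropping to the lower path MTU.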
Create a topology with two hosts, each directly connected to a different
router. Both routers are connected using two links, enabling multipath
routing.
Test IPv4 and IPv6 ping using default MTU and large MTU.
Signed-off-by: Ido Schimmel
---
.../selftests/net/forwarding/router_multipath.sh
Create a topology with two hosts, each directly connected to a different
router. Both routers are connected using two links, enabling multipath
routing.
Test IPv4 and IPv6 ping using default MTU and large MTU.
Signed-off-by: Ido Schimmel
---
tools/testing/selftests/forwarding/Makefile
On 11/30/2016 05:04 AM, Tom Herbert wrote:
> This is a lot of code to make ECMP work better. Can you be more
> specific as to what the "issues" are? Assuming this is just the
> transient packet reorder that happens in one link flap I am wondering
> if this complexity is justified.
Inconsistent has
On 12/01/2016 06:55 PM, Roopa Prabhu wrote:
> I think it's best for it to be a global setting, and that's why a sysctl
> seems like the best way (unless there are other ways to set this
> globally via rtnetlink). If it helps, most hw switch vendors
> supporting this feature also provide a globally tuna
From: Hannes Frederic Sowa
Date: Wed, 30 Nov 2016 14:12:48 +0100
> David, one question: do you remember if you measured with linked lists
> at that time or also with arrays? I would actually expect small arrays
> that entirely fit into cachelines to be faster than our current
> approach,
On Tue, Nov 29, 2016 at 11:56 PM, David Lebrun
wrote:
> On 11/30/2016 04:52 AM, Hannes Frederic Sowa wrote:
>
>> Also please convert the sysctl to a netlink attribute if you pursue this
>> because if I change the sysctl while my quagga is hammering the routing
>> table I would like to know which
On Wed, Nov 30, 2016, at 20:49, David Miller wrote:
> From: David Lebrun
> Date: Tue, 29 Nov 2016 18:15:18 +0100
>
> > When multiple nexthops are available for a given route, the routing engine
> > chooses a nexthop by computing the flow hash through get_hash_from_flowi6
> > and by taking that va
From: David Lebrun
Date: Tue, 29 Nov 2016 18:15:18 +0100
> When multiple nexthops are available for a given route, the routing engine
> chooses a nexthop by computing the flow hash through get_hash_from_flowi6
> and by taking that value modulo the number of nexthops. The resulting value
> indexes
On Mon, Nov 28, 2016, at 21:32, David Miller wrote:
> From: David Lebrun
> Date: Mon, 28 Nov 2016 21:16:19 +0100
>
> > The advantage of my solution over RFC2992 is lowest possible disruption
> > and equal rebalancing of affected flows. The disadvantage is the lookup
> > complexity of O(log n) vs
On 11/30/2016 04:52 AM, Hannes Frederic Sowa wrote:
> In the worst case this causes 2GB (order 19) allocations (x == 31) to
> happen in GFP_ATOMIC (due to write lock) context and could cause update
> failures to the routing table due to fragmentation. Are you sure the
> upper limit of 31 is reasona
On Tue, Nov 29, 2016 at 9:15 AM, David Lebrun wrote:
> When multiple nexthops are available for a given route, the routing engine
> chooses a nexthop by computing the flow hash through get_hash_from_flowi6
> and by taking that value modulo the number of nexthops. The resulting value
> indexes the
Hi,
On Tue, Nov 29, 2016, at 18:15, David Lebrun wrote:
> When multiple nexthops are available for a given route, the routing engine
> chooses a nexthop by computing the flow hash through get_hash_from_flowi6
> and by taking that value modulo the number of nexthops. The resulting value
> index
When multiple nexthops are available for a given route, the routing engine
chooses a nexthop by computing the flow hash through get_hash_from_flowi6
and by taking that value modulo the number of nexthops. The resulting value
indexes the nexthop to select. This method causes issues when a new nextho
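To make the renumbering problem concrete, a standalone sketch (with a toy
hash standing in for get_hash_from_flowi6) that counts how many flows change
nexthop when the nexthop count grows from 3 to 4:

#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for a flow hash; any well-mixed 32-bit value works here. */
static uint32_t flow_hash(uint32_t i) { return i * 2654435761u; }

/* Current scheme: hash modulo the number of nexthops. */
static unsigned pick_mod(uint32_t h, unsigned n) { return h % n; }

/* Hash-threshold (RFC 2992 style): split the 32-bit hash space into n equal
 * regions, so only flows near region boundaries move when n changes. */
static unsigned pick_threshold(uint32_t h, unsigned n)
{
        return (unsigned)(((uint64_t)h * n) >> 32);
}

int main(void)
{
        unsigned moved_mod = 0, moved_thr = 0;

        for (uint32_t i = 0; i < 100000; i++) {
                uint32_t h = flow_hash(i);

                moved_mod += pick_mod(h, 3) != pick_mod(h, 4);
                moved_thr += pick_threshold(h, 3) != pick_threshold(h, 4);
        }
        /* Roughly 3/4 of flows move under modulo; fewer under threshold. */
        printf("3 -> 4 nexthops: modulo moved %u, threshold moved %u of 100000\n",
               moved_mod, moved_thr);
        return 0;
}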
On 11/28/2016 09:32 PM, David Miller wrote:
> When I was working on the routing cache removal in ipv4 I compared
> using a stupid O(1) hash lookup of the FIB entries vs. the O(log n)
> fib_trie stuff actually in use.
>
> It did make a difference.
>
> This is a lookup that can be invoked 20 millio
From: David Lebrun
Date: Mon, 28 Nov 2016 21:16:19 +0100
> The advantage of my solution over RFC2992 is lowest possible disruption
> and equal rebalancing of affected flows. The disadvantage is the lookup
> complexity of O(log n) vs O(1). Although from a theoretical viewpoint
> O(1) is obviously
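For reference, a sketch of the kind of O(log n) lookup under discussion: each
nexthop owns a contiguous, weight-sized slice of the hash space described by a
sorted array of upper bounds, and a flow hash is mapped to its slice by binary
search (illustrative layout, not the patch's actual data structures):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Return the index of the first region whose upper bound exceeds h. */
static size_t region_lookup(const uint32_t *upper, size_t n, uint32_t h)
{
        size_t lo = 0, hi = n - 1;

        while (lo < hi) {
                size_t mid = lo + (hi - lo) / 2;

                if (h < upper[mid])
                        hi = mid;
                else
                        lo = mid + 1;
        }
        return lo;
}

int main(void)
{
        /* Three nexthops weighted 1:2:1 across the 32-bit hash space. */
        uint32_t upper[] = { 0x40000000u, 0xc0000000u, 0xffffffffu };

        printf("hash 0x10000000 -> nexthop %zu\n", region_lookup(upper, 3, 0x10000000u));
        printf("hash 0x80000000 -> nexthop %zu\n", region_lookup(upper, 3, 0x80000000u));
        return 0;
}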
On 11/28/2016 05:22 PM, David Miller wrote:
> Thanks for trying to solve this problem.
>
> But we really don't want this to be Kconfig gated. If we decide to
> support this it should be a run-time selectable option. Every
> distribution on the planet is going to turn your Kconfig option on, so
>
From: David Lebrun
Date: Thu, 24 Nov 2016 20:59:16 +0100
> When multiple nexthops are available for a given route, the routing engine
> chooses a nexthop by computing the flow hash through get_hash_from_flowi6
> and by taking that value modulo the number of nexthops. The resulting value
> indexes
From: Peter Nørlund
Date: Wed, 30 Sep 2015 10:12:20 +0200
> When the routing cache was removed in 3.6, the IPv4 multipath algorithm changed
> from more or less being destination-based into being quasi-random per-packet
> scheduling. This increases the risk of out-of-order packets and makes it
When the routing cache was removed in 3.6, the IPv4 multipath algorithm changed
from more or less being destination-based into being quasi-random per-packet
scheduling. This increases the risk of out-of-order packets and makes it
impossible to use multipath together with anycast services.
This pat
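To illustrate the difference the cover letter describes, a toy comparison
(not the patch's code) of per-packet scheduling against destination-based
hashing; with the hash, every packet of a given src/dst pair keeps the same
nexthop, which is what anycast and in-order delivery need:

#include <stdint.h>
#include <stdio.h>

static unsigned rr_state;

/* Quasi-random per-packet scheduling: the nexthop changes packet to packet. */
static unsigned pick_per_packet(unsigned n) { return rr_state++ % n; }

static uint32_t mix(uint32_t a, uint32_t b)
{
        a ^= b * 0x9e3779b9u;
        a ^= a >> 16;
        return a * 0x85ebca6bu;
}

/* Destination-based hashing: same src/dst pair, same nexthop, every time. */
static unsigned pick_per_flow(uint32_t saddr, uint32_t daddr, unsigned n)
{
        return mix(saddr, daddr) % n;
}

int main(void)
{
        uint32_t src = 0xc0a80001u, dst = 0xcb007101u; /* 192.168.0.1 -> 203.0.113.1 */

        for (int pkt = 0; pkt < 4; pkt++)
                printf("packet %d: per-packet nexthop %u, per-flow nexthop %u\n",
                       pkt, pick_per_packet(2), pick_per_flow(src, dst, 2));
        return 0;
}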
From: Peter Nørlund
> Sent: 29 September 2015 12:29
...
> As for using L4 hashing with anycast, CloudFlare apparently does L4
> hashing - they could have disabled it, but they didn't. Besides,
> analysis of my own load balancers showed that only one in every
> 500,000,000 packets is fragmented. And
On Mon, 28 Sep 2015 19:55:41 -0700 (PDT)
David Miller wrote:
> From: David Miller
> Date: Mon, 28 Sep 2015 19:33:55 -0700 (PDT)
>
> > From: Peter Nørlund
> > Date: Wed, 23 Sep 2015 21:49:35 +0200
> >
> >> When the routing cache was removed in 3.6, the IPv4 multipath
> >> algorithm changed fro
From: David Miller
Date: Mon, 28 Sep 2015 19:33:55 -0700 (PDT)
> From: Peter Nørlund
> Date: Wed, 23 Sep 2015 21:49:35 +0200
>
>> When the routing cache was removed in 3.6, the IPv4 multipath algorithm changed
>> from more or less being destination-based into being quasi-random per-packet
>
From: Peter Nørlund
Date: Wed, 23 Sep 2015 21:49:35 +0200
> When the routing cache was removed in 3.6, the IPv4 multipath algorithm changed
> from more or less being destination-based into being quasi-random per-packet
> scheduling. This increases the risk of out-of-order packets and makes it
When the routing cache was removed in 3.6, the IPv4 multipath algorithm changed
from more or less being destination-based into being quasi-random per-packet
scheduling. This increases the risk of out-of-order packets and makes it
impossible to use multipath together with anycast services.
This pat
; need to optimize for that case. Albeit, it would be nice if fragments
> of a packet followed the same path, but that would require devices to not do
> an L4 hash over ports when MF is set -- I don't know if anyone does that
> (I have been meaning to add that to the stack).
+1 for solving this at hash
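The fragment point, sketched: if the hash skips ports whenever a packet is a
fragment (MF set or a nonzero offset), first and subsequent fragments hash
identically and follow the same path (toy code with assumed field names):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct flow_keys {
        uint32_t saddr, daddr;
        uint16_t sport, dport;
        bool     is_fragment;   /* MF set or nonzero fragment offset */
};

static uint32_t mix(uint32_t a, uint32_t b)
{
        a ^= b * 0x9e3779b9u;
        a ^= a >> 16;
        return a * 0x85ebca6bu;
}

/* L4 hash that degrades to L3 for fragments: later fragments carry no
 * ports, so ports must be ignored for all fragments to keep them together. */
static uint32_t flow_hash(const struct flow_keys *f, uint32_t seed)
{
        uint32_t h = mix(mix(f->saddr, f->daddr), seed);

        if (!f->is_fragment)
                h = mix(h, ((uint32_t)f->sport << 16) | f->dport);
        return h;
}

int main(void)
{
        struct flow_keys first = { 1, 2, 1000, 80, true };  /* MF set */
        struct flow_keys rest  = { 1, 2, 0, 0, true };      /* no ports */

        printf("first %08x, rest %08x\n", flow_hash(&first, 7), flow_hash(&rest, 7));
        return 0;
}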
On Fri, Aug 28, 2015 at 1:00 PM, wrote:
> From: Peter Nørlund
>
> This patch adds L3 and L4 hash-based multipath routing, selectable on a
> per-route basis with the reintroduced RTA_MP_ALGO attribute. The default is
> now RT_MP_ALG_L3_HASH.
>
> Signed-off-by: Peter Nørlund
On Sun, Aug 30, 2015 at 2:28 PM, Peter Nørlund wrote:
> On Sat, 29 Aug 2015 13:59:08 -0700
> Tom Herbert wrote:
>
>> On Sat, Aug 29, 2015 at 1:46 PM, David Miller
>> wrote:
>> > From: Peter Nørlund
>> > Date: Sat, 29 Aug 2015 22:31:15 +0200
>> >
>> >> On Sat, 29 Aug 2015 13:14:29 -0700 (PDT)
>>
On Sat, 29 Aug 2015 13:59:08 -0700
Tom Herbert wrote:
> On Sat, Aug 29, 2015 at 1:46 PM, David Miller
> wrote:
> > From: Peter Nørlund
> > Date: Sat, 29 Aug 2015 22:31:15 +0200
> >
> >> On Sat, 29 Aug 2015 13:14:29 -0700 (PDT)
> >> David Miller wrote:
> >>
> >>> From: p...@ordbogen.com
> >>> D
On Sat, Aug 29, 2015 at 1:46 PM, David Miller wrote:
> From: Peter Nørlund
> Date: Sat, 29 Aug 2015 22:31:15 +0200
>
>> On Sat, 29 Aug 2015 13:14:29 -0700 (PDT)
>> David Miller wrote:
>>
>>> From: p...@ordbogen.com
>>> Date: Fri, 28 Aug 2015 22:00:47 +0200
>>>
>>> > When the routing cache was re
From: Peter Nørlund
Date: Sat, 29 Aug 2015 22:31:15 +0200
> On Sat, 29 Aug 2015 13:14:29 -0700 (PDT)
> David Miller wrote:
>
>> From: p...@ordbogen.com
>> Date: Fri, 28 Aug 2015 22:00:47 +0200
>>
>> > When the routing cache was removed in 3.6, the IPv4 multipath
>> > algorithm changed from mor
On Sat, 29 Aug 2015 13:14:29 -0700 (PDT)
David Miller wrote:
> From: p...@ordbogen.com
> Date: Fri, 28 Aug 2015 22:00:47 +0200
>
> > When the routing cache was removed in 3.6, the IPv4 multipath
> > algorithm changed from more or less being destination-based into
> > being quasi-random per-packe
From: p...@ordbogen.com
Date: Fri, 28 Aug 2015 22:00:47 +0200
> When the routing cache was removed in 3.6, the IPv4 multipath algorithm changed
> from more or less being destination-based into being quasi-random per-packet
> scheduling. This increases the risk of out-of-order packets and makes
From: Peter Nørlund
This patch adds L3 and L4 hash-based multipath routing, selectable on a
per-route basis with the reintroduced RTA_MP_ALGO attribute. The default is
now RT_MP_ALG_L3_HASH.
Signed-off-by: Peter Nørlund
---
include/net/ip_fib.h | 22 -
include/uapi
ction entirely with L4 hashing
- Handle newly added sysctl ignore_routes_with_linkdown
Best Regards,
Peter Nørlund
Peter Nørlund (3):
ipv4: Lock-less per-packet multipath
ipv4: L3 and L4 hash-based multipath routing
ipv4: ICMP packet inspection for L3 multipath
include/net/ip_
On Thu, 18 Jun 2015 15:52:22 -0700
Alexander Duyck wrote:
>
>
> On 06/17/2015 01:08 PM, Peter Nørlund wrote:
> > This patch adds L3 and L4 hash-based multipath routing, selectable
> > on a per-route basis with the reintroduced RTA_MP_ALGO attribute.
> > The defau
On 06/17/2015 01:08 PM, Peter Nørlund wrote:
This patch adds L3 and L4 hash-based multipath routing, selectable on a
per-route basis with the reintroduced RTA_MP_ALGO attribute. The default is
now RT_MP_ALG_L3_HASH.
Signed-off-by: Peter Nørlund
---
include/net/ip_fib.h | 4
patch is accepted, a follow-up patch to iproute2 will also be
submitted.
Best regards,
Peter Nørlund
Peter Nørlund (3):
ipv4: Lock-less per-packet multipath
ipv4: L3 and L4 hash-based multipath routing
ipv4: ICMP packet inspection for multipath
include/net/ip_fib.h
This patch adds L3 and L4 hash-based multipath routing, selectable on a
per-route basis with the reintroduced RTA_MP_ALGO attribute. The default is
now RT_MP_ALG_L3_HASH.
Signed-off-by: Peter Nørlund
---
include/net/ip_fib.h | 4 ++-
include/net/route.h | 5 ++--
include
> -----Original Message-----
> From: Chuck Ebbert [mailto:[EMAIL PROTECTED]]
>
> Giampaolo Tomassoni wrote:
> >
> > default src 1.2.1.6
> > nexthop via 1.2.2.254 dev atm0 weight 1
> > nexthop via 1.1.2.254 dev atm1 weight 1
> >
>
> When I tri
Giampaolo Tomassoni wrote:
>
> default src 1.2.1.6
> nexthop via 1.2.2.254 dev atm0 weight 1
> nexthop via 1.1.2.254 dev atm1 weight 1
>
When I tried this, I found that weight 1 didn't work -- all traffic
went out one interface until I
Hi there,
this is yet another question about multipath routing.
I would like to do load balancing on traffic going out through two DSL lines.
I would prefer to increase the bandwidth of each connection instead of just
the total one, so I guess I'm looking for a packet-based mult
On 06-03-2007 21:36, Tore Anderson wrote:
>
> Hello list,
>
> I've been trying to figure out how to make equal-cost multipath
> routing work, with no luck. Asked on the LARTC list with no success,
It is probably one of the most frequently asked questions
on the LARTC list, so I
Hello list,
I've been trying to figure out how to make equal-cost multipath
routing work, with no luck. Asked on the LARTC list with no success,
and attempts to contact the two authors privately yielded one bounce
while the other declined to answer in private and pointed me to
From: Eric Dumazet <[EMAIL PROTECTED]>
Date: Sat, 10 Feb 2007 12:11:31 +0100
> [PATCH] : NET : restore multipath routing after rt_next changes
>
> I forgot to test build this part of the networking code... Sorry guys.
> This patch renames u.rt_next to u.dst.rt_next
>
net/ipv4/multipath_random.c:76: error: 'union <anonymous>' has no member named 'rt_next'
net/ipv4/multipath_random.c:76: warning: assignment makes pointer from integer without a cast
net/ipv4/multipath_random.c:92: error: 'union <anonymous>' has no member named 'rt_next'
[PATCH] : NET : restore multipath
v eth1 weight 100 nexthop via
192.168.0.100 dev eth1 weight 100
It doesn't work. It always uses the last one for all download users (around a
few hundred).
Did you enable IP_ROUTE_MULTIPATH_CACHED? If so, please disable it,
since it breaks multipath routing for forwarded traffic.
Hi,
This was