Re: [Bloat] fq_codel on bridge throughput test/config

2019-01-04 Thread Stephen Hemminger
On Fri, 4 Jan 2019 12:33:28 -0800
Dev  wrote:

> Okay, thanks to some help from the list, I’ve configured a transparent bridge 
> running fq_codel which works for multiple subnet traffic. Here’s my setup:
> 
> Machine A ——— 192.168.10.200 — — bridge fq_codel machine B —— laptop C 192.168.10.150
> Machine D ——— 192.168.3.50 — —|
> 
> On Machine A:
> 
> straight gigE interface 192.168.10.200
> 
> Bridge Machine B: enp3s0 mgmt interface
>   enp2s0 bridge interface 1
>   enp1s0 bridge interface 2
>   br0 bridge for 1 and 2
>   
>   # The loopback network interface
>   auto lo br0
>   iface lo inet loopback
> 
>   # The primary network interface
>   allow-hotplug enp3s0
>   iface enp3s0 inet static
>       address 172.16.0.5/24
>       gateway 172.16.0.5
>       dns-nameservers 8.8.8.8
> 
>   iface enp1s0 inet manual
>       tc qdisc add dev enp1s0 root fq_codel
> 
>   iface enp2s0 inet manual
>       tc qdisc add dev enp2s0 root fq_codel
> 
>   # Bridge setup
>   iface br0 inet static
>       bridge_ports enp1s0 enp2s0
>       address 192.168.3.75
>       broadcast 192.168.3.255
>       netmask 255.255.255.0
>       gateway 192.168.3
> 
> note: I still have to run this command later, will troubleshoot at some point 
> (unless you have suggestions to make it work):
> 
> tc qdisc add dev enp1s0 root fq_codel
> 
> To start, my pings from Machine A to Laptop C were around 0.75 msec, then I 
> flooded the link from Machine A to Laptop C using:
> 
> dd if=/dev/urandom | ssh user@192.168.10.150 dd of=/dev/null
> 
> Then my pings went up to around 170 msec. Once I enabled fq_codel on the 
> bridge machine B, my pings dropped to around 10 msec.
> 
> Hope this helps someone else working on a similar setup.
> 
> - Dev
> 

Applying a qdisc to a bridge device only affects locally generated
traffic sent via that bridge (i.e. br0); it has no effect on traffic
transiting through the bridge. And since the bridge pseudo-device is
normally queueless, putting a qdisc on br0 has no impact even on that:
packets transmitted on br0 go directly to the underlying device. So even
if you put a qdisc on br0 it isn't going to do what you expect (unless
you layer some rate control into the stack).
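
Concretely, that means attaching fq_codel to the bridge member ports
rather than to br0. A minimal sketch, using the interface names from the
quoted config:

    # attach fq_codel to the ports that actually queue the packets
    tc qdisc replace dev enp1s0 root fq_codel
    tc qdisc replace dev enp2s0 root fq_codel
    # br0 itself stays "noqueue"; verify the ports with:
    tc -s qdisc show dev enp1s0

"replace" is safe to re-run, so the same lines can be used at boot.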


Re: [Bloat] fq_codel on bridge throughput test/config

2019-01-04 Thread Toke Høiland-Jørgensen
Dev  writes:

> Okay, thanks to some help from the list, I’ve configured a transparent bridge 
> running fq_codel which works for multiple subnet traffic. Here’s my setup:
>
> Machine A ——— 192.168.10.200 — — bridge fq_codel machine B —— laptop C 192.168.10.150
> Machine D ——— 192.168.3.50 — —|
>
> On Machine A:
>
> straight gigE interface 192.168.10.200
>
> Bridge Machine B: enp3s0 mgmt interface
>   enp2s0 bridge interface 1
>   enp1s0 bridge interface 2
>   br0 bridge for 1 and 2
>   
>   # The loopback network interface
>   auto lo br0
>   iface lo inet loopback
>
>   # The primary network interface
>   allow-hotplug enp3s0
>   iface enp3s0 inet static
>       address 172.16.0.5/24
>       gateway 172.16.0.5
>       dns-nameservers 8.8.8.8
>
>   iface enp1s0 inet manual
>       tc qdisc add dev enp1s0 root fq_codel
>
>   iface enp2s0 inet manual
>       tc qdisc add dev enp2s0 root fq_codel
>
>   # Bridge setup
>   iface br0 inet static
>       bridge_ports enp1s0 enp2s0
>       address 192.168.3.75
>       broadcast 192.168.3.255
>       netmask 255.255.255.0
>       gateway 192.168.3
>
> note: I still have to run this command later, will troubleshoot at
> some point (unless you have suggestions to make it work):

You can try setting the default qdisc to fq_codel; put:

net.core.default_qdisc=fq_codel

in /etc/sysctl.conf
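
For example, a minimal sketch (enp1s0 is taken from the quoted config):

    # persist the setting, then load it without rebooting
    echo 'net.core.default_qdisc=fq_codel' >> /etc/sysctl.conf
    sysctl -p
    # newly created devices (and everything after a reboot) then default
    # to fq_codel; an already-running interface can be switched with:
    tc qdisc replace dev enp1s0 root fq_codel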

-Toke


[Bloat] fq_codel on bridge throughput test/config

2019-01-04 Thread Dev
Okay, thanks to some help from the list, I’ve configured a transparent bridge 
running fq_codel which works for multiple subnet traffic. Here’s my setup:

Machine A ——— 192.168.10.200 — — bridge fq_codel machine B —— laptop C 192.168.10.150
Machine D ——— 192.168.3.50 — —|

On Machine A:

straight gigE interface 192.168.10.200

Bridge Machine B: enp3s0  mgmt interface
                  enp2s0  bridge interface 1
                  enp1s0  bridge interface 2
                  br0     bridge for 1 and 2

# The loopback network interface
auto lo br0
iface lo inet loopback

# The primary network interface
allow-hotplug enp3s0
iface enp3s0 inet static
    address 172.16.0.5/24
    gateway 172.16.0.5
    dns-nameservers 8.8.8.8

iface enp1s0 inet manual
    tc qdisc add dev enp1s0 root fq_codel

iface enp2s0 inet manual
    tc qdisc add dev enp2s0 root fq_codel

# Bridge setup
iface br0 inet static
    bridge_ports enp1s0 enp2s0
    address 192.168.3.75
    broadcast 192.168.3.255
    netmask 255.255.255.0
    gateway 192.168.3

note: I still have to run this command later; I'll troubleshoot at some point 
(unless you have suggestions to make it work):

tc qdisc add dev enp1s0 root fq_codel
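
(One thing to try: ifupdown only runs shell commands given through hook
options such as post-up, so a bare tc line in an iface stanza is read as an
interface option rather than executed. A sketch of that change, untested
here:

    iface enp1s0 inet manual
        post-up tc qdisc replace dev enp1s0 root fq_codel

    iface enp2s0 inet manual
        post-up tc qdisc replace dev enp2s0 root fq_codel

If the ports only come up as a side effect of bridge_ports, the same two
post-up lines can instead go in the br0 stanza so they run once the bridge
is up. "replace" avoids an error if a root qdisc already exists.)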

To start, my pings from Machine A to Laptop C were around 0.75 msec. Then I 
flooded the link from Machine A to Laptop C using:

dd if=/dev/urandom | ssh user@192.168.10.150 dd of=/dev/null

Then my pings went up to around 170 msec. Once I enabled fq_codel on the bridge 
machine B, my pings dropped to around 10 msec.
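
For a more systematic measurement, latency under load can be watched while
the flood runs; the flent line is optional and assumes netperf/netserver is
installed on both ends:

    # terminal 1: saturate the path
    dd if=/dev/urandom | ssh user@192.168.10.150 dd of=/dev/null
    # terminal 2: watch latency while the link is loaded
    ping -i 0.2 192.168.10.150
    # or, with flent available, throughput and latency in one plot:
    flent rrul -p all_scaled -H 192.168.10.150 -o rrul.png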

Hope this helps someone else working on a similar setup.

- Dev



Re: [Bloat] fq_codel on bridge multiple subnets?

2019-01-04 Thread Dave Taht
Well, good NICs are best. :) One reason the apu2 series is so
popular is that it uses the extremely good Intel chipset.

On Fri, Jan 4, 2019 at 10:20 AM Dev  wrote:
>
> I want to pass multiple hundreds of Mbps across this bridge very 
> consistently, and across multiple subnets to different enterprise gateways 
> which then connect to the internet. I'll plan a little test to see how this 
> does under load. Hopefully I don’t need special NICs to handle it?
>
> > On Jan 4, 2019, at 1:19 AM, Pete Heist  wrote:
> >
> > It’s a little different for me in that I’m rate limiting on one of the 
> > physical interfaces, but otherwise, your setup should reduce latency under 
> > load when the Ethernet devices are being used at line rate.
> >
> > If your WAN interface is enp8s0 and goes out to the Internet, you may want 
> > to shape there (htb+fq_codel or cake) depending on what upstream device is 
> > in use.
> >
> > If enp7s6 and enp9s2 are only carrying LAN traffic, and not traffic that 
> > goes out to the Internet, fq_codel’s target and interval could be reduced.
> >
> >> On Jan 4, 2019, at 6:22 AM, Dev  wrote:
> >>
> >> Okay, so this is what I have for /etc/network/interfaces (replaced eth0-2 
> >> with what Debian Buster actually calls them):
> >>
> >> auto lo br0
> >> iface lo inet loopback
> >>
> >> allow-hotplug enp8s0
> >> iface enp8s0 inet static
> >>  address 192.168.10.200
> >>  netmask 255.255.255.0
> >>  gateway 192.168.10.1
> >>  dns-nameservers 8.8.8.8
> >>
> >> iface enp7s6 inet manual
> >>  tc qdisc add dev enp7s6 root fq_codel
> >>
> >> iface enp9s2 inet manual
> >>  tc qdisc add dev enp9s2 root fq_codel
> >>
> >> # Bridge setup
> >> iface br0 inet static
> >>  bridge_ports enp7s6 enp9s2
> >>  #bridge_stp on
> >>  address 192.168.3.50
> >>  broadcast 192.168.3.255
> >>  netmask 255.255.255.0
> >>  gateway 192.168.3.1
> >>  dns-nameservers 8.8.8.8
> >>
> >> so my bridge interfaces now show:
> >>
> >>> : tc -s qdisc show dev enp7s6
> >> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 
> >> target 5.0ms interval 100.0ms memory_limit 32Mb ecn
> >> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> >> backlog 0b 0p requeues 0
> >> maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
> >> new_flows_len 0 old_flows_len 0
> >>
> >> and
> >>
> >>> : tc -s qdisc show dev enp9s2
> >> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 
> >> target 5.0ms interval 100.0ms memory_limit 32Mb ecn
> >> Sent 12212 bytes 80 pkt (dropped 0, overlimits 0 requeues 0)
> >> backlog 0b 0p requeues 0
> >> maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
> >> new_flows_len 0 old_flows_len 0
> >>
> >> with my bridge like:
> >>
> >> ip a
> >>
> >> 5: br0:  mtu 1500 qdisc noqueue state UP 
> >> group default qlen 1000
> >>   link/ether 00:04:5a:86:a2:84 brd ff:ff:ff:ff:ff:ff
> >>   inet 192.168.3.50/24 brd 192.168.3.255 scope global br0
> >>  valid_lft forever preferred_lft forever
> >>   inet6 fe80::204:5aff:fe86:a284/64 scope link
> >>  valid_lft forever preferred_lft forever
> >>
> >> So do I have it configured right or should I change something? I haven’t 
> >> gotten a chance to stress test it yet, but will try tomorrow.
> >>
> >> - Dev
> >>
> >>> On Jan 3, 2019, at 10:54 AM, Pete Heist  wrote:
> >>>
> >>>
>  On Jan 3, 2019, at 7:12 PM, Toke Høiland-Jørgensen  wrote:
> 
>  Dev  writes:
> 
> > I’m trying to create a bridge on eth1 and eth2, with a management
> > interface on eth0, then enable fq_codel on the bridge. My bridge
> > interface looks like:
> 
>  You'll probably want to put FQ-CoDel on the underlying physical
>  interfaces, as those are the ones actually queueing the traffic...
> >>>
> >>> I can confirm that. I'm currently using a bridge on my home router. eth3 
> >>> and eth4 are bridged, eth4 is connected to the CPE device which goes out 
> >>> to the Internet, eth4 is where queue management is applied, and this 
> >>> works. It does not work to add this to br0…
> >>>
> >>
> >
>



-- 

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740


Re: [Bloat] fq_codel on bridge multiple subnets?

2019-01-04 Thread Dev
I want to pass multiple hundreds of Mbps across this bridge very consistently, 
and across multiple subnets to different enterprise gateways which then connect 
to the internet. I'll plan a little test to see how this does under load. 
Hopefully I don’t need special NICs to handle it?

> On Jan 4, 2019, at 1:19 AM, Pete Heist  wrote:
> 
> It’s a little different for me in that I’m rate limiting on one of the 
> physical interfaces, but otherwise, your setup should reduce latency under 
> load when the Ethernet devices are being used at line rate.
> 
> If your WAN interface is enp8s0 and goes out to the Internet, you may want to 
> shape there (htb+fq_codel or cake) depending on what upstream device is in 
> use.
> 
> If enp7s6 and enp9s2 are only carrying LAN traffic, and not traffic that goes 
> out to the Internet, fq_codel’s target and interval could be reduced.
> 
>> On Jan 4, 2019, at 6:22 AM, Dev  wrote:
>> 
>> Okay, so this is what I have for /etc/network/interfaces (replaced eth0-2 
>> with what Debian Buster actually calls them):
>> 
>> auto lo br0
>> iface lo inet loopback
>> 
>> allow-hotplug enp8s0
>> iface enp8s0 inet static
>>  address 192.168.10.200
>>  netmask 255.255.255.0
>>  gateway 192.168.10.1
>>  dns-nameservers 8.8.8.8
>> 
>> iface enp7s6 inet manual
>>  tc qdisc add dev enp7s6 root fq_codel
>> 
>> iface enp9s2 inet manual
>>  tc qdisc add dev enp9s2 root fq_codel
>> 
>> # Bridge setup
>> iface br0 inet static
>>  bridge_ports enp7s6 enp9s2
>>  #bridge_stp on
>>  address 192.168.3.50
>>  broadcast 192.168.3.255
>>  netmask 255.255.255.0
>>  gateway 192.168.3.1
>>  dns-nameservers 8.8.8.8
>> 
>> so my bridge interfaces now show:
>> 
>>> : tc -s qdisc show dev enp7s6
>> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 
>> 5.0ms interval 100.0ms memory_limit 32Mb ecn
>> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>> backlog 0b 0p requeues 0
>> maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>> new_flows_len 0 old_flows_len 0
>> 
>> and 
>> 
>>> : tc -s qdisc show dev enp9s2
>> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 
>> 5.0ms interval 100.0ms memory_limit 32Mb ecn
>> Sent 12212 bytes 80 pkt (dropped 0, overlimits 0 requeues 0)
>> backlog 0b 0p requeues 0
>> maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>> new_flows_len 0 old_flows_len 0
>> 
>> with my bridge like:
>> 
>> ip a 
>> 
>> 5: br0:  mtu 1500 qdisc noqueue state UP 
>> group default qlen 1000
>>   link/ether 00:04:5a:86:a2:84 brd ff:ff:ff:ff:ff:ff
>>   inet 192.168.3.50/24 brd 192.168.3.255 scope global br0
>>  valid_lft forever preferred_lft forever
>>   inet6 fe80::204:5aff:fe86:a284/64 scope link
>>  valid_lft forever preferred_lft forever
>> 
>> So do I have it configured right or should I change something? I haven’t 
>> gotten a chance to stress test it yet, but will try tomorrow.
>> 
>> - Dev
>> 
>>> On Jan 3, 2019, at 10:54 AM, Pete Heist  wrote:
>>> 
>>> 
 On Jan 3, 2019, at 7:12 PM, Toke Høiland-Jørgensen  wrote:
 
 Dev  writes:
 
> I’m trying to create a bridge on eth1 and eth2, with a management
> interface on eth0, then enable fq_codel on the bridge. My bridge
> interface looks like:
 
 You'll probably want to put FQ-CoDel on the underlying physical
 interfaces, as those are the ones actually queueing the traffic...
>>> 
>>> I can confirm that. I'm currently using a bridge on my home router. eth3 
>>> and eth4 are bridged, eth4 is connected to the CPE device which goes out to 
>>> the Internet, eth4 is where queue management is applied, and this works. It 
>>> does not work to add this to br0…
>>> 
>> 
> 



Re: [Bloat] Does VDSL interleaving+FEC help bufferbloat?

2019-01-04 Thread Jan Ceuleers
On 04/01/2019 18:10, Dave Taht wrote:
> dsl interleave was added primarily to make multicast udp tv streams
> work better (as they are very intolerant of packet loss). Often (as in
> free's implementation) these streams are "invisible" to the overlying
> IP applications. It typically adds at least 6ms of delay to an already
> slow technology.
> 
> Do measure unloaded latency with and without it on, it's been a long
> time since I fiddled with it

Interleaving isn't a parameter that mere end-users* can influence - it
is part of the line's profile, and that's determined by the network
operator.

*: you know, the people who pay the bills...


Re: [Bloat] Does VDSL interleaving+FEC help bufferbloat?

2019-01-04 Thread Mikael Abrahamsson

On Fri, 4 Jan 2019, Dave Taht wrote:

> dsl interleave was added primarily to make multicast udp tv streams work 
> better (as they are very intolerant of packet loss). Often (as in free's 
> implementation) these streams are "invisible" to the overlying IP 
> applications. It typically adds at least 6ms of delay to an already slow 
> technology.


ADSL2+ is very prone to short bursts of interference, so setting no 
interleaving means quite high packet loss. Setting interleaving to 16 ms 
gives FEC a much better chance of correcting errors and thus reduces 
packet loss.

Several jobs ago we actually had different profiles for customers: they 
could choose 1, 4 or 16 ms interleaving depending on their needs for 
gaming etc. The 1 and 4 ms profiles had different SNR margin targets, so 
those customers were sacrificing speed for lower latency, because that's 
the tradeoff you basically have to make with the normal L4 protocols that 
end customers typically use.


--
Mikael Abrahamsson    email: swm...@swm.pp.se


Re: [Bloat] Does VDSL interleaving+FEC help bufferbloat?

2019-01-04 Thread Dave Taht
DSL interleaving was added primarily to make multicast UDP TV streams
work better (as they are very intolerant of packet loss). Often (as in
Free's implementation) these streams are "invisible" to the overlying
IP applications. It typically adds at least 6ms of delay to an already
slow technology.

Do measure unloaded latency with and without it on; it's been a long
time since I fiddled with it.


Re: [Bloat] Does VDSL interleaving+FEC help bufferbloat?

2019-01-04 Thread Simon Barber
Increasing fixed latency can help avoid TCP hitting RTO.

Simon


> On Jan 3, 2019, at 8:01 PM, Jonathan Morton  wrote:
> 



Re: [Bloat] fq_codel on bridge multiple subnets?

2019-01-04 Thread Pete Heist
It’s a little different for me in that I’m rate limiting on one of the physical 
interfaces, but otherwise, your setup should reduce latency under load when the 
Ethernet devices are being used at line rate.

If your WAN interface is enp8s0 and goes out to the Internet, you may want to 
shape there (htb+fq_codel or cake) depending on what upstream device is in use.

If enp7s6 and enp9s2 are only carrying LAN traffic, and not traffic that goes 
out to the Internet, fq_codel’s target and interval could be reduced.
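
A rough sketch of both suggestions (the 90mbit rate and the 1ms/20ms values
are only placeholders to be tuned to the actual link):

    # shape the WAN side to just below the upstream rate, e.g. with cake:
    tc qdisc replace dev enp8s0 root cake bandwidth 90mbit
    # or with the classic htb + fq_codel pair:
    tc qdisc replace dev enp8s0 root handle 1: htb default 10
    tc class add dev enp8s0 parent 1: classid 1:10 htb rate 90mbit
    tc qdisc add dev enp8s0 parent 1:10 fq_codel
    # LAN-only ports can run fq_codel with a tighter target/interval:
    tc qdisc replace dev enp7s6 root fq_codel target 1ms interval 20ms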

> On Jan 4, 2019, at 6:22 AM, Dev  wrote:
> 
> Okay, so this is what I have for /etc/network/interfaces (replaced eth0-2 
> with what Debian Buster actually calls them):
> 
> auto lo br0
> iface lo inet loopback
> 
> allow-hotplug enp8s0
> iface enp8s0 inet static
>   address 192.168.10.200
>   netmask 255.255.255.0
>   gateway 192.168.10.1
>   dns-nameservers 8.8.8.8
> 
> iface enp7s6 inet manual
>   tc qdisc add dev enp7s6 root fq_codel
> 
> iface enp9s2 inet manual
>   tc qdisc add dev enp9s2 root fq_codel
> 
> # Bridge setup
> iface br0 inet static
>   bridge_ports enp7s6 enp9s2
>   #bridge_stp on
>   address 192.168.3.50
>   broadcast 192.168.3.255
>   netmask 255.255.255.0
>   gateway 192.168.3.1
>   dns-nameservers 8.8.8.8
> 
> so my bridge interfaces now show:
> 
>> : tc -s qdisc show dev enp7s6
> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 
> 5.0ms interval 100.0ms memory_limit 32Mb ecn
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>  new_flows_len 0 old_flows_len 0
> 
> and 
> 
>> : tc -s qdisc show dev enp9s2
> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 
> 5.0ms interval 100.0ms memory_limit 32Mb ecn
> Sent 12212 bytes 80 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>  new_flows_len 0 old_flows_len 0
> 
> with my bridge like:
> 
> ip a 
> 
> 5: br0:  mtu 1500 qdisc noqueue state UP 
> group default qlen 1000
>link/ether 00:04:5a:86:a2:84 brd ff:ff:ff:ff:ff:ff
>inet 192.168.3.50/24 brd 192.168.3.255 scope global br0
>   valid_lft forever preferred_lft forever
>inet6 fe80::204:5aff:fe86:a284/64 scope link
>   valid_lft forever preferred_lft forever
> 
> So do I have it configured right or should I change something? I haven’t 
> gotten a chance to stress test it yet, but will try tomorrow.
> 
> - Dev
> 
>> On Jan 3, 2019, at 10:54 AM, Pete Heist  wrote:
>> 
>> 
>>> On Jan 3, 2019, at 7:12 PM, Toke Høiland-Jørgensen  wrote:
>>> 
>>> Dev  writes:
>>> 
 I’m trying to create a bridge on eth1 and eth2, with a management
 interface on eth0, then enable fq_codel on the bridge. My bridge
 interface looks like:
>>> 
>>> You'll probably want to put FQ-CoDel on the underlying physical
>>> interfaces, as those are the ones actually queueing the traffic...
>> 
>> I can confirm that. I'm currently using a bridge on my home router. eth3 and 
>> eth4 are bridged, eth4 is connected to the CPE device which goes out to the 
>> Internet, eth4 is where queue management is applied, and this works. It does 
>> not work to add this to br0…
>> 
> 
