I want to pass several hundred Mbps across this bridge very consistently, 
and across multiple subnets to different enterprise gateways that then 
connect to the Internet. I'll plan a little test to see how this does under 
load. Hopefully I don't need special NICs to handle it?
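
Roughly the test I have in mind (addresses and host placement hypothetical):

  # on a host behind one bridge port, run an iperf3 server
  iperf3 -s

  # from a host behind the other port: 4 parallel TCP streams for 60 seconds
  iperf3 -c 192.168.3.60 -P 4 -t 60

  # in a second terminal, watch latency under load
  ping 192.168.3.60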

> On Jan 4, 2019, at 1:19 AM, Pete Heist <p...@heistp.net> wrote:
> 
> It’s a little different for me in that I’m rate limiting on one of the 
> physical interfaces, but otherwise, your setup should reduce latency under 
> load when the Ethernet devices are being used at line rate.
> 
> If your WAN interface is enp8s0 and goes out to the Internet, you may want to 
> shape there (htb+fq_codel or cake) depending on what upstream device is in 
> use.
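> 
> For example, with cake (the 95mbit figure is only a placeholder; set it a 
> little below your measured upstream rate):
> 
>   tc qdisc replace dev enp8s0 root cake bandwidth 95mbit
> 
> or the htb+fq_codel equivalent:
> 
>   tc qdisc replace dev enp8s0 root handle 1: htb default 1
>   tc class add dev enp8s0 parent 1: classid 1:1 htb rate 95mbit ceil 95mbit
>   tc qdisc add dev enp8s0 parent 1:1 fq_codel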
> 
> If enp7s6 and enp9s2 are only carrying LAN traffic, and not traffic that goes 
> out to the Internet, fq_codel’s target and interval could be reduced.
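> 
> For example (values assumed for a short-RTT LAN path, tune to taste):
> 
>   tc qdisc replace dev enp7s6 root fq_codel target 500us interval 10ms
>   tc qdisc replace dev enp9s2 root fq_codel target 500us interval 10ms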
> 
>> On Jan 4, 2019, at 6:22 AM, Dev <d...@logicalwebhost.com> wrote:
>> 
>> Okay, so this is what I have for /etc/network/interfaces (replaced eth0-2 
>> with what Debian Buster actually calls them):
>> 
>> auto lo br0
>> iface lo inet loopback
>> 
>> allow-hotplug enp8s0
>> iface enp8s0 inet static
>>      address 192.168.10.200
>>      netmask 255.255.255.0
>>      gateway 192.168.10.1
>>      dns-nameservers 8.8.8.8
>> 
>> iface enp7s6 inet manual
>>      # install fq_codel once the interface is up
>>      post-up tc qdisc replace dev enp7s6 root fq_codel
>> 
>> iface enp9s2 inet manual
>>      post-up tc qdisc replace dev enp9s2 root fq_codel
>> 
>> # Bridge setup
>> iface br0 inet static
>>      bridge_ports enp7s6 enp9s2
>>      #bridge_stp on
>>      address 192.168.3.50
>>      broadcast 192.168.3.255
>>      netmask 255.255.255.0
>>      gateway 192.168.3.1
>>      dns-nameservers 8.8.8.8
>> 
>> so my bridge interfaces now show:
>> 
>>> : tc -s qdisc show dev enp7s6
>> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 
>> 5.0ms interval 100.0ms memory_limit 32Mb ecn
>> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>> backlog 0b 0p requeues 0
>> maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>> new_flows_len 0 old_flows_len 0
>> 
>> and 
>> 
>>> : tc -s qdisc show dev enp9s2
>> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 
>> 5.0ms interval 100.0ms memory_limit 32Mb ecn
>> Sent 12212 bytes 80 pkt (dropped 0, overlimits 0 requeues 0)
>> backlog 0b 0p requeues 0
>> maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>> new_flows_len 0 old_flows_len 0
>> 
>> with my bridge like:
>> 
>> ip a 
>> 
>> 5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
>> group default qlen 1000
>>   link/ether 00:04:5a:86:a2:84 brd ff:ff:ff:ff:ff:ff
>>   inet 192.168.3.50/24 brd 192.168.3.255 scope global br0
>>      valid_lft forever preferred_lft forever
>>   inet6 fe80::204:5aff:fe86:a284/64 scope link
>>      valid_lft forever preferred_lft forever
>> 
>> So do I have it configured right or should I change something? I haven’t 
>> gotten a chance to stress test it yet, but will try tomorrow.
>> 
>> - Dev
>> 
>>> On Jan 3, 2019, at 10:54 AM, Pete Heist <p...@heistp.net> wrote:
>>> 
>>> 
>>>> On Jan 3, 2019, at 7:12 PM, Toke Høiland-Jørgensen <t...@toke.dk> wrote:
>>>> 
>>>> Dev <d...@logicalwebhost.com> writes:
>>>> 
>>>>> I’m trying to create a bridge on eth1 and eth2, with a management
>>>>> interface on eth0, then enable fq_codel on the bridge. My bridge
>>>>> interface looks like:
>>>> 
>>>> You'll probably want to put FQ-CoDel on the underlying physical
>>>> interfaces, as those are the ones actually queueing the traffic...
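>>>> 
>>>> e.g. (using the interface names from your message):
>>>> 
>>>>   tc qdisc replace dev eth1 root fq_codel
>>>>   tc qdisc replace dev eth2 root fq_codel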
>>> 
>>> I can confirm that. I'm currently using a bridge on my home router: eth3 
>>> and eth4 are bridged; eth4 is connected to the CPE device that goes out to 
>>> the Internet, and eth4 is where queue management is applied. This works. 
>>> It does not work to add this to br0…
>>> 
>> 
> 
