Hi, there is no bug; the 'pipe profile' code is working correctly. In your
mail below you are comparing two different things.
"pipe config bw 10Mbit/s delay 25ms" means that _after shaping_ at 10Mbps, all traffic will be subject to an additional delay of 25ms. Each packet (1470 bytes) will take Length/Bandwidth sec to come out or 1470*8/10M = 1.176ms , but you won't see them until you wait another 25ms (7500km at the speed of light). "pipe config bw 10Mbit/s profile "test" ..." means that in addition to the Length/Bandwidth, _each packet transmission_ will consume some additional air-time as specified in the profile (25ms in your case) So, in your case with 1470bytes/pkt each transmission will take len/bw (1.176ms) + 25ms (extra air time) = 26.76ms That is 25 times more than the previous case and explains the reduced bandwidth you see. The 'delay profile' is effectively extra air time used for each transmission. The name is probably confusing, i should have called it 'extra-time' or 'overhead' and not 'delay' cheers luigi On Tue, Nov 24, 2009 at 12:40:31PM -0500, Charles Henri de Boysson wrote: > Hi, > > I have a simple setup with two computer connected via a FreeBSD bridge > running 8.0 RELEASE. > I am trying to use dummynet to simulate a wireless network between the > two and for that I wanted to use the pipe profile feature of FreeBSD > 8.0. > But as I was experimenting with the pipe profile feature I ran into some > issues. > > I have setup ipfw to send traffic coming for either interface of the > bridge to a respective pipe as follow: > > # ipfw show > 00100 ?? ?? 0 ?? ?? ?? ?? 0 allow ip from any to any via lo0 > 00200 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from any to 127.0.0.0/8 > 00300 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from 127.0.0.0/8 to any > 01000 ?? ?? 0 ?? ?? ?? ?? 0 pipe 1 ip from any to any via vr0 layer2 > 01100 ?? ?? 0 ?? ?? ?? ?? 0 pipe 101 ip from any to any via vr4 layer2 > 65000 ??7089 ?? ??716987 allow ip from any to any > 65535 ?? ?? 0 ?? ?? ?? ?? 0 deny ip from any to any > > When I setup my pipes as follow: > > # ipfw pipe 1 config bw 10Mbit delay 25 mask proto 0 > # ipfw pipe 101 config bw 10Mbit delay 25 mask proto 0 > # ipfw pipe show > > 00001: ??10.000 Mbit/s ?? 25 ms ?? 50 sl. 0 queues (1 buckets) droptail > burst: 0 Byte > 00101: ??10.000 Mbit/s ?? 25 ms ?? 50 sl. 0 queues (1 buckets) droptail > burst: 0 Byte > > With this setup, when I try to pass traffic through the bridge with > iperf, I obtain the desired speed: iperf reports about 9.7Mbits/sec in > UDP mode and 9.5 in TCP mode (I copied and pasted the iperf runs at > the end of this email). > > The problem arise when I setup pipe 1 (the downlink) with an > equivalent profile (I tried to simplify it as much as possible). > > # ipfw pipe 1 config profile test.pipeconf mask proto 0 > # ipfw pipe show > 00001: 10.000 Mbit/s 0 ms 50 sl. 0 queues (1 buckets) droptail > burst: 0 Byte > profile: name "test" loss 1.000000 samples 2 > 00101: 10.000 Mbit/s 25 ms 50 sl. 0 queues (1 buckets) droptail > burst: 0 Byte > > # cat test.pipeconf > name test > bw 10Mbit > loss-level 1.0 > samples 2 > prob delay > 0.0 25 > 1.0 25 > > The same iperf TCP tests then collapse to about 500Kbit/s with the > same settings (copy and pasted the output of iperf bellow) > > I can't figure out what is going on. There is no visible load on the bridge. > I have an unmodified GENERIC kernel with the following sysctl. 
>
> net.link.bridge.ipfw: 1
> kern.hz: 1000
>
> The bridge configuration is as follows:
>
> bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
>         ether 1a:1f:2e:42:74:8d
>         id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
>         maxage 20 holdcnt 6 proto rstp maxaddr 100 timeout 1200
>         root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
>         member: vr4 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
>                 ifmaxaddr 0 port 6 priority 128 path cost 200000
>         member: vr0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
>                 ifmaxaddr 0 port 2 priority 128 path cost 200000
>
>
> iperf runs without the profile set:
> % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15
> ------------------------------------------------------------
> Client connecting to 10.0.0.254, TCP port 5001
> Binding to local address 10.1.0.1
> TCP window size: 16.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-15.0 sec  17.0 MBytes  9.49 Mbits/sec
>
> % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit
> ------------------------------------------------------------
> Client connecting to 10.0.0.254, UDP port 5001
> Binding to local address 10.1.0.1
> Sending 1470 byte datagrams
> UDP buffer size: 110 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-15.0 sec  18.8 MBytes  10.5 Mbits/sec
> [  3] Sent 13382 datagrams
> [  3] Server Report:
> [  3]  0.0-15.1 sec  17.4 MBytes  9.72 Mbits/sec  0.822 ms  934/13381 (7%)
> [  3]  0.0-15.1 sec  1 datagrams received out-of-order
>
>
> iperf runs with the profile set:
> % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15
> ------------------------------------------------------------
> Client connecting to 10.0.0.254, TCP port 5001
> Binding to local address 10.1.0.1
> TCP window size: 16.0 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-15.7 sec  968 KBytes  505 Kbits/sec
>
> % iperf -B 10.1.0.1 -c 10.0.0.254 -t 15 -u -b 10Mbit
> ------------------------------------------------------------
> Client connecting to 10.0.0.254, UDP port 5001
> Binding to local address 10.1.0.1
> Sending 1470 byte datagrams
> UDP buffer size: 110 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.1.0.1 port 5001 connected with 10.0.0.254 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  3]  0.0-15.0 sec  18.8 MBytes  10.5 Mbits/sec
> [  3] Sent 13382 datagrams
> [  3] Server Report:
> [  3]  0.0-16.3 sec  893 KBytes  449 Kbits/sec  1.810 ms 12757/13379 (95%)
>
>
> Let me know what other information you would need to help me debug this.
> In advance, thank you for your help.
>
> --
> Charles-Henri de Boysson
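
P.S. To put the two configurations side by side, here is a small worked
sketch using nothing but the numbers above (1470-byte iperf datagrams,
10 Mbit/s pipe):

    # ipfw pipe 1 config bw 10Mbit delay 25
        per-packet transmission time = 1470*8 bits / 10 Mbit/s      = 1.176 ms
        throughput                   = 1470*8 bits / 1.176 ms      ~= 10 Mbit/s
        (every packet is simply delivered 25 ms after it leaves the shaper)

    # ipfw pipe 1 config profile test.pipeconf
        per-packet transmission time = 1.176 ms + 25 ms air-time    = 26.176 ms
        throughput                   = 1470*8 bits / 26.176 ms     ~= 449 Kbit/s
        (the 25 ms is charged to every single transmission)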
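
Purely as an illustration of the mechanism (the file name and the 1 ms
figure below are made up, not a recommendation for your setup): if the
profile carries a small per-transmission overhead instead of 25 ms, in
the same format as your test.pipeconf, the throughput loss becomes much
smaller, while the pure propagation part of the link is better expressed
with the plain 'delay' option, as in your first configuration.

    # cat overhead1ms.pipeconf        (hypothetical file)
    name overhead1ms
    bw 10Mbit
    loss-level 1.0
    samples 2
    prob delay
    0.0 1
    1.0 1

    # ipfw pipe 1 config profile overhead1ms.pipeconf mask proto 0

With that, each 1470-byte packet takes 1.176 ms + 1 ms = 2.176 ms, i.e.
roughly 5.4 Mbit/s.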