On Tue, Jan 5, 2010 at 2:32 PM, Henning Brauer <lists-open...@bsws.de> wrote:
> I really like the 275 -> 420MBit/s change for 4.6 -> current with pf.
>

Update: this time both machines are running -current. I think my initial
tcpbench results were poor because I was running cbq queuing on 4.6. The
server has an em NIC, the client an msk. Jumbo frames (MTU 9000) are set
on both, but they don't make much difference. This is with a $20 D-Link
switch.
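
For reference, the jumbo frames and the benchmark itself were set up the
usual way, something like the following (interface names here are only
examples for the em and msk drivers):

  # server (em side)
  ifconfig em0 mtu 9000
  tcpbench -s

  # client (msk side)
  ifconfig msk0 mtu 9000
  tcpbench <server-address>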

tcpbench results:

pf disabled on both machines: 883 Mb/s
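
Disabling and enabling pf between runs is just the usual pfctl toggling,
roughly:

  pfctl -d                  # disable pf
  pfctl -e -f /etc/pf.conf  # enable pf and reload the ruleset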

pf enabled on the tcpbench server only - a simple ruleset like the
documentation example: 619 Mb/s
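
For the curious, a ruleset in that spirit looks roughly like this (just an
illustration, not the exact file; the interface is a placeholder and 12345
should be tcpbench's default port):

  set skip on lo
  block in
  pass out keep state
  # allow ssh and the benchmark in on the internal interface
  pass in on em0 proto tcp to port { ssh, 12345 } keep state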

pf enabled on both machines - the tcpbench client box has the stock
pf.conf from a -current default install: 585 Mb/s

pf enabled on just the tcpbench server, with cbq queuing enabled on
the internal interface as follows (for tcpbench only, not for real
network use) - no other queues are defined on $int_if:

  altq on $int_if cbq bandwidth 1Gb queue { std_in, ssh_im_in, dns_in  }
  queue std_in    bandwidth 999.9Mb cbq(default,borrow)

401 Mb/s

Why is that? Is it cbq code overhead? Does the machine not have enough
CPU? Or am I missing something? Admittedly it's an old P4.
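
If it's CPU, I'd expect the box to be pegged in interrupt/system time
during the run; that's easy enough to watch with something like:

  # watch CPU and interrupt load while tcpbench runs
  systat vmstat 1
  top -S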

After benching for a while, the throughput drops to 587 Mb/s even with
pf disabled on both machines. The only way to bring it back up to
883 Mb/s is to reboot the tcpbench client. Does anyone know why?
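
In case it helps anyone guess, the obvious things to compare on the client
before and after the drop would be something like:

  netstat -m           # mbuf pool usage
  netstat -s -p tcp    # retransmits / TCP counters
  ifconfig msk0        # make sure the MTU and media didn't change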

Thanks!
