Chris, don't forget to mention that they are simplifying the buffer cache (and
bigmem!) so that when the switch to rthreads is attempted, there will be far
fewer hassles than FreeBSD or NetBSD had; their equivalent work took 2-5 years
to get right. Read Matt Dillon's interview linked from Wikipedia, in
particular the section on the buffer cache:

http://kerneltrap.org/node/8

Linux and the other BSDs, with so much commercial support (not Dfly!), only
recently got rid of the Big Giant Lock, so OpenBSD is not that far behind.
Stick with OpenBSD and see how 'fast' it continues to run.

Good luck.

On Mon, 18 Apr 2011, Chris Cappuccio wrote:

> Rodrigo Mosconi [open...@mosconi.mat.br] wrote:
> > Hi all,
> > 
> > I'm interested in some benchmarks, especially with network/PF.
> > 
> 
> How about this... With a GENERIC -current amd64 kernel, I'm getting almost 
> 800Mbps on a single FTP transfer between two 1Gbit-connected boxes with em 
> controllers and mfi RAID backed by 6xSATA on each box.  This is with boxes 
> that are already busy with day-to-day activity.  The bottleneck has moved 
> from the networking code to the mfi controller and its associated disk 
> activity, which is nice to see, I think.
> 
> Removing NIC driver interrupt loops and IPL_BIO in ppb was a "big win".....
> 
> Transfers are a lot slower with my mpi two-disk RAID 1 boxes, but with fewer 
> hard disks the storage is simply slower than 1Gbps ethernet.  Need to try 
> with mfs next.
> 
> It "pays" to do it right, MCLGETI without loops in x_intr is proving to be a 
> much better idea than what FreeBSD did with the polling hacks.
> 
> I wonder what kind of packets-per-second limitations people see now with bge, 
> em, bnx, ix, vr, and the other common drivers, with and without pf enabled.  
> pf should be faster now that it doesn't recalculate IP checksums mid-stream!
> 
> -- 
> the preceding comment is my own and in no way reflects the opinion of the 
> Joint Chiefs of Staff
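
For anyone who hasn't followed the driver work, roughly what Chris means by
loops in x_intr and by MCLGETI: the old handlers kept re-reading the chip's
interrupt status register until it went quiet, so a packet flood could pin
the box at interrupt level, while MCLGETI caps how many rx clusters each
interface may hold, so a flooded NIC runs out of fresh buffers and drops on
the ring instead of livelocking the machine.  A purely illustrative sketch
of the two shapes (not actual OpenBSD driver code; every name in it is made
up):

/*
 * Illustrative sketch only; not OpenBSD driver code.  Every type and
 * function here (fake_softc, hw_read_isr, alloc_cluster_capped, ...) is
 * a made-up stand-in.
 */
#include <stddef.h>

struct mbuf;                            /* stand-in for the real mbuf */

struct fake_softc {
        int     rx_ring_free;           /* empty rx descriptor slots */
};

extern unsigned int hw_read_isr(struct fake_softc *);  /* read+ack status */
extern void         hw_process_rx(struct fake_softc *);
extern void         hw_process_tx(struct fake_softc *);
extern struct mbuf *alloc_cluster_capped(struct fake_softc *); /* MCLGETI-ish */
extern void         hw_post_rx_buffer(struct fake_softc *, struct mbuf *);

/*
 * Old shape: keep looping as long as the chip reports more work.  Under a
 * packet flood the loop never goes idle and the box livelocks at
 * interrupt level.
 */
int
old_intr(void *arg)
{
        struct fake_softc *sc = arg;
        int claimed = 0;

        while (hw_read_isr(sc) != 0) {
                claimed = 1;
                hw_process_rx(sc);
                hw_process_tx(sc);
        }
        return (claimed);
}

/*
 * New shape: one pass per interrupt, and rx refill goes through an
 * allocator that enforces a per-interface cluster budget (the idea behind
 * MCLGETI).  A flooded interface simply stops getting fresh buffers and
 * drops on the ring instead of starving the rest of the system.
 */
int
new_intr(void *arg)
{
        struct fake_softc *sc = arg;

        if (hw_read_isr(sc) == 0)
                return (0);

        hw_process_rx(sc);
        hw_process_tx(sc);

        while (sc->rx_ring_free > 0) {
                struct mbuf *m = alloc_cluster_capped(sc);
                if (m == NULL)          /* budget exhausted: back off */
                        break;
                hw_post_rx_buffer(sc, m);
                sc->rx_ring_free--;
        }
        return (1);
}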
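
On the pps question, one quick way to eyeball it (besides netstat or systat)
is to sample the interface counters yourself and diff them once a second.
A minimal sketch, assuming the usual BSD getifaddrs()/if_data interface; the
interface name on the command line is just an example.  Run it with pf
enabled and again after pfctl -d, then compare:

/*
 * Rough packets-per-second sampler using the BSD getifaddrs()/if_data
 * counters.  Usage: ./pps em0
 */
#include <sys/types.h>
#include <sys/socket.h>
#include <net/if.h>

#include <err.h>
#include <ifaddrs.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int
sample(const char *name, uint64_t *ipkts, uint64_t *opkts)
{
        struct ifaddrs *ifap, *ifa;
        int found = 0;

        if (getifaddrs(&ifap) == -1)
                err(1, "getifaddrs");

        for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
                if (strcmp(ifa->ifa_name, name) != 0)
                        continue;
                if (ifa->ifa_addr == NULL ||
                    ifa->ifa_addr->sa_family != AF_LINK)
                        continue;
                struct if_data *ifd = ifa->ifa_data;
                *ipkts = ifd->ifi_ipackets;     /* packets received */
                *opkts = ifd->ifi_opackets;     /* packets sent */
                found = 1;
                break;
        }
        freeifaddrs(ifap);
        return (found);
}

int
main(int argc, char *argv[])
{
        uint64_t in0, out0, in1, out1;

        if (argc != 2)
                errx(1, "usage: pps interface");

        if (!sample(argv[1], &in0, &out0))
                errx(1, "no such interface: %s", argv[1]);

        for (;;) {
                sleep(1);
                if (!sample(argv[1], &in1, &out1))
                        errx(1, "interface went away");
                printf("%s: %llu pps in, %llu pps out\n", argv[1],
                    (unsigned long long)(in1 - in0),
                    (unsigned long long)(out1 - out0));
                in0 = in1;
                out0 = out1;
        }
}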
