On Feb 26, 2004, at 4:57 PM, Michael Conlen wrote:
[ ... ]
The production system will use dual channel U320 RAID controllers with 12 disks per channel, so disk shouldn't be an issue, and it will connect with GigE, so network is plenty fine; now I'm on to CPU.

Sounds like you've gotten nice hardware. Four or so years ago, I built out a roughly comparable fileserver [modulo the progress in technology since then] on a Sun E450, which housed 10 SCA-form-factor disks over 5 UW SCSI channels (using a 64-bit PCI bus and backplane, though), and could have held a total of 20 disks if I'd filled it. I mention this because...


Low volume tests with live data indicate low CPU usage; however, when I best fit the graph it's difficult to tell how linear (or non-linear) the data is. [ ... ] Does that kind of curve look accurate to you (anyone)?

...even under stress testing on the faster four-disk RAID-10 volume using Seagate ST336752LC drives (15K RPM, 8MB cache), each on a separate channel, with ~35 client machines bashing away, the fileserver would bottleneck on disk I/O without more than maybe 10% or 15% CPU load, and that was using a 400MHz CPU.


The notion that an NFS fileserver is going to end up CPU-bound simply doesn't match my experience or my expectations. If you have single-threaded sequential I/O patterns (like running dd, or maybe a database), you'll bottleneck on the interface or on maximum disk throughput. Otherwise, even with ~3.5 ms seek times, multi-threaded I/O from a bunch of clients will move the disk heads around so much that you bottleneck at a certain number of I/O operations per second per disk, rather than at a given bandwidth per disk.
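
To put very rough numbers on that (illustrative assumptions only: the 3.5 ms seek is the figure above, the 15K RPM spindle matches the drives I mentioned, and the 8 KB transfer size is just a guess at a typical NFS op), here's a back-of-envelope sketch in Python:

    # Rough per-disk ceiling on random I/O versus sequential throughput.
    # Assumed numbers, not measurements: 3.5 ms average seek, 15K RPM
    # (about 2 ms average rotational latency), 8 KB per NFS read/write.
    seek_ms = 3.5
    rotational_ms = (60000.0 / 15000) / 2      # half a revolution at 15K RPM = 2 ms
    io_kb = 8

    iops = 1000.0 / (seek_ms + rotational_ms)  # roughly 180 ops/sec per disk
    random_mbs = iops * io_kb / 1024           # roughly 1.4 MB/s per disk of random I/O

    print("%.0f IOPS -> %.1f MB/s random, versus tens of MB/s sequential"
          % (iops, random_mbs))

At something like 180 ops/sec per disk, the spindles saturate long before the CPU does much work, which is consistent with the 10-15% CPU load I saw in the stress tests above.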

--
-Chuck

