I was trying to do a simple test of the bandwidth that Solaris/ZFS (Nevada b63) can 
deliver from a drive, and doing this:
dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) 
of=/dev/null gives me only 35MB/s!? I am getting basically the same result 
whether it is a single ZFS drive, a mirror, or a stripe (I am testing with two 
Seagate 7200.10 320G drives hanging off the same interface card).
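For concreteness, the test is nothing more exotic than a large sequential read 
with dd, along the lines of the sketch below (the device path, file name and 
block size are placeholders, not my exact invocation):

    # raw disk read, bypassing any filesystem
    dd if=/dev/rdsk/c2t0d0s0 of=/dev/null bs=1024k count=4096

    # same-sized read of a large, previously written file on the ZFS pool
    dd if=/tank/testfile of=/dev/null bs=1024k count=4096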

On the test machine I also have an old disk with UFS on a PATA interface (Seagate 
7200.7 120G). dd from the raw disk gives 58MB/s and dd from a file on UFS gives 
45MB/s - a far smaller relative slowdown compared to the raw disk.

This is just an Athlon XP 2500+ with a 32-bit PCI SATA sil3114 card, but 
nonetheless the hardware has the bandwidth to fully saturate the hard drive, 
as the dd from the raw disk device shows. What is going on? Am I doing something 
wrong, or is ZFS just not designed to be used on humble hardware?

My goal is to have it go fast enough to saturate gigabit ethernet - around 
75MB/s. I don't plan on replacing the hardware - after all, Linux with RAID10 
already gives me this. I was hoping to switch to Solaris/ZFS to get checksums 
(which wouldn't seem to account for the slowness, because the CPU stays under 
25% during all of this).
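For what it's worth, something like the following is how one can confirm the 
checksum setting and watch the CPU while the dd is running - the dataset name 
below is just a placeholder:

    # confirm checksums are enabled on the dataset being read
    zfs get checksum tank

    # watch user/sys/idle CPU while the dd runs
    vmstat 5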

I can temporarily scrape together an x64 machine with an ICH7 SATA interface - 
I'll try the same test with the same drives on that to eliminate 32-bitness and 
PCI slowness from the equation. And while someone will say dd has little to do 
with real-life file server performance - here it actually has a lot to do with 
it, because most of the use of this server is copying multi-gigabyte files back 
and forth a few times per day. Hardly any random access is involved 
(fragmentation aside).
 
 