I'm about to do some testing with that dtrace script.

However, in the meantime - I've disabled primarycache (set primarycache=none),
since I noticed that the ARC was happily caching the data I'd streamed from
/dev/zero, and I wanted to do some tests within the OS rather than over FC.
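For reference, this is roughly how I disabled it (tank/testvol is a
placeholder - substitute your actual pool/dataset):

  # stop caching both data and metadata for this dataset in the ARC
  zfs set primarycache=none tank/testvol
  # confirm it took
  zfs get primarycache tank/testvol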

I am getting the same results through dd locally as I did over FC -
virtually the exact same numbers.
I imagine this particular fact is a testament to COMSTAR - of course, I
suspect that if I ever get the disks pushing what they're capable of, then
maybe I will notice some slight COMSTAR inefficiencies later on... for now
there don't seem to be any at this performance level.
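For what it's worth, the local dd runs look roughly like this (the zvol
path is a placeholder - adjust for your own pool/volume):

  # sequential write, 1MB blocks, straight onto the zvol
  dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=10240
  # sequential read back off the zvol
  dd if=/dev/zvol/rdsk/tank/testvol of=/dev/null bs=1024k count=10240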

Anyway - there seems to be an overall throughput limit of about 523MBps.
If two pools are writing at once, the aggregate zpool throughput across all
pools will not exceed roughly 523MBps.
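I'm watching the per-pool numbers with plain zpool iostat:

  # pool-level bandwidth, 1-second samples
  zpool iostat 1
  # or per-vdev, to see whether a single vdev is the bottleneck
  zpool iostat -v 1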

That's of course not the biggest issue.
With the ARC disabled, some strange numbers are becoming apparent:
dd throughput hovers around 70MBps for reads and 800MBps for writes.
Meanwhile, zpool throughput shows 50-150MBps for reads / 520MBps for writes.

If I set zfs_prefetch_disable, then zpool throughput for reads matches
userland throughput - but it stays in the 70-90MBps range.
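(For the archives: I'm flipping that tunable live with mdb; the /etc/system
line makes it persistent across reboots.)

  # disable prefetch in the running kernel
  echo zfs_prefetch_disable/W0t1 | mdb -kw
  # or add to /etc/system and reboot:
  #   set zfs:zfs_prefetch_disable = 1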

I am starting to think that there is either a ZFS write ordering issue
(which only becomes apparent when you subsequently read the data back) or
that ZFS prefetch is completely off-key and unable to read far enough ahead
to saturate the read pipeline...
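If anyone wants to sanity-check whether prefetch is even firing, here's a
rough one-liner I may try - untested, and it assumes dmu_zfetch is still
the fbt entry point for the prefetch code on your build:

  # count calls into the ZFS prefetch path, printed once a second
  dtrace -n 'fbt::dmu_zfetch:entry { @c = count(); } tick-1s { printa(@c); trunc(@c); }'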

What do you all think?