> It would also be worthwhile doing something like the following to
> determine the max throughput the H/W RAID is giving you:
> # time dd of=<raw disk> if=/dev/zero bs=1048576 count=1000
> For a 2Gbps 6140 with 300GB/10K drives, we get ~46MB/s on a
> single-drive RAID-0 array, ~83MB/s on a 4-disk RAID-0 array w/128k
> stripe, and ~69MB/s on a seven-disk RAID-5 array w/128k stripe.
> 
> -- 
> albert chin ([EMAIL PROTECTED])

Well, the Solaris kernel is telling me that it doesn't understand 
zfs_nocacheflush, but the array sure is acting as though the setting took effect!
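(For anyone following along, the usual way to set it, as far as I know, is a
one-line entry in /etc/system followed by a reboot; a kernel that says the
variable isn't defined in the 'zfs' module presumably just predates the tunable:

  set zfs:zfs_nocacheflush = 1
)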
I ran the dd example, but increased the count for a longer running time.
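Something along these lines (the mount points and count below are just
placeholders for what I actually used), writing roughly 10 GB per run and
dividing that by the "real" time dd reports:

# time dd if=/dev/zero of=/mnt/ufs/ddtest bs=1048576 count=10000
# time dd if=/dev/zero of=/pool/fs/ddtest bs=1048576 count=10000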

5-disk RAID5 with UFS: ~79 MB/s
5-disk RAID5 with ZFS: ~470 MB/s

I'm assuming there's some caching going on with ZFS that's really helping out?
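One thing I may try, to see whether the writes are actually hitting the disks
at that rate or mostly landing in cache, is to watch the pool while dd runs
(pool name here is just an example):

# zpool iostat -v tank 5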

Also, no SANtricity here, just Sun's Common Array Manager. Is it possible to use 
both without completely confusing the array?
 
 
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
