On 14.12.10 07:43, Stephan Budach wrote:
On 14.12.2010 at 03:30, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:

On Mon, 13 Dec 2010, Stephan Budach wrote:
My current bonnie++ run is of course not that satisfactory, and I wanted to ask 
whether it is safe to turn on at least the drive-level options, namely the write 
cache and the read-ahead?
Enabling the write cache is fine as long as it is non-volatile or is flushed to 
disk when ZFS requests it. ZFS will request a transaction-group flush on all 
disks before proceeding with the next batch of writes. The read-ahead might 
not be all that valuable in practice (and might cause a severe penalty) because 
it assumes a particular mode and timing of access which might not match how 
your system is actually used. Most usage scenarios are something other than 
what bonnie++ does.
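
For reference, the per-drive write cache can be inspected and toggled from 
format's expert mode on Solaris. This is only a minimal sketch; the exact menu 
entries can vary with the driver and the array firmware:

    # enter format's expert mode and pick the disk in question
    format -e
    # then, at the prompts:
    #   format> cache
    #   cache> write_cache
    #   write_cache> display      # show the current state
    #   write_cache> enable       # turn the volatile write cache on
    # This is only safe because ZFS issues a cache flush at every
    # transaction-group commit - the device must honour that flush.
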
I know that bonnie++ does not generate the workload I will see on my server, 
but it reliably causes ZFS to kick drives out of the pool, which of course 
shouldn't happen.
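
For anyone trying to reproduce this, the kind of bonnie++ run meant here looks 
roughly like the sketch below; the path, size and user are placeholders, not 
the exact invocation used:

    # hypothetical example - point -d at a directory on the pool under
    # test and make -s at least twice the machine's RAM
    bonnie++ -d /tank/bench -s 32g -n 128 -u nobody -x 4
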

Actually, I suspect that the Qsan controller firmware, which is what is built 
into these RAIDs, has some issues when it has to deal with high random I/O.

I will now try my good old Infortrend systems and see if I can reproduce this 
issue with them as well.
I just wanted to wrap this up. The current firmware 1.0.8x for the CiDesign iR16FC4ER has a severe bug which caused ZFS to kick out random disks and degrade the zpool. So I went back to the older firmware 1.07, which doesn't have these issues and with which the 2x16 JBODs are running very well. Since this is an FC-to-SATA2 RAID, I also had to tune the throttle parameter in qlc.conf, which led to a great performance boost; both changes did a great job.
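
For the archives, the throttle lives in the QLogic HBA driver's configuration file, /kernel/drv/qlc.conf. Below is a sketch of the kind of change meant; the exact parameter name and a sensible value depend on the qlc driver version and the HBA, so check the shipped qlc.conf first:

    # see which throttle parameter the installed driver exposes
    grep -i throttle /kernel/drv/qlc.conf
    # then lower the per-target queue depth, e.g. (placeholder value):
    #   execution-throttle=16;
    # the change takes effect once the driver is reloaded (in practice,
    # after a reboot)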

Now that this is solved, I can go ahead and transfer my data from my 2x RAID6 zpool onto these new devices.
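
A sketch of one way to do that transfer with zfs send/receive; the pool and dataset names are made up:

    # hypothetical pool/dataset names - adjust to the real layout
    zfs snapshot -r oldpool/data@migrate
    zfs send -R oldpool/data@migrate | zfs receive -Fd newpool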

Cheers,
budy
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
