I'm puzzled by 2 things.

Naively, I'd think a write cache should not help a throughput
test, since the cache should fill up, after which you should still be
throttled by the physical drain rate. But you clearly show that
it helps; does anyone know why/how a cache helps throughput?
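
(For what it's worth, one way to poke at that question: run Philip's dd
in both configurations and watch the per-disk numbers while it runs. A
rough sketch, reusing the pool and disk names from the test case quoted
below:

        # in one window:
        dd if=/dev/zero of=/philpool/testfile bs=256k count=10000

        # in another, extended stats for just the three pool disks:
        iostat -xnz 5 | egrep 'device|c5t1d0|c5t4d0|c5t5d0'

Comparing kw/s, asvc_t and %b with the cache on and off should at least
show whether the drives are limited by media bandwidth or by per-command
latency in the slow case.)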

And the second thing: a quick search turned this up, which seems relevant:

        Bug ID: 6397876
        Synopsis: sata drives need default write cache controlled via property
        Integrated in Build: snv_38
        http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6397876

The fix may have missed U2, though. Sorry about that...
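
FWIW, for anyone who wants to check what a given drive itself reports,
format's expert mode exposes a cache menu on sd/ssd-attached disks;
roughly like this (the exact menu text may vary with driver and firmware):

        # format -e
        (select the disk, then)
        format> cache
        cache> write_cache
        write_cache> display
        Write Cache is disabled
        write_cache> enable

Running display again afterwards should confirm whether the change took;
I don't remember offhand whether it survives a power cycle on these
SATA drives.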

-r


Philip Brown writes:
 > I previously wrote about my scepticism about the claims that ZFS 
 > selectively enables and disables the write cache to improve throughput 
 > over the usual Solaris defaults prior to this point.
 > 
 > I posted my observations that this did not seem to be happening in any 
 > meaningful way for my ZFS pool on build nv33.
 > 
 > I was told, "oh you just need the more modern drivers".
 > 
 > Well, I'm now running S10u2, with
 > SUNWzfsr  11.10.0,REV=2006.05.18.01.46
 > 
 > I don't see much of a difference.
 > By default, iostat shows the disks grinding along at 10MB/sec during the 
 > transfer.
 > However, if I manually enable write_cache on the drives (SATA drives, FWIW), 
 > the drive throughput zips up to 30MB/sec during the transfer.
 > 
 > 
 > Test case:
 > 
 > # zpool status philpool
 >    pool: philpool
 >   state: ONLINE
 >   scrub: none requested
 > config:
 > 
 >          NAME        STATE     READ WRITE CKSUM
 >          philpool    ONLINE       0     0     0
 >            c5t1d0    ONLINE       0     0     0
 >            c5t4d0    ONLINE       0     0     0
 >            c5t5d0    ONLINE       0     0     0
 > 
 > # dd if=/dev/zero of=/philpool/testfile bs=256k count=10000
 > 
 > # [run iostat]
 > 
 > The wall-clock time for the I/O to quiesce is as expected. Without the write 
 > cache manually enabled, it takes three times as long to finish as with it 
 > enabled (1:30 vs. 30 sec).
 > 
 > [Approximately a 2 gig file is generated. A side note of interest to me is 
 > that in both cases the dd returns to the user relatively quickly, but the 
 > write goes on for quite a long time in the background, without apparently 
 > reserving 2 gigabytes of extra kernel memory according to swap -s.]
 > 
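[Re Philip's side note about the writes continuing in the background: one
way to see when the pool has actually quiesced, rather than when dd
returns, is to leave a pool-level iostat running and wait for the write
bandwidth column to drop back to zero:

        # zpool iostat philpool 5

That should make the 1:30-vs-30sec difference easy to eyeball without
staring at the raw device stats.]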

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
