On Thu, Mar 10, 2011 at 12:15 AM, Matthew Anderson
<matth...@ihostsolutions.com.au> wrote:
> I have a feeling it's to do with ZFS's recordsize property but haven't been 
> able to find any solid testing done with NTFS. I'm going to do some testing 
> using smaller record sizes tonight to see if that helps the issue.
> At the moment I'm surviving on cache and am quickly running out of capacity.
>
> Can anyone suggest any further tests or have any idea about what's going on?

The default blocksize for a zfs volume is 8k, so each 4k write will
probably turn into a read-modify-write cycle. You can try creating a
new volume with volblocksize set to 4k and see if that helps. The
value can't be changed once set, so you'll have to make a new volume.
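
Something along these lines (pool and volume names here are just
placeholders; size the volume for your data):

  # create a new zvol with a 4k block size
  zfs create -V 100G -o volblocksize=4k tank/vol4k
  # confirm the property took effect
  zfs get volblocksize tank/vol4k

You'd then need to copy the data across, since the existing volume
can't be converted in place.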

Make sure the "wcd" property is set to "false" for the volume in
stmfadm in order to enable the write cache. It shouldn't make a huge
difference with the zil disabled, but it certainly won't hurt.
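
For example (the GUID below is just a placeholder; use list-lu to
find yours):

  # show current LUs and their write cache state
  stmfadm list-lu -v
  # wcd=false means "write cache disabled = false", i.e. cache on
  stmfadm modify-lu -p wcd=false 600144F0XXXXXXXXXXXXXXXXXXXXXXXX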

-B

-- 
Brandon High : bh...@freaks.com
