Heya Anton,

On 10/17/06, Anton B. Rang <[EMAIL PROTECTED]> wrote:
> No, the reason to try to match recordsize to the write size is so that a small
> write does not turn into a large read + a large write.  In configurations where
> the disk is kept busy, multiplying 8K of data transfer up to 256K hurts.
Ah. I knew I was missing something. What COW giveth, COW taketh away...
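To make Anton's numbers concrete, here is a rough back-of-the-envelope sketch (illustrative only; it assumes a 128K recordsize, an 8K application write, and a cold cache, so the whole record is read and then rewritten):

```python
# Read-modify-write amplification under COW with a large recordsize.
# Assumed numbers: 128K record, 8K write, record not in cache.
recordsize = 128 * 1024   # bytes per ZFS record (assumption)
write_size = 8 * 1024     # bytes the application actually writes

# COW must read the full record, modify 8K of it, and write the
# full record back out: 128K read + 128K write = 256K transferred.
transferred = recordsize + recordsize
amplification = transferred / write_size
print(transferred // 1024, "K transferred,", amplification, "x amplification")
# 256 K transferred, 32.0 x amplification
```

That 256K figure is where Anton's "multiplying 8K of data transfer up to 256K" comes from.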

> This is really orthogonal to the cache — in fact, if we had a switch to disable
> caching, this problem would get worse instead of better (since we wouldn't
> amortize the initial large read over multiple small writes).
Agreed.

It looks to me like there are only two ways to solve this:

1) Set recordsize manually
2) Allow the blocksize of a file to be changed even if there are
multiple blocks in the file.
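
Option 1 works today, since recordsize is a per-dataset property. A sketch, assuming a hypothetical dataset name tank/db (note it only affects files created, or blocks written, after the change):

```shell
# Match recordsize to the application's write size (e.g. an 8K database page).
# Only newly written files pick up the new recordsize.
zfs set recordsize=8K tank/db

# Confirm the property took effect.
zfs get recordsize tank/db
```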
--
Regards,
Jeremy
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
