Robert Milkowski writes:
 > Hello Selim,
 > 
 > Wednesday, March 28, 2007, 5:45:42 AM, you wrote:
 > 
 > SD> talking of which,
 > SD> what's the effort and consequences to increase the max allowed block
 > SD> size in zfs to higher figures like 1M...
 > 
 > Think what would happen then if you try to read 100KB of data - due to
 > checksumming ZFS would have to read the entire MB.
 > 
 > However it should be possible to batch several I/Os together and issue
 > one larger one with ZFS - at least I hope it's possible.
 > 

As you note, the max coherency unit (blocksize) in ZFS is
128K. It's also the max I/O size, and smaller I/Os are
already aggregated, or batched, up to that size.
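To make the aggregation point concrete, here is a conceptual sketch - not ZFS's actual aggregation code - of how adjacent small I/Os can be coalesced into larger device I/Os, capped at the 128K maximum mentioned above (the function name and representation are my own):

```python
# Conceptual sketch (not ZFS internals): coalesce adjacent small I/Os
# into larger ones, never exceeding the 128K maximum I/O size.
MAX_IO = 128 * 1024

def aggregate(ios):
    """ios: sorted list of (offset, size) tuples; returns merged list."""
    merged = []
    for off, size in ios:
        # Merge with the previous I/O if it ends exactly where this one
        # starts and the combined size stays within the 128K cap.
        if merged and merged[-1][0] + merged[-1][1] == off \
                and merged[-1][1] + size <= MAX_IO:
            merged[-1] = (merged[-1][0], merged[-1][1] + size)
        else:
            merged.append((off, size))
    return merged

# Two adjacent 8K writes become one 16K I/O; the distant one stays alone.
print(aggregate([(0, 8192), (8192, 8192), (65536, 8192)]))
# -> [(0, 16384), (65536, 8192)]
```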

At a 128K size, the control-to-data ratio on the wire is already
quite reasonable, so I don't see much benefit to increasing this
(there may be some, but the context needs to be well defined).

The issue is subject to debate because, traditionally, one I/O
came with an implied overhead of a full head seek. In that
case, the larger the I/O the better. So at 60MB/s throughput
and a 5ms head-seek time, we need ~300K I/Os to make the data
transfer time larger than the seek time, and ~3MB I/O
sizes to reach the point of diminishing returns.
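A quick sketch of that arithmetic, using the figures above (60MB/s, 5ms seek); the efficiency function and the ~91% figure at 3MB are my own illustration of "diminishing returns":

```python
# Disk-I/O break-even arithmetic with the figures from the post.
throughput_bps = 60_000_000   # assumed: 60 MB/s streaming throughput
seek_s = 0.005                # assumed: 5 ms average head seek

# Break-even I/O size: bytes transferred in the time one seek takes.
break_even_bytes = throughput_bps * seek_s
print(break_even_bytes)       # 300000.0 -> the ~300K in the text

def efficiency(io_bytes):
    """Fraction of streaming throughput achieved at a given I/O size,
    when every I/O pays one full head seek."""
    transfer_s = io_bytes / throughput_bps
    return transfer_s / (transfer_s + seek_s)

# Around 3 MB the transfer time is ~10x the seek time, so further size
# increases buy little - the point of diminishing returns.
print(round(efficiency(3_000_000), 2))   # 0.91
```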

But with a write-allocate scheme we are not hit with the
head seek for every I/O, and common I/O-size wisdom needs to
be reconsidered.

-r


 > 
 > -- 
 > Best regards,
 >  Robert                            mailto:[EMAIL PROTECTED]
 >                                        http://milek.blogspot.com
 > 
 > _______________________________________________
 > zfs-discuss mailing list
 > zfs-discuss@opensolaris.org
 > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
