On Thu, 2008-02-07 at 22:51 -0700, Neil Perrin wrote:

> I believe when a prototype using 1K dnodes was tested it showed an
> unacceptable (30%?) hit on some benchmarks. So if we can possibly
> avoid increasing the dnode size (by default) then we should do so.


Hmm, interesting...

Do you know the reason for such a performance hit?

Even with 1K dnodes, if the dnodes don't have any extended attributes
then, since metadata compression is enabled, the on-disk size of
metadnode blocks should remain approximately the same, right?
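
(Back-of-the-envelope, with constants I'm assuming rather than taking
from this thread: a 16K metadnode block holds 32 x 512-byte dnodes
today but only 16 x 1K dnodes, so the same objects would span twice as
many blocks; if the extra 512 bytes of each enlarged dnode are zeros,
though, lzjb should compress each block to roughly half, leaving the
total physical footprint about where it was.)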

Could it be because the metadnode object doubled in logical size and
therefore required another level of indirect blocks, which in turn
meant an additional disk seek for each metadnode block read?
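
To make that speculation concrete, here's a quick sketch of the
arithmetic (untested, and the constants are my assumptions, not
numbers from this thread: 16K metadnode data blocks, 16K indirect
blocks, and 128-byte block pointers, i.e. 128 BPs per indirect block;
the real values would come from dn_indblkshift and sizeof (blkptr_t)):

    /* Sketch, not measured: estimate indirect levels for the metadnode. */
    #include <stdio.h>

    #define DATA_BLK    (16 * 1024)  /* assumed metadnode block size */
    #define BPS_PER_IB  128          /* 16K indirect block / 128B blkptr */

    static int
    indirect_levels(unsigned long long nobj, int dnsize)
    {
            unsigned long long nblocks =
                (nobj * dnsize + DATA_BLK - 1) / DATA_BLK;
            int levels = 1;          /* level 0: the data blocks */

            while (nblocks > 1) {    /* add levels until one root BP */
                    nblocks = (nblocks + BPS_PER_IB - 1) / BPS_PER_IB;
                    levels++;
            }
            return (levels);
    }

    int
    main(void)
    {
            /* hypothetical object count, chosen to straddle a boundary */
            unsigned long long nobj = 500000;

            printf("512B dnodes: %d levels\n", indirect_levels(nobj, 512));
            printf("1K dnodes:   %d levels\n", indirect_levels(nobj, 1024));
            return (0);
    }

With those assumptions, 500,000 objects fit in 3 levels at 512 bytes
per dnode but need 4 levels at 1K, so every uncached metadnode block
read would pay one extra indirect block read/seek. Crossing a boundary
like that could plausibly account for a large hit on metadata-heavy
benchmarks.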

It would be interesting to run some benchmarks with Kalpak's large dnode
patch.

Cheers,
Ricardo

--

Ricardo Manuel Correia
Lustre Engineering

Sun Microsystems, Inc.
Portugal
Phone +351.214134023 / x58723
Mobile +351.912590825
Email Ricardo.M.Correia at Sun.COM