Hmm... that's a pain if updating the parent also means updating the parent's 
checksum.  I guess the functionality is there for moving bad blocks, but since 
that's likely to be a rare occurrence, it wasn't something that would need to 
be particularly efficient.
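
To make that concrete, here's a toy sketch (not ZFS source -- every name in it 
is made up) of why moving one block ripples all the way up a copy-on-write 
Merkle tree: each parent records its child's on-disk address and checksum, so 
giving the child a new address dirties the parent, copy-on-write moves the 
parent, and that dirties the grandparent, right up to the root.

#include <stdint.h>
#include <stdio.h>

/* Toy block-pointer tree: each node's parent records its address
 * (and, in real ZFS, its checksum).  All names are invented. */
typedef struct node {
    uint64_t     addr;    /* current on-disk address of this block */
    struct node *parent;  /* NULL at the root                      */
} node_t;

static uint64_t alloc_block(void)
{
    static uint64_t next = 100;   /* pretend allocator */
    return next++;
}

/* Move one leaf, then count how many blocks must be rewritten. */
static int relocate(node_t *leaf)
{
    int writes = 1;
    leaf->addr = alloc_block();   /* the one move we actually wanted */

    /* Each parent embeds the child's new address (and re-checksums
     * it), so the parent's own contents change; under copy-on-write
     * the parent therefore gets a new address too, and the update
     * cascades to the root. */
    for (node_t *p = leaf->parent; p != NULL; p = p->parent) {
        p->addr = alloc_block();
        writes++;
    }
    return writes;
}

int main(void)
{
    /* Build a 4-deep path: root -> indirect -> indirect -> leaf. */
    node_t root = { alloc_block(), NULL };
    node_t l1   = { alloc_block(), &root };
    node_t l2   = { alloc_block(), &l1 };
    node_t leaf = { alloc_block(), &l2 };

    printf("moving 1 block forced %d writes\n", relocate(&leaf));
    return 0;
}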

With regards to sharing the disk resources with other programs, obviously 
it's down to the individual admins how they would configure this, but I would 
suggest that if you have a database with heavy enough requirements to be 
suffering noticeable read performance issues due to fragmentation, then that 
database really should have its own dedicated drives and shouldn't be 
competing with other programs.

I'm not saying defrag is bad (it may be the better solution here), just that if 
you're looking at performance in this kind of depth, you're probably 
experienced enough to have created the database in a contiguous chunk in the 
first place :-)

I do agree that doing these writes now sounds like a lot of work.  I'm 
guessing that needing two full-path updates to achieve this means a much 
greater write penalty.  And that in turn means you can probably expect a 
noticeable read penalty if you have any real volume of writes, which would 
rather defeat the point.  After all, if your write volume is low enough not 
to suffer from this penalty, your database isn't going to be particularly 
fragmented anyway.
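
Back of the envelope, assuming the two full-path updates I guessed at above 
(the numbers here are purely illustrative, not measured ZFS costs):

#include <stdio.h>

int main(void)
{
    /* Assumed figures, for illustration only. */
    const int depth          = 6;  /* indirect-block levels to the root */
    const int paths_per_move = 2;  /* the "two full-path updates" above */
    const int data_writes    = 1;  /* the block actually being moved    */

    int total = data_writes + paths_per_move * depth;
    printf("relocating one block => ~%d writes (%dx amplification)\n",
           total, total / data_writes);
    return 0;
}

So on those assumptions you'd pay around 13 writes to relocate a single 
block, which is where the write penalty comes from.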

However, I'm now out of my depth.  This needs somebody who knows the internal 
architecture of ZFS to decide whether it's feasible or desirable, and whether 
defrag is a good enough workaround.

It may be that ZFS is not a good fit for this kind of use, and that if you're 
really concerned about this kind of performance you should be looking at other 
file systems.
 
 