On Wed, 24 Feb 2010, Steve wrote:

What has happened is that reading and writing large files which are unrelated to these ones has become appallingly slow... So I was wondering if just the presence of so many files was in some way putting a lot of stress on the pool, even if these files aren't used very often...

If these millions of files were built up over a long period of time while large files were also being created, then they may have contributed to an increased level of filesystem fragmentation.

With millions of such tiny files, it makes sense to put the small files in a separate ZFS filesystem whose recordsize property is set to a size not much larger than the files themselves. This should reduce wasted space, and thereby the potential for fragmentation in the rest of the pool.
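As a rough sketch, assuming a hypothetical pool named "tank" and files of a few KB each (both names and the 8K value are illustrative, not from Steve's setup), it could look like:

   # new dataset tuned for small files; recordsize must be a power
   # of two (512 bytes up to 128K)
   zfs create -o recordsize=8K tank/smallfiles

   # or retune an existing dataset; this only affects files
   # written after the change, not existing ones
   zfs set recordsize=8K tank/smallfiles

Note that changing recordsize on an existing dataset does not rewrite data already on disk, so the small files would need to be copied into the new dataset to get the benefit.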

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
