On 24/02/2010 21:31, Bob Friesenhahn wrote:
On Wed, 24 Feb 2010, Steve wrote:

What has happened is that reading and writing large files which are unrelated to these ones has become appallingly slow... So I was wondering if just the presence of so many files was in some way putting a lot of stress on the pool, even if these files aren't used very often...

If these millions of files were built up over a long period of time while large files were also being created, then they may have contributed to an increased level of filesystem fragmentation.

With millions of such tiny files, it makes sense to put them in a separate zfs filesystem whose recordsize property is set to a size not much larger than the files themselves. This should reduce waste and lower the potential for fragmentation in the rest of the pool.
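For example, something along these lines would set that up (the pool and dataset names "tank/smallfiles" and the 8K value are just placeholders to illustrate the idea, pick what matches your layout and typical file size):

    # create a dedicated filesystem for the small files with a smaller
    # maximum record size
    zfs create -o recordsize=8k tank/smallfiles

    # or change it on an existing filesystem; note this only affects
    # blocks written after the change, not existing files
    zfs set recordsize=8k tank/smallfiles
    zfs get recordsize tank/smallfiles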

Except for one bug (since fixed) which had to do with consuming lots of CPU to find a free block, I don't think that is right. You don't have to set recordsize to a smaller value for small files. The recordsize property only sets the maximum allowed record size; beyond that the record size is selected automatically when a file is created, so small files will get a small record size even with the default 128KB.
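A quick way to see this for yourself (again, the pool/dataset names here are made up) is to write a small file into a filesystem that still has the default 128K recordsize and check how much space it actually occupies:

    # filesystem left at the default recordsize of 128K
    zfs get recordsize tank/test

    # write a ~2KB file and make sure it reaches disk
    dd if=/dev/urandom of=/tank/test/tiny bs=2k count=1
    sync

    # on-disk usage should be on the order of a few KB,
    # not a full 128K record
    du -h /tank/test/tiny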


--
Robert Milkowski
http://milek.blogspot.com

