On Apr 18, 2007, at 6:44 AM, Yaniv Aknin wrote:

Hello,

I'd like to plan a storage solution for a system currently in production.

The system's storage is based on code which writes many files to the filesystem; overall storage needs are currently around 40TB and expected to reach hundreds of TBs. The average file size is ~100K, which translates to ~500 million files today and billions of files in the future. The storage is accessed over NFS by a rack of 40 Linux blades, and the workload is almost entirely reads (99% of the activity). While I realize that calling this design sub-optimal is probably an understatement, it's beyond my control and isn't likely to change in the near future.
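[Editor's note: a rough sanity check of those figures, purely illustrative - the 400TB value below is just a stand-in for "hundreds of TBs":]

    # Back-of-the-envelope file-count estimate from the sizes quoted above.
    avg_file_size = 100 * 1024        # ~100K average file size, in bytes
    current_total = 40 * 1024**4      # ~40TB today
    future_total = 400 * 1024**4      # placeholder for "hundreds of TBs"

    print(current_total // avg_file_size)  # ~430 million files (roughly the ~500M quoted)
    print(future_total // avg_file_size)   # ~4.3 billion files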

The system's current storage is based on 4 VxFS filesystems created on SVM meta-devices, each ~10TB in size. A 2-node Sun Cluster serves the filesystems, 2 per node, and each filesystem undergoes growfs as more storage is made available. We're looking for an alternative solution, in an attempt to improve both performance and the ability to recover from disasters (fsck on 2^42 files isn't practical, and this worries me - even the smallest filesystem inconsistency will leave me with lots of useless bits).

The question is - does anyone here have experience with large ZFS filesystems holding many small files? Is it practical to base such a solution on a few (8) zpools, each with a single large filesystem in it?

hey Yaniv,

Why not 1 pool? That's what we usually recommend (you can have 8 filesystems on top of the 1 pool if you need to).
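[Editor's note: a minimal sketch of that layout - one pool with eight filesystems on top. The pool name "tank", the device names (c1t0d0 and so on), and the choice of raidz2 vdevs are all placeholders for illustration, not a recommendation for this particular array:]

    #!/usr/bin/env python
    # Hypothetical sketch: one pool with 8 ZFS filesystems, shared over NFS.
    import subprocess

    def run(*cmd):
        """Print and run an administrative command, failing loudly on error."""
        print(' '.join(cmd))
        subprocess.check_call(cmd)

    # One pool built from whatever vdevs the storage provides (placeholders here).
    devices = ['c1t%dd0' % i for i in range(8)]
    run('zpool', 'create', 'tank', 'raidz2', *devices)

    # Read-mostly workload: skipping access-time updates avoids a write per read.
    run('zfs', 'set', 'atime=off', 'tank')

    # Eight filesystems layered on the single pool, each shared over NFS.
    for n in range(1, 9):
        fs = 'tank/data%d' % n
        run('zfs', 'create', fs)
        run('zfs', 'set', 'sharenfs=on', fs)

With a single pool, all of the filesystems draw from one shared free-space reservoir, so there is no per-filesystem growfs step as more storage is added.]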

eric
