On Wed, 19 May 2010, Deon Cui wrote:

> http://constantin.glez.de/blog/2010/04/ten-ways-easily-improve-oracle-solaris-zfs-filesystem-performance
>
> It recommends that for every TB of storage you have, you want 1GB of RAM just for the metadata.
>
> Interesting conclusion. Is it really the case that ZFS metadata consumes so much RAM? I'm currently building a storage server which will eventually hold up to 20TB of storage, and I can't fit 20GB of RAM on the motherboard!

Unless you do something like enable dedup (which is still risky to use), there is no such rule of thumb that I know of. ZFS will take advantage of whatever RAM is available; you should have at least 1GB of RAM free for it to use. Beyond that, it depends entirely on the size of your expected working set: the size of the files accessed, the randomness of the access, the number of simultaneous accesses, and the maximum number of files per directory all make a difference to how much RAM you need for good performance. If you have 200TB of stored data but only actually access 2GB of it at any one time, then the caching requirements are not very high.
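To put rough numbers on the two cases, here is a back-of-envelope sketch in Python. The constants are assumptions for illustration only, not figures from this thread: the blog's 1GB-per-TB metadata rule, and the commonly quoted ~320 bytes per in-core dedup table (DDT) entry; actual usage depends on block size and what is in the pool.

    #!/usr/bin/env python
    # Back-of-envelope RAM sizing for a ZFS pool.
    # Assumed constants (illustration only): the blog's 1GB-per-TB
    # metadata rule of thumb, and ~320 bytes per dedup table (DDT)
    # entry, a commonly quoted figure.

    def metadata_rule_gb(pool_tb):
        """RAM suggested by the blog's 1GB-per-TB rule of thumb."""
        return pool_tb * 1.0

    def dedup_table_gb(pool_tb, avg_block_kb=128, bytes_per_entry=320):
        """Rough in-core DDT size if every block in the pool is unique."""
        blocks = pool_tb * 2**40 / (avg_block_kb * 2**10)
        return blocks * bytes_per_entry / float(2**30)

    for tb in (1, 10, 20):
        print("%2d TB pool: rule of thumb %4.1f GB, worst-case DDT %5.1f GB"
              % (tb, metadata_rule_gb(tb), dedup_table_gb(tb)))

The point of the exercise is the contrast: without dedup, the "requirement" is really just cache sizing against your working set, while with dedup enabled the table alone can dwarf the 1GB-per-TB figure.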

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
