> On Oct 15, 2019, at 9:52 AM, Tamas Kazinczy <tamas.kazin...@kifu.gov.hu> 
> wrote:
> 
> With defaults (1024 for inode size and 2560 for inode ratio) I get only 4,8T 
> usable space.

With those values, an inode is created for every 2560 bytes of MDT space.  
Since the inode is 1024 bytes, that leaves (2560 - 1024) = 1536 bytes of usable 
space out of every 2560 bytes (which is 60%).  So for an 8TB MDT, you get 8 * 
0.6 = 4.8 TB usable space.
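To make that arithmetic concrete, here is a quick Python sketch (treating the 8 TB MDT as 8 * 10**12 bytes for illustration):

```python
# Usable-space estimate from the mkfs defaults discussed above.
inode_size = 1024    # bytes consumed by each inode
inode_ratio = 2560   # one inode created per this many bytes of MDT space

usable_fraction = (inode_ratio - inode_size) / inode_ratio
print(usable_fraction)           # 0.6, i.e. 60% usable

mdt_tb = 8                       # MDT size in TB (example from above)
print(mdt_tb * usable_fraction)  # 4.8 TB usable
```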

> Increasing inode ratio gives more usable space up to 7,9T at 65536.

Increasing the inode ratio will result in far fewer inodes being created, but
more usable space.  Using a ratio of 65536 will make about 98% of your space 
usable, so for an 8TB MDT that corresponds to about 7.9 TB.
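Plugging a ratio of 65536 into the same formula (again treating 8 TB as 8 * 10**12 bytes for illustration):

```python
# Same usable-space estimate, but with the inode ratio raised to 65536.
inode_size = 1024
inode_ratio = 65536

usable_fraction = (inode_ratio - inode_size) / inode_ratio
print(usable_fraction)      # 0.984375, i.e. about 98% usable
print(8 * usable_fraction)  # 7.875 TB, which rounds to the 7.9 TB above
```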

The choice you make will depend on how your MDT is used.  If you want to use 
the Data-on-MDT feature to store file data directly on the MDT, then you might 
want more usable space.  Keep in mind though that this will reduce the number 
of inodes you have, and if you run out of inodes, you cannot easily add more 
inodes to your MDT. (You would probably need to look into adding another MDT 
in that case.)  Running out of inodes means that you can’t create new Lustre 
files even if you still have space left on the OSTs.  At the other end of the 
spectrum, if you think you will have lots of small files, then you could 
decrease the ratio to 2048 to get some more inodes.  If in doubt, I think the 
Lustre default values are pretty reasonable.
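To see the trade-off in one place, here is a rough sketch comparing inode counts and usable space for a few candidate ratios (the 8 TB and 1024-byte-inode figures are the ones from this thread; treat the results as estimates, since the real mkfs layout also reserves space for other metadata):

```python
# Rough inode-count vs usable-space trade-off for an 8 TB MDT
# (8 * 10**12 bytes) with 1024-byte inodes, ignoring other overhead.
INODE_SIZE = 1024
MDT_BYTES = 8 * 10**12

for ratio in (2048, 2560, 65536):
    inodes = MDT_BYTES // ratio  # one inode per 'ratio' bytes of MDT space
    usable_tb = MDT_BYTES * (ratio - INODE_SIZE) / ratio / 10**12
    print(f"ratio {ratio:>6}: {inodes:>13,} inodes, {usable_tb:.2f} TB usable")
```

Lowering the ratio to 2048 roughly doubles the inode count relative to 65536's order of magnitude but cuts usable space to half the device, which is why the default sits in between.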

At LUG this year, I helped present a tutorial along with Dustin Leverman 
covering some sys admin aspects of Lustre.  One of the things I talked about 
was inode calculations.  It might have some useful info for you (slides are 
here: 
http://cdn.opensfs.org/wp-content/uploads/2019/07/LUG2019-Sysadmin-tutorial.pdf).

--
Rick Mohr
Senior HPC System Administrator
National Institute for Computational Sciences
University of Tennessee



_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
