On Oct 3, 2019, at 07:55, Degremont, Aurelien <degre...@amazon.com> wrote:
Hello all!
This doc from the wiki says "Lustre can support up to 2000 OSS per file system"
(http://wiki.lustre.org/Lustre_Server_Requirements_Guidelines).
I'm a bit surprised by this statement. Does somebody have
On Oct 3, 2019, at 20:09, Hebenstreit, Michael <michael.hebenstr...@intel.com> wrote:
So bottom line – don’t change the default values, it won’t get better?
Like I wrote previously, there *are* no default/tunable values to change for
ZFS. The tunables are only for ldiskfs, which
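For the ldiskfs case, those tunables are fixed at format time. A minimal sketch of what that looks like (device name, fsname and values are illustrative, not recommendations):

    # ldiskfs only: the bytes-per-inode ratio (-i) and inode size (-I) are
    # set when the MDT is formatted and passed through to mke2fs; there is
    # no ZFS equivalent because ZFS allocates dnodes on demand.
    mkfs.lustre --mdt --fsname=testfs --index=0 \
        --mkfsoptions="-i 2048 -I 1024" /dev/sdb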
So bottom line - don't change the default values, it won't get better?
From: Andreas Dilger
Sent: Thursday, October 03, 2019 19:38
To: Hebenstreit, Michael
Cc: Mohr Jr, Richard Frank; lustre-discuss@lists.lustre.org
Subject: Re: [lustre-discuss] changing inode size on MDT
On Oct 3, 2019, at
On Oct 3, 2019, at 05:03, Hebenstreit, Michael <michael.hebenstr...@intel.com> wrote:
So you are saying that on a ZFS-based Lustre there is no way to increase the number
of available inodes? I have an 8TB MDT with roughly 17G inodes
[root@elfsa1m1 ~]# df -h
Filesystem      Size  Used  Avail
Please note the difference between inodes in ZFS and inodes on the Lustre MDT.
In the previous incarnation the file system ran out of inodes as reported by
Lustre, even though the MDT was only half full and the ZFS backend still
reported free inodes.
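One way to see that distinction side by side (hostnames and mountpoints hypothetical): compare Lustre's own accounting with what the backing ZFS dataset reports.

    # On a client: Lustre's inode accounting for each target
    lfs df -i /mnt/lustre

    # On the MDS: what the backing filesystem reports for the same MDT
    df -i /mdt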
From: Shaun Tancheff
Sent:
Hello all!
This doc from the wiki says "Lustre can support up to 2000 OSS per file system"
(http://wiki.lustre.org/Lustre_Server_Requirements_Guidelines).
I'm a bit surprised by this statement. Does somebody have information about the
upper limit to the number of OSSes?
Or what could be the
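As a side note, a quick way to count how many OSTs a client currently sees (mountpoint hypothetical); since one OSS usually serves several OSTs, the OST count only bounds the number of OSSes from above:

    # Count OST lines in 'lfs df' output on a client
    lfs df /mnt/lustre | grep -c OST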
Hi,
A little pedantic, but 'inodes' don't exist in a ZFS pool per se. So the code
which attempts to report the number of inodes used/available guesses based on
the average per-object utilization rate. If you have many large files, your
reported number of inodes goes down faster than if you
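A back-of-the-envelope version of that guess (this mirrors the idea, not the exact osd-zfs formula; the mountpoint is hypothetical):

    # Estimate "free inodes" as free bytes divided by the average bytes
    # consumed per object so far.
    mnt=/mdt
    iused=$(df --output=iused "$mnt" | tail -n 1 | tr -d ' ')
    used=$(df -B1 --output=used "$mnt" | tail -n 1 | tr -d ' ')
    avail=$(df -B1 --output=avail "$mnt" | tail -n 1 | tr -d ' ')
    if [ "$iused" -gt 0 ]; then
        echo "avg bytes/object:       $((used / iused))"
        echo "estimated free objects: $((avail / (used / iused)))"
    fi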
As Andreas said "it is not relevant for ZFS since ZFS dynamically allocates
inodes and blocks as needed"
"as needed" is the important part. In your example, your MDT is almost empty,
so 17G inodes for an empty MDT seems pretty sufficient.
As you will create new files and use these inodes, you
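To see that dynamic behavior on any ZFS dataset (pool/dataset name hypothetical), compare the inode columns before and after creating files:

    df -i /tank/demo
    for i in $(seq 1 10000); do : > /tank/demo/f$i; done
    df -i /tank/demo    # IUsed rises by ~10000; the reported total shifts too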
So you are saying that on a ZFS-based Lustre there is no way to increase the number
of available inodes? I have an 8TB MDT with roughly 17G inodes
[root@elfsa1m1 ~]# df -h
Filesystem      Size  Used  Avail  Use%  Mounted on
mdt             8.3T  256K   8.3T    1%  /mdt
[root@elfsa1m1 ~]# df -i