Hello Andreas,
lfs df -i reports 19,204,412 inodes used. When I did the full robinhood scan,
it reported scanning 18,673,874 entries, so fairly close.
I don’t have a .lustre directory at the filesystem root.
Another interesting aspect of this particular issue is that I can run lctl lfsck and every
There is a $ROOT/.lustre/lost+found that you could check.
What does "lfs df -i" report for the used inode count? Maybe it is RBH that is
reporting the wrong count?
The other alternative would be to mount the MDT filesystem directly as type ZFS
and see what df -i and find report?
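One way to do that without risking the live MDT is to work from a snapshot. The commands below are a sketch only: the pool/dataset names, mountpoint, and group are assumptions, so adjust them to your site before running anything.

```shell
# Sketch only: 'mdtpool/mdt0' and 'somegroup' are hypothetical names.
# Run on the MDS; a snapshot keeps the inspection read-only.
zfs snapshot mdtpool/mdt0@quotacheck
mkdir -p /mnt/mdt-snap
mount -t zfs mdtpool/mdt0@quotacheck /mnt/mdt-snap

df -i /mnt/mdt-snap                                # inode usage as ZFS sees it
find /mnt/mdt-snap/ROOT -group somegroup | wc -l   # compare with 'lfs quota'

umount /mnt/mdt-snap
zfs destroy mdtpool/mdt0@quotacheck
```

The namespace on an MDT normally sits under a ROOT/ directory inside the dataset; if your layout differs, just run the find against the snapshot mountpoint itself.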
Cheers, Andreas
OK, I disabled, waited for a while, then re-enabled. I still get the same
numbers. The only thing I can think of is that somehow the count is correct, despite
the huge difference. Robinhood and find show about 1.7M files, dirs, and links.
The quota is showing a bit over 3.1M inodes used. We only have one
Thanks, I will look into the ZFS quota since we are using ZFS for all storage,
MDT and OSTs.
In our case, there is a single MDS/MDT. I have used Robinhood and lfs find (by
group) commands to verify what the numbers should apparently be.
—
Dan Szkola
FNAL
> On Oct 9, 2023, at 10:13 AM, Andreas
The quota accounting is controlled by the backing filesystem of the OSTs and
MDTs.
For ldiskfs/ext4 you could run e2fsck to re-count all of the inode and block
usage.
For ZFS you would have to ask on the ZFS list to see if there is some way to
re-count the quota usage.
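For the ldiskfs case Andreas describes, the recount is usually done by dropping and re-adding the quota feature with the target unmounted. A sketch, where the device path is an assumption to be replaced with your actual MDT or OST device:

```shell
# Hypothetical device path; run with the target (MDT or OST) unmounted.
DEV=/dev/sdX1             # assumption: your MDT/OST block device
tune2fs -O ^quota "$DEV"  # drop the internal quota files
e2fsck -fy "$DEV"         # full pass; recomputes inode and block usage
tune2fs -O quota "$DEV"   # re-create quota files from the fresh counts
```

This does not apply to ZFS-backed targets, which have no tune2fs equivalent.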
The "inode" quota is
Is there really no way to force a recount of files used by the quota? All
indications are we have accounts where files were removed and this is not
reflected in the used file count in the quota. The space used seems correct but
the inodes used numbers are way high. There must be a way to clear t
Also, quotas on the OSTs don’t add up to anywhere near 3 million files either:
[root@lustreclient scratch]# ssh ossnode0 lfs quota -g somegroup -I 0 /lustre1
Disk quotas for grp somegroup (gid 9544):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
      /lustre1 1394853459
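To rule out a single OST skewing the total, the per-index query above can be looped over every OST and the "files" column summed. A sketch, assuming 8 OST indices and the same group and mountpoint as above; the awk column number is an assumption about the output layout, so verify it against one OST by hand first:

```shell
# Assumptions: OST indices 0-7, group 'somegroup', mountpoint '/lustre1',
# and that the values line is line 3 with 'files' in column 5.
total=0
for idx in $(seq 0 7); do
    files=$(lfs quota -g somegroup -I "$idx" /lustre1 | awk 'NR==3 {print $5}')
    total=$((total + files))
done
echo "total files across OSTs: $total"
```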
Hi Dan,
Ah, I see. Sorry, no idea - it's been a few years since I last used ZFS,
and I've never used the Lustre ZFS backend.
Regards,
Mark
On Wed, 4 Oct 2023, Daniel Szkola wrote:
Hi Mark,
All nodes are using ZFS. OSTs, MDT, and MGT are all ZFS-based, so there's
really no way to fsck them. I could do a scrub, but that's not the same
thing. Is there a Lustre/ZFS equivalent of 'tune2fs -O [^]quota' for ZFS?
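There is no exact tune2fs analogue for ZFS, but ZFS does keep its own per-group object counts that can be compared against what Lustre reports. A sketch, where the dataset name is an assumption and the objused field requires a ZFS release with user/group object accounting enabled:

```shell
# Hypothetical dataset name; run on the MDS.
zfs groupspace -o name,used,objused mdtpool/mdt0
# 'objused' is ZFS's own per-group object (inode) count. If it matches
# the robinhood / 'lfs find' numbers rather than 'lfs quota', the stale
# accounting is on the Lustre side, not in ZFS.
```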
I'm guessing that at some point, a large number of files was removed
Hi Dan,
I think it gets corrected when you umount and fsck the OSTs themselves
(not lfsck). At least I recall seeing such messages when fsck'ing on 2.12.
Best,
Mark
On Wed, 4 Oct 2023, Daniel Szkola via lustre-discuss wrote:
No combination of lfsck runs has helped with this.
Again, robinhood shows 1796104 files for the group, and an 'lfs find -G gid' found
1796104 files as well.
So why is the quota command showing over 3 million inodes used?
There must be a way to force it to recount or clear all stale quota data and