Are there a lot of inodes moved to lost+found by the fsck, which contribute to 
the occupied quota now?
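
If the MDT can still be mounted as plain ldiskfs, one way to check might be something like the following (a sketch only; the device name and mount point are placeholders):

```shell
# Mount the MDT read-only as plain ldiskfs (device name is a placeholder)
mount -t ldiskfs -o ro /dev/mdt_dev /mnt/mdt-ldiskfs
# Count the entries that fsck moved into lost+found
ls /mnt/mdt-ldiskfs/lost+found | wc -l
umount /mnt/mdt-ldiskfs
```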

----- Original Message -----
From: Fernando Pérez <fpe...@icm.csic.es>
To: lustre-discuss@lists.lustre.org
Sent: Tue, 16 Apr 2019 16:24:13 +0200 (CEST)
Subject: Re: [lustre-discuss] lfsck repair quota

Thank you Rick.

I followed these steps for the ldiskfs OSTs and the MDT, but the quotas for all 
users are now more corrupted than before.

I also tried to run e2fsck on the ldiskfs OSTs and the MDT, but the problem was 
that e2fsck on the MDT ran very slowly (about 10 inodes per second, with more 
than 100 million inodes to scan).

According to the Lustre wiki I thought that lfsck could repair corrupted 
quotas:

http://wiki.lustre.org/Lustre_Quota_Troubleshooting

Regards.

============================================
Fernando Pérez
Institut de Ciències del Mar (CSIC)
Departament Oceanografía Física i Tecnològica
Passeig Marítim de la Barceloneta,37-49
08003 Barcelona
Phone:  (+34) 93 230 96 35
============================================

> On 16 Apr 2019, at 15:34, Mohr Jr, Richard Frank (Rick Mohr) 
> <rm...@utk.edu> wrote:
> 
> 
>> On Apr 15, 2019, at 10:54 AM, Fernando Perez <fpe...@icm.csic.es> wrote:
>> 
>> Could anyone confirm that the correct way to repair wrong quotas on an 
>> ldiskfs MDT is "lctl lfsck_start -t layout -A"?
> 
> As far as I know, lfsck doesn’t repair quota info. It only fixes internal 
> consistency within Lustre.
> 
> Whenever I have had to repair quotas, I just follow the procedure you did 
> (unmount everything, run “tune2fs -O ^quota <dev>”, run “tune2fs -O quota 
> <dev>”, and then remount).  But all my systems used ldiskfs, so I don’t know 
> if the ZFS OSTs introduce any sort of complication.  (Actually, I am not even 
> sure if/how you can regenerate quota info for ZFS.)
> 
> --
> Rick Mohr
> Senior HPC System Administrator
> National Institute for Computational Sciences
> http://www.nics.tennessee.edu
> 
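
The quota-regeneration procedure Rick describes can be sketched as follows (a sketch only, assuming an ldiskfs backend; the device name and mount point are placeholders, and each target must be unmounted from Lustre first):

```shell
# Repeat for each ldiskfs MDT/OST (placeholder device: /dev/lustre_target)
umount /mnt/lustre_target               # stop Lustre on this target first
tune2fs -O ^quota /dev/lustre_target    # drop the quota feature (discards the old quota files)
tune2fs -O quota /dev/lustre_target     # re-enable it; tune2fs rebuilds the usage accounting
mount -t lustre /dev/lustre_target /mnt/lustre_target
```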

_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org
