Hi Srijan,
After a 3rd run of the quota_fsck script, the quotas got fixed! Working
normally again.
Thank you for your help!
*João Baúto*
---
*Scientific Computing and Software Platform*
Champalimaud Research
Champalimaud Center for the Unknown
Av. Brasília, Doca de Pedrouços
Hi João,
I'd recommend going with the disable/enable of the quota, as that would
eventually do the same thing. That would be a better option than manually
changing the parameters in the said command.
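For reference, the sequence would look something like this (the volume name
"tank" and the limit value are placeholders for your setup; as far as I
recall, disabling quota drops the configured limits, so they have to be set
again after re-enabling):

# placeholders: volume "tank", directory /projectB, 10TB limit
gluster volume quota tank disable
gluster volume quota tank enable
gluster volume quota tank limit-usage /projectB 10TB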
--
Thanks and Regards,
SRIJAN SIVAKUMAR
Associate Software Engineer
Red Hat
Hi Srijan,
Before I do the disable/enable, I just want to check something with you. On
the other cluster, where the crawling is running, I can see the find command
and this process, which seems to be the one triggering the crawler (4
processes, one per brick, on all nodes):
/usr/sbin/glusterfs -s localhost
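In case it's useful, I'm spotting these with something like the following
(the real command line is much longer than the fragment above; the pattern
is just a guess based on what the process table shows here):

# list crawler-looking glusterfs processes on a node
ps aux | grep "[g]lusterfs -s localhost"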
Hi João,
If the crawl is not going on and the values are still not reflected
properly, then it means the crawl process has ended abruptly.
Yes, technically disabling and enabling the quota will trigger a crawl, but
it'd do a complete crawl of the filesystem, hence it would take time and be
resource intensive.
Hi Srijan,
I didn't get any result with that command, so I went to our other cluster
(we are merging two clusters; data is replicated) and activated the quota
feature on the same directory. Running the same command on each node, I get
output similar to yours. One process per brick, I'm assuming.
Hi Srijan,
Is there a way of getting the status of the crawl process?
We are going to expand this cluster, adding 12 new bricks (around 500 TB),
and we rely heavily on the quota feature to control the space usage for
each project. It's been running since Saturday (nothing changed) and I'm
unsure if
Hi João,
Yes, it'll take some time given the filesystem size, as it has to change the
xattrs at each level and then crawl upwards.
The stat is done by the script itself, so the crawl is initiated.
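If you want to see how far the accounting has progressed at a given level,
you can dump the quota xattrs directly on a brick backend path (the path
below is just an example based on your earlier mails; run it on the brick,
not on the client mount):

# dump quota-related xattrs (size, contri, dirty) for one directory
getfattr -d -m trusted.glusterfs.quota -e hex /tank/volume2/brick/projectB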
Regards,
Srijan Sivakumar
Hi Srijan & Strahil,
I ran the quota_fsck script mentioned in Hari's blog post on all bricks, and
it detected a lot of size mismatches.
The script was executed as,
- python quota_fsck.py --sub-dir projectB --fix-issues /mnt/tank
/tank/volume2/brick (in all nodes and bricks)
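In other words, roughly this on every node (only /tank/volume2/brick is
shown above; the other brick path is an assumption about the layout):

# sketch of the per-node run; brick paths are examples
for brick in /tank/volume1/brick /tank/volume2/brick; do
    python quota_fsck.py --sub-dir projectB --fix-issues /mnt/tank $brick
done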
Here is a
Hi João,
The quota accounting error is what we're looking at here. I think you've
already looked into the blog post by Hari and are using the script to fix
the accounting issue. That should help you sort this out.
Let me know if you face any issues while using it.
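One suggestion: if I remember the script's options correctly, you can run it
without --fix-issues first, so that it only reports the mismatches, e.g.
(same paths as in your mail):

# report-only pass; add --fix-issues <mount-point> once the output looks sane
python quota_fsck.py --sub-dir projectB /tank/volume2/brick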
Regards,
Srijan
Hi João,
Most probably enable/disable should help.
Have you checked all bricks on the ZFS?
Your example is for projectA vs projectB.
What about the 'projectB' directories on all bricks of the volume?
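For example, comparing the quota xattrs of the same directory on every node
(the node names below are hypothetical; the brick path is taken from your
earlier mail):

# compare trusted.glusterfs.quota.* values for projectB across all bricks
for node in node1 node2 node3 node4; do
    ssh $node "getfattr -d -m trusted.glusterfs.quota -e hex /tank/volume2/brick/projectB"
done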
If enable/disable doesn't help, I have an idea, but I have never tested it,
so I can't
Hi João,
Based on your output, it seems that the quota size is different on the 2
bricks.
Have you tried removing the quota and then recreating it? Maybe that would
be the easiest way to fix it.
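I mean something like this (the volume name and the limit value are just
examples):

# drop the limit on the directory, then set it again
gluster volume quota tank remove /projectB
gluster volume quota tank limit-usage /projectB 10TB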
Best Regards,
Strahil Nikolov
Hi Strahil,
I have tried removing the quota for that specific directory and setting it
again, but it didn't work (maybe it has to be a quota disable and enable
in the volume options). I'm currently testing a solution
by Hari with the quota_fsck.py script (https://medium.com/@harigowtham/
Hi all,
We have a 4-node distributed cluster with 2 bricks per node running Gluster
7.7 + ZFS. We use directory quotas to limit the space used by our members on
each project. Two days ago, we noticed inconsistent space usage reported by
Gluster in the quota list.
A small snippet of gluster volume
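For context, the per-directory usage comes from a command of this shape (the
volume name "tank" is a placeholder for ours):

gluster volume quota tank list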