Hi Hari,
thank you very much for the explanation and for your invaluable support.
Best regards,
Mauro
> On 11 Sep 2018, at 10:49, Hari Gowtham wrote:
>
> Hi Mauro,
>
> It was because the quota crawl takes some time and it was working on it.
> When we ran the fix-issues it makes changes to the backend and does a lookup.
Hi Mauro,
It was because the quota crawl takes some time and it was working on it.
When we ran the fix-issues it makes changes to the backend and does a lookup.
It takes time for the whole thing to reflect in the quota list command.
Earlier, it didn't reflect as it was still crawling. So this is th
Hi Hari,
good news for us!
A few seconds ago, I submitted the gluster quota list command in order to save
the current quota status.
[root@s01 auto]# gluster volume quota tier2 list /ASC
Path                 Hard-limit  Soft-limit  Used  Available  Soft-limit exceeded?
Hi Hari,
thank you very much for your support.
I will do everything you suggested and I will contact you as soon as all the
steps are completed.
Thank you,
Mauro
> On 10 Sep 2018, at 16:02, Hari Gowtham wrote:
>
> Hi Mauro,
>
> I went through the log file you have shared.
Hi Mauro,
I went through the log file you have shared.
I don't find any mismatch.
This can be because of various reasons:
1) The accounting that was wrong is now fine. But as per your comment above,
if this is the case, then the crawl should still be happening, which is why
it is not yet reflect
Dear Hari,
the log files that I attached to my last mail have been generated by running the
quota-fsck script after deleting the files. The quota-fsck script version that I
used is the one at the following link:
https://review.gluster.org/#/c/19179/9..9/extras/quota/quota_fsck.py
I didn't edit the log files,
On Mon, Sep 10, 2018 at 3:13 PM Mauro Tridici wrote:
>
>
> Dear Hari,
>
> I followed you suggestions, but, unfortunately, nothing is changed.
> I tried to execute both the quota-fsck script with the --fix-issues option and
> the "setfattr -n trusted.glusterfs.quota.dirty -v 0x3100" command against t
Hi,
Looking at the logs, I can see that the files:
/orientgate/ftp/climate/3_urban_adaptation_health/6_budapest_veszprem_hungary/RHMSS_CMCC-CM_NMMB_Balkan_8km_1971-2005
/orientgate/ftp/climate/3_urban_adaptation_health/6_budapest_veszprem_hungary/RHMSS_ERA40_NMMB_Balkan_8km_1971-2000
/orientgate/f
Hi Hari,
thank you very much for your help.
I will try to use the latest available version of the quota_fsck script and I
will provide you with feedback as soon as possible.
Thank you again for the detailed explanation.
Regards,
Mauro
> On 10 Sep 2018, at 09:17, Hari Gowtham wrote:
Hi Mauro,
The problem might be somewhere else, so setting the xattr and
doing the lookup might not have fixed the issue.
To resolve this we need to read the log file reported by the fsck
script. In this log file we need to look for the size reported by the
xattr (the value "SIZE:" in the log
Hi Hari, Hi Sanoj,
thank you very much for your patience and your support!
The problem has been solved following your instructions :-)
N.B.: in order to reduce the running time, I executed the “du” command as
follows:
for i in {1..12}
do
    du /gluster/mnt$i/brick/CSP/ans004/ftp
done
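The idea behind this loop (per-brick `du` sums adding up to the volume-wide figure that quota should report) can be sketched self-contained; the temporary directories below stand in for the real /gluster/mnt$i/brick paths, which only exist on the cluster:

```shell
# Mimic the per-brick du loop with mock brick directories and sum the results.
tmp=$(mktemp -d)
for i in 1 2 3; do
    mkdir -p "$tmp/mnt$i/brick/ftp"
    head -c 4096 /dev/zero > "$tmp/mnt$i/brick/ftp/data"   # 4 KB per mock brick
done
total=0
for i in 1 2 3; do
    kb=$(du -sk "$tmp/mnt$i/brick/ftp" | cut -f1)          # per-brick usage in KB
    total=$((total + kb))
done
echo "total KB across bricks: $total"
rm -rf "$tmp"
```

The sum of the per-brick values is what should match the "Used" column of the quota list output.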
and not on
Hi,
There was an accounting issue in your setup.
The directories ans004/ftp/CMCC-CM2-VHR4-CTR/atm/hist and ans004/ftp/CMCC-CM2-VHR4
had wrong size values on them.
To fix it, you will have to set the dirty xattr (an internal gluster
xattr) on these directories,
which will mark them for calculating the values
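One way to apply the fix above across all brick copies is to generate the commands into a helper script and review it before running. This sketch assumes the 12-bricks-per-node /gluster/mnt$i/brick layout seen in this thread; it only writes the commands out, since actually running them needs root access and a live brick:

```shell
# Generate, but do not execute, the dirty-xattr commands for each brick copy
# of the affected directory (paths taken from this thread; adjust as needed).
out=/tmp/dirty_xattr_cmds.sh
: > "$out"
for i in $(seq 1 12); do
    echo "setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 /gluster/mnt$i/brick/CSP/ans004/ftp/CMCC-CM2-VHR4-CTR/atm/hist" >> "$out"
done
# A lookup from a client mount then triggers the recalculation:
echo "stat /tier2/CSP/ans004/ftp/CMCC-CM2-VHR4-CTR/atm/hist" >> "$out"
cat "$out"
```

Review the generated file, then run it on each storage node as root.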
Hi Hari,
sorry for the late reply. Yes, the gluster volume is a single volume that is
spread between all the 3 nodes and has 36 bricks.
In attachment you can find a tar.gz file containing:
- gluster volume status command output;
- gluster volume info command output;
- the output of the following script execution
Hi Mauro,
Can you send the gluster v status command output?
Is it a single volume that is spread between all the 3 nodes and has 36 bricks?
If yes, you will have to run it on all the bricks.
In the command, use the sub-dir option if you are running it only for the
directory where the limit is set. Else, if you a
Hi Hari,
thank you very much for your answer.
I will try to use the script mentioned above, pointing to each backend brick.
So, if I understand correctly, since I have a gluster cluster composed of 3
nodes (with 12 bricks on each node), I have to execute the script 36 times.
Right?
You can find below t
Hi,
There is no explicit command to back up all the quota limits as per my
understanding; I need to look further into this.
But you can do the following to back them up and set them again:
"gluster volume quota <volname> list" will print all the quota
limits on that particular volume.
You will have to make a no
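The note-taking can be scripted; the sketch below parses a saved `gluster volume quota <volname> list` output into (path, hard-limit) pairs. The sample listing is fabricated for illustration, and the replay loop at the end needs a live volume, so it is shown commented out:

```shell
# Fabricated sample of `gluster volume quota tier2 list` output, for illustration.
cat > /tmp/quota_list.txt <<'EOF'
                  Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded?
---------------------------------------------------------------------------------------------------------
/ASC                                        10.0TB   80%(8.0TB)     4.5TB      5.5TB                   No
/CSP/ans004                                  5.0TB   80%(4.0TB)     3.2TB      1.8TB                   No
EOF
# Keep only path and hard-limit: this is the "note" to make before any reset.
awk 'NR>2 {print $1, $2}' /tmp/quota_list.txt > /tmp/quota_backup.txt
cat /tmp/quota_backup.txt
# Replaying the limits later would look like this (requires a live volume):
#   while read -r path limit; do
#       gluster volume quota tier2 limit-usage "$path" "$limit"
#   done < /tmp/quota_backup.txt
```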
Hi Sanoj,
could you provide me with the command that I need in order to back up all quota
limits?
If there is no solution for this kind of problem, I would like to try to follow
your “backup” suggestion.
Do you think that I should contact gluster developers too?
Thank you very much.
Regards,
Mauro
Hi Sanoj,
unfortunately the output of the command execution was not helpful.
[root@s01 ~]# find /tier2/CSP/ans004 | xargs getfattr -d -m. -e hex
[root@s01 ~]#
Do you have some other idea in order to detect the cause of the issue?
Thank you again,
Mauro
> Il giorno 05 lug 2018, alle ore 09:0
Hi Mauro,
Due to a script issue, not all the necessary xattrs were captured.
Could you provide the xattrs with..
find /tier2/CSP/ans004 | xargs getfattr -d -m. -e hex
Meanwhile, if you are being impacted, you could do the following:
back up quota limits
disable quota
enable quota
freshly set the limits.
Please
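The four steps above map onto plain gluster CLI calls; the dry run below only prints them (the volume name tier2 comes from this thread, while the /ASC limit value is a hypothetical placeholder):

```shell
# Print, rather than execute, the quota backup/reset cycle for volume tier2.
steps='gluster volume quota tier2 list > /tmp/quota_limits_backup.txt
gluster volume quota tier2 disable
gluster volume quota tier2 enable
gluster volume quota tier2 limit-usage /ASC 10.0TB'
printf '%s\n' "$steps"
```

The last step would be repeated for each limit saved in the backup.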
Dear Sanoj,
thank you very much for your support. I just downloaded and executed the script
you suggested. This is the full command I executed:
./quota_fsck_new.py --full-logs --sub-dir /tier2/CSP/ans004/ /gluster
In attachment, you can find the logs generated by the script. What can I do now?
Thank you v
Hi Mauro,
This may be an issue with the update of backend xattrs.
To RCA this further and provide a resolution, could you provide me with the
logs generated by running the following fsck script?
https://review.gluster.org/#/c/19179/6/extras/quota/quota_fsck.py
Try running the script and revert with the logs generated.
Dear Users,
I just noticed that, after some data deletions executed inside the
"/tier2/CSP/ans004" folder, the amount of used disk space reported by the quota
command doesn't reflect the value indicated by the du command.
Searching the web, it seems that this is a bug in previous versions of
GlusterFS, and it was