The problem must have started because of an upgrade to 3.7.12 from an older 
version. I'm not sure exactly how.

> On Aug 30, 2016, at 10:44 AM, Sergei Gerasenko <gera...@gmail.com> wrote:
> 
> It seems that it did the trick. The usage is being recalculated. I’m glad to 
> be posting a solution to the original problem on this thread. Too often, 
> threads contain only incomplete or partial solutions.
> 
> Thanks,
>   Sergei
> 
>> On Aug 29, 2016, at 3:41 PM, Sergei Gerasenko <sgerasenk...@gmail.com> wrote:
>> 
>> I found an informative thread on a similar problem:
>> 
>> http://www.spinics.net/lists/gluster-devel/msg18400.html
>> 
>> According to that thread, the solution is to disable the quota, which will 
>> clear the relevant xattrs, and then re-enable the quota, which should force 
>> a recalculation. I will try this tomorrow. 
>> 
>> On Thu, Aug 11, 2016 at 9:31 AM, Sergei Gerasenko <gera...@gmail.com> wrote:
>> Hi Selvaganesh,
>> 
>> Thanks so much for your help. I probably didn’t have that option on because 
>> I originally had a lower version of gluster and then upgraded. I turned the 
>> option on just now.
>> 
>> The usage is still off. Should I wait a certain time?
>> 
>> Thanks,
>>   Sergei
>> 
>>> On Aug 9, 2016, at 7:26 AM, Manikandan Selvaganesh <mselv...@redhat.com> wrote:
>>> 
>>> Hi Sergei,
>>> 
>>> When quota is enabled, quota-deem-statfs should be set to ON (it is by 
>>> default in recent versions). But judging from your 'gluster v info' 
>>> output, it appears that quota-deem-statfs is not on. 
>>> 
>>> Could you please check and confirm this in 
>>> /var/lib/glusterd/vols/<VOLNAME>/info? If you do not find the option 
>>> 'features.quota-deem-statfs=on', then this feature is turned off. Did you 
>>> turn it off? You can turn it on with 
>>> 'gluster volume set <VOLNAME> quota-deem-statfs on'.
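A quick way to check and fix this could look like the following sketch. It assumes <VOLNAME> is `ftp_volume` (the volume named later in this thread) and that the commands run on a node where glusterd keeps its state under /var/lib/glusterd:

```shell
# Look for the option in glusterd's on-disk volume info file.
grep quota-deem-statfs /var/lib/glusterd/vols/ftp_volume/info

# If the line 'features.quota-deem-statfs=on' is missing, enable it:
gluster volume set ftp_volume quota-deem-statfs on

# Verify it now shows up under 'Options Reconfigured':
gluster volume info ftp_volume
```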
>>> 
>>> To know more about this feature, please refer to [1]. 
>>> 
>>> [1] 
>>> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Directory%20Quota/
>>>  
>>> 
>>> 
>>> On Tue, Aug 9, 2016 at 5:43 PM, Sergei Gerasenko <gera...@gmail.com> wrote:
>>> Hi,
>>> 
>>> The gluster version is 3.7.12. Here’s the output of `gluster info`:
>>> 
>>> Volume Name: ftp_volume
>>> Type: Distributed-Replicate
>>> Volume ID: SOME_VOLUME_ID
>>> Status: Started
>>> Number of Bricks: 3 x 2 = 6
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: host03:/data/ftp_gluster_brick
>>> Brick2: host04:/data/ftp_gluster_brick
>>> Brick3: host05:/data/ftp_gluster_brick
>>> Brick4: host06:/data/ftp_gluster_brick
>>> Brick5: host07:/data/ftp_gluster_brick
>>> Brick6: host08:/data/ftp_gluster_brick
>>> Options Reconfigured:
>>> features.quota: on
>>> 
>>> Thanks for the reply!! I thought nobody would reply at this point :)
>>> 
>>> Sergei
>>> 
>>>> On Aug 9, 2016, at 6:03 AM, Manikandan Selvaganesh <mselv...@redhat.com> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> Sorry, I missed the mail. May I know which version of gluster you are 
>>>> using? Please also paste the output of 'gluster v info'.
>>>> 
>>>> On Sat, Aug 6, 2016 at 8:19 AM, Sergei Gerasenko <gera...@gmail.com> wrote:
>>>> Hi,
>>>> 
>>>> I'm playing with quotas, and the quota list command claims that one of 
>>>> the directories uses 3T, whereas the du command says only 512G is used.
>>>> 
>>>> Anything I can do to force a re-calc, re-crawl, etc?
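To illustrate the discrepancy, the two numbers being compared would come from commands along these lines. This is a sketch: the volume name `ftp_volume` comes from the `gluster info` output in this thread, while the directory path and the mount point /mnt/ftp_volume are placeholders:

```shell
# Gluster's quota accounting for the directory (the 3T figure):
gluster volume quota ftp_volume list /some/dir

# Actual on-disk usage as seen through the client mount (the 512G figure):
du -sh /mnt/ftp_volume/some/dir
```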
>>>> 
>>>> Thanks,
>>>>  Sergei
>>>> 
>>>> _______________________________________________
>>>> Gluster-users mailing list
>>>> Gluster-users@gluster.org
>>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>>> 
>>>> 
>>>> 
>>>> -- 
>>>> Regards,
>>>> Manikandan Selvaganesh.
>>> 
>>> 
>>> 
>>> 
>>> -- 
>>> Regards,
>>> Manikandan Selvaganesh.
>> 
>> 
> 

