Hi,

Thanks for the link to the bug. We will hopefully be moving to 3.12 soon, so 
I assume this bug is also fixed there.

Best regards,
M.

------- Original Message -------

On February 27, 2018 9:38 AM, Hari Gowtham <hgowt...@redhat.com> wrote:

> 
> Hi Mabi,
> 
> The bug is fixed in 3.11. For 3.10 it is yet to be backported and
> made available.
> 
> The bug is https://bugzilla.redhat.com/show_bug.cgi?id=1418259.
> 
> On Sat, Feb 24, 2018 at 4:05 PM, mabi m...@protonmail.ch wrote:
> 
> > Dear Hari,
> > 
> > Thank you for getting back to me after having analysed the problem.
> > 
> > As you suggested, I ran "gluster volume quota <VOLNAME> list <PATH>" for 
> > all of my directories which have a quota, and found out that one 
> > directory quota was missing (stale), as you can see below:
> > 
> > $ gluster volume quota myvolume list /demo.domain.tld
> >                   Path                   Hard-limit  Soft-limit      Used  Available  Soft-limit exceeded? Hard-limit exceeded?
> > ----------------------------------------------------------------------------------------------------------------------------------------------
> > /demo.domain.tld                                N/A         N/A     8.0MB        N/A                  N/A                  N/A
> > 
> > So, as you suggested, I added the quota on that directory again, and now 
> > "list" finally works and shows the quotas for every directory as I 
> > defined them. That did the trick!
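> > 
> > For anyone else hitting this: re-adding the limit is just the usual 
> > limit-usage command (the 10GB below is only an example value, use 
> > whatever limit the directory had before):
> > 
> > $ gluster volume quota myvolume limit-usage /demo.domain.tld 10GB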
> > 
> > Now, do you know if this bug is already corrected in a newer release of 
> > GlusterFS? If not, do you know when it will be fixed?
> > 
> > Again many thanks for your help here!
> > 
> > Best regards,
> > 
> > M.
> > 
> > ------- Original Message -------
> > 
> > On February 23, 2018 7:45 AM, Hari Gowtham hgowt...@redhat.com wrote:
> > 
> > > Hi,
> > > 
> > > There is a bug in 3.10 which prevents the quota list command from
> > > producing output if the last entry in the conf file is a stale entry.
> > > 
> > > The workaround for this is to remove the stale entry at the end. (If
> > > the last two entries are stale then both have to be removed, and so on,
> > > until the last entry in the conf file is a valid entry.)
> > > 
> > > This can be avoided by adding a new limit. As the new limit you added
> > > didn't work, there is another way to check this.
> > > 
> > > Try the quota list command with a specific path mentioned in the command:
> > > 
> > > gluster volume quota <VOLNAME> list <PATH>
> > > 
> > > Make sure this path exists and has a limit set. If this works, then you
> > > need to clean up the last stale entry. If this doesn't work, we need to
> > > look further.
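> > > 
> > > If it comes to cleaning up the stale entry, something along these lines
> > > should work. This is only a rough sketch: it assumes the on-disk format
> > > used since 3.7, where each entry after the header is 17 bytes (a 16-byte
> > > gfid plus a type byte). Verify the sizes on your setup and keep a backup:
> > > 
> > > # back up the conf file first
> > > cp /var/lib/glusterd/vols/<VOLNAME>/quota.conf /var/lib/glusterd/vols/<VOLNAME>/quota.conf.bak
> > > # drop the last (stale) 17-byte entry
> > > truncate -s -17 /var/lib/glusterd/vols/<VOLNAME>/quota.conf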
> > > 
> > > Thanks Sanoj for the guidance.
> > > 
> > > On Wed, Feb 14, 2018 at 1:36 AM, mabi m...@protonmail.ch wrote:
> > > 
> > > > I tried to set the limits as you suggested by running the following 
> > > > command:
> > > > 
> > > > $ sudo gluster volume quota myvolume limit-usage /directory 200GB
> > > > 
> > > > volume quota : success
> > > > 
> > > > But when I then list the quotas there is still nothing, so nothing 
> > > > really happened.
> > > > 
> > > > I also tried to run stat on all directories which have a quota but 
> > > > nothing happened either.
> > > > 
> > > > I will send you all the other logfiles tomorrow, as requested.
> > > > 
> > > > -------- Original Message --------
> > > > 
> > > > On February 13, 2018 12:20 PM, Hari Gowtham hgowt...@redhat.com wrote:
> > > > 
> > > > > Were you able to set new limits after seeing this error?
> > > > > 
> > > > > On Tue, Feb 13, 2018 at 4:19 PM, Hari Gowtham hgowt...@redhat.com 
> > > > > wrote:
> > > > > 
> > > > > > Yes, I need the log files from that period. The rotated logs from after
> > > > > > hitting the issue aren't necessary, but the ones from before hitting it
> > > > > > are needed (not just from when you hit it, but from even before that).
> > > > > > 
> > > > > > Yes, you have to do a stat from the client through the fuse mount.
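> > > > > > 
> > > > > > For example (the mount point and directory below are just placeholders):
> > > > > > 
> > > > > > mount -t glusterfs localhost:/myvolume /mnt/myvolume
> > > > > > stat /mnt/myvolume/<directory-with-limit>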
> > > > > > 
> > > > > > On Tue, Feb 13, 2018 at 3:56 PM, mabi m...@protonmail.ch wrote:
> > > > > > 
> > > > > > > Thank you for your answer. This problem seems to have started 
> > > > > > > last week, so should I also send you the same log files but 
> > > > > > > for last week? I think logrotate rotates them on a weekly basis.
> > > > > > > 
> > > > > > > The only two quota commands we use are the following:
> > > > > > > 
> > > > > > > gluster volume quota myvolume limit-usage /directory 10GB
> > > > > > > 
> > > > > > > gluster volume quota myvolume list
> > > > > > > 
> > > > > > > basically to set a new quota or to list the current quotas. Yes, the 
> > > > > > > quota list was working in the past, but we already had a similar 
> > > > > > > issue where the quotas disappeared, back in August 2017:
> > > > > > > 
> > > > > > > http://lists.gluster.org/pipermail/gluster-users/2017-August/031946.html
> > > > > > > 
> > > > > > > In the meantime the only thing we did was to upgrade from 3.8 to 
> > > > > > > 3.10.
> > > > > > > 
> > > > > > > There are actually no errors to be seen using any gluster 
> > > > > > > commands. "quota myvolume list" simply returns nothing.
> > > > > > > 
> > > > > > > In order to look up the directories, should I run "stat" on them? 
> > > > > > > And if yes, should I do that on a client through the fuse mount?
> > > > > > > 
> > > > > > > -------- Original Message --------
> > > > > > > 
> > > > > > > On February 13, 2018 10:58 AM, Hari Gowtham hgowt...@redhat.com 
> > > > > > > wrote:
> > > > > > > 
> > > > > > > > The logs provided are from the 11th; you had seen the issue a while
> > > > > > > > before that.
> > > > > > > > 
> > > > > > > > The logs help us to know if something has actually gone wrong. Once
> > > > > > > > something goes wrong the output might get affected, and I need to know
> > > > > > > > what went wrong. Which means I need the logs from the beginning.
> > > > > > > > 
> > > > > > > > And I need to know a few more things:
> > > > > > > > 
> > > > > > > > Was the quota list command working as expected at the beginning?
> > > > > > > > If yes, what commands were issued before you noticed this problem?
> > > > > > > > 
> > > > > > > > Is there any other error that you see other than this?
> > > > > > > > 
> > > > > > > > And can you try looking up the directories the limits are set on, and
> > > > > > > > check if that fixes the error?
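> > > > > > > > 
> > > > > > > > Something like this from the fuse mount should do (the directory
> > > > > > > > names are placeholders for the paths you set limits on):
> > > > > > > > 
> > > > > > > > for d in /directory1 /directory2; do stat "/mnt/myvolume$d" > /dev/null; done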
> > > > > > > > 
> > > > > > > > > -------- Original Message --------
> > > > > > > > > 
> > > > > > > > > On February 13, 2018 10:44 AM, mabi m...@protonmail.ch wrote:
> > > > > > > > > 
> > > > > > > > > > Hi Hari,
> > > > > > > > > > 
> > > > > > > > > > Sure, no problem. I will send you another mail in a minute 
> > > > > > > > > > where you can download all the relevant log files, including 
> > > > > > > > > > the quota.conf binary file. Let me know if you need 
> > > > > > > > > > anything else. In the meantime, here below is the output of 
> > > > > > > > > > volume status.
> > > > > > > > > > 
> > > > > > > > > > Best regards,
> > > > > > > > > > 
> > > > > > > > > > M.
> > > > > > > > > > 
> > > > > > > > > > Status of volume: myvolume
> > > > > > > > > > Gluster process                                         TCP Port  RDMA Port  Online  Pid
> > > > > > > > > > ------------------------------------------------------------------------------
> > > > > > > > > > Brick gfs1a.domain.local:/data/myvolume/brick           49153     0          Y       3214
> > > > > > > > > > Brick gfs1b.domain.local:/data/myvolume/brick           49154     0          Y       3256
> > > > > > > > > > Brick gfs1c.domain.local:/srv/glusterfs/myvolume/brick  49153     0          Y       515
> > > > > > > > > > Self-heal Daemon on localhost                           N/A       N/A        Y       3186
> > > > > > > > > > Quota Daemon on localhost                               N/A       N/A        Y       3195
> > > > > > > > > > Self-heal Daemon on gfs1b.domain.local                  N/A       N/A        Y       3217
> > > > > > > > > > Quota Daemon on gfs1b.domain.local                      N/A       N/A        Y       3229
> > > > > > > > > > Self-heal Daemon on gfs1c.domain.local                  N/A       N/A        Y       486
> > > > > > > > > > Quota Daemon on gfs1c.domain.local                      N/A       N/A        Y       495
> > > > > > > > > > 
> > > > > > > > > > Task Status of Volume myvolume
> > > > > > > > > > ------------------------------------------------------------------------------
> > > > > > > > > > There are no active volume tasks
> > > > > > > > > > 
> > > > > > > > > > -------- Original Message --------
> > > > > > > > > > 
> > > > > > > > > > On February 13, 2018 10:09 AM, Hari Gowtham 
> > > > > > > > > > hgowt...@redhat.com wrote:
> > > > > > > > > > 
> > > > > > > > > > > Hi,
> > > > > > > > > > > 
> > > > > > > > > > > A part of the log won't be enough to debug the issue; I need the
> > > > > > > > > > > whole log, up to date. You can send it as attachments.
> > > > > > > > > > > 
> > > > > > > > > > > Yes the quota.conf is a binary file.
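> > > > > > > > > > > 
> > > > > > > > > > > (That's expected. If you want to peek at it, a plain hex dump
> > > > > > > > > > > works, e.g.: xxd /var/lib/glusterd/vols/myvolume/quota.conf | head)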
> > > > > > > > > > > 
> > > > > > > > > > > And I need the volume status output too.
> > > > > > > > > > > 
> > > > > > > > > > > On Tue, Feb 13, 2018 at 1:56 PM, mabi m...@protonmail.ch 
> > > > > > > > > > > wrote:
> > > > > > > > > > > 
> > > > > > > > > > > > Hi Hari,
> > > > > > > > > > > > 
> > > > > > > > > > > > Sorry for not providing you more details from the start. Below 
> > > > > > > > > > > > you will find all the relevant log entries and info. Regarding 
> > > > > > > > > > > > the quota.conf file, I have found one for my volume but it is a 
> > > > > > > > > > > > binary file. Is it supposed to be binary or text?
> > > > > > > > > > > > 
> > > > > > > > > > > > Regards,
> > > > > > > > > > > > 
> > > > > > > > > > > > M.
> > > > > > > > > > > > 
> > > > > > > > > > > > *** gluster volume info myvolume ***
> > > > > > > > > > > > 
> > > > > > > > > > > > Volume Name: myvolume
> > > > > > > > > > > > Type: Replicate
> > > > > > > > > > > > Volume ID: e7a40a1b-45c9-4d3c-bb19-0c59b4eceec5
> > > > > > > > > > > > Status: Started
> > > > > > > > > > > > Snapshot Count: 0
> > > > > > > > > > > > Number of Bricks: 1 x (2 + 1) = 3
> > > > > > > > > > > > Transport-type: tcp
> > > > > > > > > > > > Bricks:
> > > > > > > > > > > > Brick1: gfs1a.domain.local:/data/myvolume/brick
> > > > > > > > > > > > Brick2: gfs1b.domain.local:/data/myvolume/brick
> > > > > > > > > > > > Brick3: gfs1c.domain.local:/srv/glusterfs/myvolume/brick (arbiter)
> > > > > > > > > > > > Options Reconfigured:
> > > > > > > > > > > > server.event-threads: 4
> > > > > > > > > > > > client.event-threads: 4
> > > > > > > > > > > > performance.readdir-ahead: on
> > > > > > > > > > > > nfs.disable: on
> > > > > > > > > > > > features.quota: on
> > > > > > > > > > > > features.inode-quota: on
> > > > > > > > > > > > features.quota-deem-statfs: on
> > > > > > > > > > > > transport.address-family: inet
> > > > > > > > > > > > 
> > > > > > > > > > > > *** /var/log/glusterfs/glusterd.log ***
> > > > > > > > > > > > 
> > > > > > > > > > > > [2018-02-13 08:16:09.929568] E [MSGID: 101042] [compat.c:569:gf_umount_lazy]
> > > > > > > > > > > > 0-management: Lazy unmount of /var/run/gluster/myvolume_quota_list/
> > > > > > > > > > > > [2018-02-13 08:16:28.596527] I [MSGID: 106499]
> > > > > > > > > > > > [glusterd-handler.c:4363:__glusterd_handle_status_volume] 0-management:
> > > > > > > > > > > > Received status volume req for volume myvolume
> > > > > > > > > > > > [2018-02-13 08:16:28.601097] I [MSGID: 106419]
> > > > > > > > > > > > [glusterd-utils.c:6110:glusterd_add_inode_size_to_dict] 0-management: the
> > > > > > > > > > > > brick on data/myvolume (zfs) uses dynamic inode sizes
> > > > > > > > > > > > 
> > > > > > > > > > > > *** /var/log/glusterfs/cmd_history.log ***
> > > > > > > > > > > > 
> > > > > > > > > > > > [2018-02-13 08:16:28.605478] : volume status myvolume detail : SUCCESS
> > > > > > > > > > > > 
> > > > > > > > > > > > *** /var/log/glusterfs/quota-mount-myvolume.log ***
> > > > > > > > > > > > 
> > > > > > > > > > > > [2018-02-13 08:16:09.934117] I [MSGID: 100030] [glusterfsd.c:2503:main]
> > > > > > > > > > > > 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.10.7
> > > > > > > > > > > > (args: /usr/sbin/glusterfs --volfile-server localhost --volfile-id myvolume
> > > > > > > > > > > > -l /var/log/glusterfs/quota-mount-myvolume.log -p
> > > > > > > > > > > > /var/run/gluster/myvolume_quota_list.pid --client-pid -5
> > > > > > > > > > > > /var/run/gluster/myvolume_quota_list/)
> > > > > > > > > > > > [2018-02-13 08:16:09.940432] I [MSGID: 101190]
> > > > > > > > > > > > [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
> > > > > > > > > > > > [2018-02-13 08:16:09.940491] E [socket.c:2327:socket_connect_finish]
> > > > > > > > > > > > 0-glusterfs: connection to ::1:24007 failed (Connection refused); disconnecting socket
> > > > > > > > > > > > [2018-02-13 08:16:09.940519] I [glusterfsd-mgmt.c:2134:mgmt_rpc_notify]
> > > > > > > > > > > > 0-glusterfsd-mgmt: disconnected from remote-host: localhost
> > > > > > > > > > > > [2018-02-13 08:16:13.943827] I [afr.c:94:fix_quorum_options]
> > > > > > > > > > > > 0-myvolume-replicate-0: reindeer: incoming qtype = none
> > > > > > > > > > > > [2018-02-13 08:16:13.943857] I [afr.c:116:fix_quorum_options]
> > > > > > > > > > > > 0-myvolume-replicate-0: reindeer: quorum_count = 2147483647
> > > > > > > > > > > > [2018-02-13 08:16:13.945194] I [MSGID: 101190]
> > > > > > > > > > > > [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
> > > > > > > > > > > > [2018-02-13 08:16:13.945237] I [MSGID: 101190]
> > > > > > > > > > > > [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 4
> > > > > > > > > > > > [2018-02-13 08:16:13.945247] I [MSGID: 101190]
> > > > > > > > > > > > [event-epoll.c:629:event_dispatch_epoll_worker] 0-epoll: Started thread with index 3
> > > > > > > > > > > > [2018-02-13 08:16:13.945941] W [MSGID: 101174]
> > > > > > > > > > > > [graph.c:361:_log_if_unknown_option] 0-myvolume-readdir-ahead: option
> > > > > > > > > > > > 'parallel-readdir' is not recognized
> > > > > > > > > > > > [2018-02-13 08:16:13.946342] I [MSGID: 114020] [client.c:2352:notify]
> > > > > > > > > > > > 0-myvolume-client-0: parent translators are ready, attempting connect on transport
> > > > > > > > > > > > [2018-02-13 08:16:13.946789] I [MSGID: 114020] [client.c:2352:notify]
> > > > > > > > > > > > 0-myvolume-client-1: parent translators are ready, attempting connect on transport
> > > > > > > > > > > > [2018-02-13 08:16:13.947151] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
> > > > > > > > > > > > 0-myvolume-client-0: changing port to 49153 (from 0)
> > > > > > > > > > > > [2018-02-13 08:16:13.947846] I [MSGID: 114057]
> > > > > > > > > > > > [client-handshake.c:1451:select_server_supported_programs]
> > > > > > > > > > > > 0-myvolume-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
> > > > > > > > > > > > [2018-02-13 08:16:13.948073] I [MSGID: 114020] [client.c:2352:notify]
> > > > > > > > > > > > 0-myvolume-client-2: parent translators are ready, attempting connect on transport
> > > > > > > > > > > > [2018-02-13 08:16:13.948579] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
> > > > > > > > > > > > 0-myvolume-client-1: changing port to 49154 (from 0)
> > > > > > > > > > > > [2018-02-13 08:16:13.948699] I [MSGID: 114046]
> > > > > > > > > > > > [client-handshake.c:1216:client_setvolume_cbk] 0-myvolume-client-0:
> > > > > > > > > > > > Connected to myvolume-client-0, attached to remote volume '/data/myvolume/brick'.
> > > > > > > > > > > > [2018-02-13 08:16:13.948747] I [MSGID: 114047]
> > > > > > > > > > > > [client-handshake.c:1227:client_setvolume_cbk] 0-myvolume-client-0: Server
> > > > > > > > > > > > and Client lk-version numbers are not same, reopening the fds
> > > > > > > > > > > > [2018-02-13 08:16:13.948842] I [MSGID: 108005]
> > > > > > > > > > > > [afr-common.c:4813:afr_notify] 0-myvolume-replicate-0: Subvolume
> > > > > > > > > > > > 'myvolume-client-0' came back up; going online.
> > > > > > > > > > > > 
> > > > > > > > > > > > Final graph:
> > > > > > > > > > > > 
> > > > > > > > > > > > +------------------------------------------------------------------------------+
> > > > > > > > > > > >   1: volume myvolume-client-0
> > > > > > > > > > > >   2:     type protocol/client
> > > > > > > > > > > >   3:     option opversion 31004
> > > > > > > > > > > >   4:     option clnt-lk-version 1
> > > > > > > > > > > >   5:     option volfile-checksum 0
> > > > > > > > > > > >   6:     option volfile-key myvolume
> > > > > > > > > > > >   7:     option client-version 3.10.7
> > > > > > > > > > > >   8:     option process-uuid gfs1a-14348-2018/02/13-08:16:09:933625-myvolume-client-0-0-0
> > > > > > > > > > > >   9:     option fops-version 1298437
> > > > > > > > > > > >  10:     option ping-timeout 42
> > > > > > > > > > > >  11:     option remote-host gfs1a.domain.local
> > > > > > > > > > > >  12:     option remote-subvolume /data/myvolume/brick
> > > > > > > > > > > >  13:     option transport-type socket
> > > > > > > > > > > >  14:     option transport.address-family inet
> > > > > > > > > > > >  15:     option username bea3e634-e174-4bb3-a1d6-25b09d03b536
> > > > > > > > > > > >  16:     option password 3a6f98bd-795e-4ec4-adfe-42c61ccfa0a6
> > > > > > > > > > > >  17:     option event-threads 4
> > > > > > > > > > > >  18:     option transport.tcp-user-timeout 0
> > > > > > > > > > > >  19:     option transport.socket.keepalive-time 20
> > > > > > > > > > > >  20:     option transport.socket.keepalive-interval 2
> > > > > > > > > > > >  21:     option transport.socket.keepalive-count 9
> > > > > > > > > > > > [2018-02-13 08:16:13.949147] I [MSGID: 114035]
> > > > > > > > > > > > [client-handshake.c:202:client_set_lk_version_cbk] 0-myvolume-client-0:
> > > > > > > > > > > > Server lk version = 1
> > > > > > > > > > > >  22:     option send-gids true
> > > > > > > > > > > >  23: end-volume
> > > > > > > > > > > >  24:
> > > > > > > > > > > >  25: volume myvolume-client-1
> > > > > > > > > > > >  26:     type protocol/client
> > > > > > > > > > > >  27:     option ping-timeout 42
> > > > > > > > > > > >  28:     option remote-host gfs1b.domain.local
> > > > > > > > > > > >  29:     option remote-subvolume /data/myvolume/brick
> > > > > > > > > > > >  30:     option transport-type socket
> > > > > > > > > > > >  31:     option transport.address-family inet
> > > > > > > > > > > >  32:     option username bea3e634-e174-4bb3-a1d6-25b09d03b536
> > > > > > > > > > > >  33:     option password 3a6f98bd-795e-4ec4-adfe-42c61ccfa0a6
> > > > > > > > > > > >  34:     option event-threads 4
> > > > > > > > > > > >  35:     option transport.tcp-user-timeout 0
> > > > > > > > > > > >  36:     option transport.socket.keepalive-time 20
> > > > > > > > > > > >  37:     option transport.socket.keepalive-interval 2
> > > > > > > > > > > >  38:     option transport.socket.keepalive-count 9
> > > > > > > > > > > >  39:     option send-gids true
> > > > > > > > > > > >  40: end-volume
> > > > > > > > > > > >  41:
> > > > > > > > > > > >  42: volume myvolume-client-2
> > > > > > > > > > > >  43:     type protocol/client
> > > > > > > > > > > >  44:     option ping-timeout 42
> > > > > > > > > > > >  45:     option remote-host gfs1c.domain.local
> > > > > > > > > > > >  46:     option remote-subvolume /srv/glusterfs/myvolume/brick
> > > > > > > > > > > >  47:     option transport-type socket
> > > > > > > > > > > >  48:     option transport.address-family inet
> > > > > > > > > > > >  49:     option username bea3e634-e174-4bb3-a1d6-25b09d03b536
> > > > > > > > > > > >  50:     option password 3a6f98bd-795e-4ec4-adfe-42c61ccfa0a6
> > > > > > > > > > > >  51:     option event-threads 4
> > > > > > > > > > > >  52:     option transport.tcp-user-timeout 0
> > > > > > > > > > > >  53:     option transport.socket.keepalive-time 20
> > > > > > > > > > > >  54:     option transport.socket.keepalive-interval 2
> > > > > > > > > > > >  55:     option transport.socket.keepalive-count 9
> > > > > > > > > > > >  56:     option send-gids true
> > > > > > > > > > > >  57: end-volume
> > > > > > > > > > > >  58:
> > > > > > > > > > > >  59: volume myvolume-replicate-0
> > > > > > > > > > > >  60:     type cluster/replicate
> > > > > > > > > > > >  61:     option afr-pending-xattr myvolume-client-0,myvolume-client-1,myvolume-client-2
> > > > > > > > > > > >  62:     option arbiter-count 1
> > > > > > > > > > > >  63:     option use-compound-fops off
> > > > > > > > > > > >  64:     subvolumes myvolume-client-0 myvolume-client-1 myvolume-client-2
> > > > > > > > > > > >  65: end-volume
> > > > > > > > > > > > [2018-02-13 08:16:13.949442] I [rpc-clnt.c:2000:rpc_clnt_reconfig]
> > > > > > > > > > > > 0-myvolume-client-2: changing port to 49153 (from 0)
> > > > > > > > > > > >  66:
> > > > > > > > > > > >  67: volume myvolume-dht
> > > > > > > > > > > >  68:     type cluster/distribute
> > > > > > > > > > > >  69:     option lock-migration off
> > > > > > > > > > > >  70:     subvolumes myvolume-replicate-0
> > > > > > > > > > > >  71: end-volume
> > > > > > > > > > > >  72:
> > > > > > > > > > > >  73: volume myvolume-write-behind
> > > > > > > > > > > >  74:     type performance/write-behind
> > > > > > > > > > > >  75:     subvolumes myvolume-dht
> > > > > > > > > > > >  76: end-volume
> > > > > > > > > > > >  77:
> > > > > > > > > > > >  78: volume myvolume-read-ahead
> > > > > > > > > > > >  79:     type performance/read-ahead
> > > > > > > > > > > >  80:     subvolumes myvolume-write-behind
> > > > > > > > > > > >  81: end-volume
> > > > > > > > > > > >  82:
> > > > > > > > > > > >  83: volume myvolume-readdir-ahead
> > > > > > > > > > > >  84:     type performance/readdir-ahead
> > > > > > > > > > > >  85:     option parallel-readdir off
> > > > > > > > > > > >  86:     option rda-request-size 131072
> > > > > > > > > > > >  87:     option rda-cache-limit 10MB
> > > > > > > > > > > >  88:     subvolumes myvolume-read-ahead
> > > > > > > > > > > >  89: end-volume
> > > > > > > > > > > >  90:
> > > > > > > > > > > >  91: volume myvolume-io-cache
> > > > > > > > > > > >  92:     type performance/io-cache
> > > > > > > > > > > >  93:     subvolumes myvolume-readdir-ahead
> > > > > > > > > > > >  94: end-volume
> > > > > > > > > > > >  95:
> > > > > > > > > > > >  96: volume myvolume-quick-read
> > > > > > > > > > > >  97:     type performance/quick-read
> > > > > > > > > > > >  98:     subvolumes myvolume-io-cache
> > > > > > > > > > > >  99: end-volume
> > > > > > > > > > > > 100:
> > > > > > > > > > > > 101: volume myvolume-open-behind
> > > > > > > > > > > > 102:     type performance/open-behind
> > > > > > > > > > > > 103:     subvolumes myvolume-quick-read
> > > > > > > > > > > > 104: end-volume
> > > > > > > > > > > > 105:
> > > > > > > > > > > > 106: volume myvolume-md-cache
> > > > > > > > > > > > 107:     type performance/md-cache
> > > > > > > > > > > > 108:     subvolumes myvolume-open-behind
> > > > > > > > > > > > 109: end-volume
> > > > > > > > > > > > 110:
> > > > > > > > > > > > 111: volume myvolume
> > > > > > > > > > > > 112:     type debug/io-stats
> > > > > > > > > > > > 113:     option log-level INFO
> > > > > > > > > > > > 114:     option latency-measurement off
> > > > > > > > > > > > 115:     option count-fop-hits off
> > > > > > > > > > > > 116:     subvolumes myvolume-md-cache
> > > > > > > > > > > > 117: end-volume
> > > > > > > > > > > > 118:
> > > > > > > > > > > > 119: volume meta-autoload
> > > > > > > > > > > > 120:     type meta
> > > > > > > > > > > > 121:     subvolumes myvolume
> > > > > > > > > > > > 122: end-volume
> > > > > > > > > > > > 123:
> > > > > > > > > > > > +------------------------------------------------------------------------------+
> > > > > > > > > > > > 
> > > > > > > > > > > > [2018-02-13 08:16:13.949813] I [MSGID: 114057]
> > > > > > > > > > > > [client-handshake.c:1451:select_server_supported_programs]
> > > > > > > > > > > > 0-myvolume-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
> > > > > > > > > > > > [2018-02-13 08:16:13.950686] I [MSGID: 114057]
> > > > > > > > > > > > [client-handshake.c:1451:select_server_supported_programs]
> > > > > > > > > > > > 0-myvolume-client-2: Using Program GlusterFS 3.3, Num (1298437), Version (330)
> > > > > > > > > > > > [2018-02-13 08:16:13.950698] I [MSGID: 114046]
> > > > > > > > > > > > [client-handshake.c:1216:client_setvolume_cbk] 0-myvolume-client-1:
> > > > > > > > > > > > Connected to myvolume-client-1, attached to remote volume '/data/myvolume/brick'.
> > > > > > > > > > > > [2018-02-13 08:16:13.950759] I [MSGID: 114047]
> > > > > > > > > > > > [client-handshake.c:1227:client_setvolume_cbk] 0-myvolume-client-1: Server
> > > > > > > > > > > > and Client lk-version numbers are not same, reopening the fds
> > > > > > > > > > > > [2018-02-13 08:16:13.951009] I [MSGID: 114035]
> > > > > > > > > > > > [client-handshake.c:202:client_set_lk_version_cbk] 0-myvolume-client-1:
> > > > > > > > > > > > Server lk version = 1
> > > > > > > > > > > > [2018-02-13 08:16:13.951320] I [MSGID: 114046]
> > > > > > > > > > > > [client-handshake.c:1216:client_setvolume_cbk] 0-myvolume-client-2:
> > > > > > > > > > > > Connected to myvolume-client-2, attached to remote volume '/srv/glusterfs/myvolume/brick'.
> > > > > > > > > > > > [2018-02-13 08:16:13.951348] I [MSGID: 114047]
> > > > > > > > > > > > [client-handshake.c:1227:client_setvolume_cbk] 0-myvolume-client-2: Server
> > > > > > > > > > > > and Client lk-version numbers are not same, reopening the fds
> > > > > > > > > > > > [2018-02-13 08:16:13.952626] I [MSGID: 114035]
> > > > > > > > > > > > [client-handshake.c:202:client_set_lk_version_cbk] 0-myvolume-client-2:
> > > > > > > > > > > > Server lk version = 1
> > > > > > > > > > > > [2018-02-13 08:16:13.952704] I [fuse-bridge.c:4138:fuse_init]
> > > > > > > > > > > > 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.23
> > > > > > > > > > > > [2018-02-13 08:16:13.952742] I [fuse-bridge.c:4823:fuse_graph_sync] 0-fuse:
> > > > > > > > > > > > switched to graph 0
> > > > > > > > > > > > [2018-02-13 08:16:13.954337] I [MSGID: 108031]
> > > > > > > > > > > > [afr-common.c:2357:afr_local_discovery_cbk] 0-myvolume-replicate-0:
> > > > > > > > > > > > selecting local read_child myvolume-client-0
> > > > > > > > > > > > [2018-02-13 08:16:14.020681] I [fuse-bridge.c:5081:fuse_thread_proc] 0-fuse:
> > > > > > > > > > > > unmounting /var/run/gluster/myvolume_quota_list/
> > > > > > > > > > > > [2018-02-13 08:16:14.020981] W [glusterfsd.c:1360:cleanup_and_exit]
> > > > > > > > > > > > (-->/lib/x86_64-linux-gnu/libpthread.so.0(+0x8064) [0x7f92aaeef064]
> > > > > > > > > > > > -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x55d8ddb85f75]
> > > > > > > > > > > > -->/usr/sbin/glusterfs(cleanup_and_exit+0x57) [0x55d8ddb85d97] ) 0-:
> > > > > > > > > > > > received signum (15), shutting down
> > > > > > > > > > > > [2018-02-13 08:16:14.021017] I [fuse-bridge.c:5804:fini] 0-fuse: Unmounting
> > > > > > > > > > > > '/var/run/gluster/myvolume_quota_list/'.
> > > > > > > > > > > > 
> > > > > > > > > > > > *** /var/log/glusterfs/quotad.log ***
> > > > > > > > > > > > 
> > > > > > > > > > > > [2018-02-13 08:16:14.023519] W [MSGID: 108027]
> > > > > > > > > > > > [afr-common.c:2718:afr_discover_done] 0-myvolume-replicate-0: no read
> > > > > > > > > > > > subvols for (null)
> > > > > > > > > > > > [2018-02-13 08:16:14.064410] W [dict.c:599:dict_unref]
> > > > > > > > > > > > (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x1f3d)
> > > > > > > > > > > > [0x7f63153e7f3d]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x2d72)
> > > > > > > > > > > > [0x7f63153e8d72]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_unref+0xc0)
> > > > > > > > > > > > [0x7f631b9bd870] ) 0-dict: dict is NULL [Invalid argument]
> > > > > > > > > > > > [2018-02-13 08:16:14.074683] W [dict.c:599:dict_unref]
> > > > > > > > > > > > (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x1f3d)
> > > > > > > > > > > > [0x7f63153e7f3d]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x2d72)
> > > > > > > > > > > > [0x7f63153e8d72]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_unref+0xc0)
> > > > > > > > > > > > [0x7f631b9bd870] ) 0-dict: dict is NULL [Invalid argument]
> > > > > > > > > > > > [2018-02-13 08:16:14.082040] W [dict.c:599:dict_unref]
> > > > > > > > > > > > (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x1f3d)
> > > > > > > > > > > > [0x7f63153e7f3d]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x2d72)
> > > > > > > > > > > > [0x7f63153e8d72]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_unref+0xc0)
> > > > > > > > > > > > [0x7f631b9bd870] ) 0-dict: dict is NULL [Invalid argument]
> > > > > > > > > > > > [2018-02-13 08:16:14.086950] W [dict.c:599:dict_unref]
> > > > > > > > > > > > (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x1f3d)
> > > > > > > > > > > > [0x7f63153e7f3d]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x2d72)
> > > > > > > > > > > > [0x7f63153e8d72]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_unref+0xc0)
> > > > > > > > > > > > [0x7f631b9bd870] ) 0-dict: dict is NULL [Invalid argument]
> > > > > > > > > > > > [2018-02-13 08:16:14.087939] W [dict.c:599:dict_unref]
> > > > > > > > > > > > (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x1f3d)
> > > > > > > > > > > > [0x7f63153e7f3d]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x2d72)
> > > > > > > > > > > > [0x7f63153e8d72]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_unref+0xc0)
> > > > > > > > > > > > [0x7f631b9bd870] ) 0-dict: dict is NULL [Invalid argument]
> > > > > > > > > > > > [2018-02-13 08:16:14.089775] W [dict.c:599:dict_unref]
> > > > > > > > > > > > (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x1f3d)
> > > > > > > > > > > > [0x7f63153e7f3d]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x2d72)
> > > > > > > > > > > > [0x7f63153e8d72]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_unref+0xc0)
> > > > > > > > > > > > [0x7f631b9bd870] ) 0-dict: dict is NULL [Invalid argument]
> > > > > > > > > > > > [2018-02-13 08:16:14.092937] W [dict.c:599:dict_unref]
> > > > > > > > > > > > (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x1f3d)
> > > > > > > > > > > > [0x7f63153e7f3d]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/glusterfs/3.10.7/xlator/features/quotad.so(+0x2d72)
> > > > > > > > > > > > [0x7f63153e8d72]
> > > > > > > > > > > > -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(dict_unref+0xc0)
> > > > > > > > > > > > [0x7f631b9bd870] ) 0-dict: dict is NULL [Invalid argument]
> > > > > > > > > > > > 
> > > > > > > > > > > > The message "W [MSGID: 108027] [afr-common.c:2718:afr_discover_done]
> > > > > > > > > > > > 0-myvolume-replicate-0: no read subvols for (null)" repeated 34 times
> > > > > > > > > > > > between [2018-02-13 08:16:14.023519] and [2018-02-13 08:17:26.560943]
> > > > > > > > > > > > 
> > > > > > > > > > > > *** /var/log/glusterfs/cli.log ***
> > > > > > > > > > > > 
> > > > > > > > > > > > [2018-02-13 08:16:14.064404] E
> > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > quota limits for 3df709ee-641d-46a2-bd61-889583e3033c
> > > > > > > > > > > > [2018-02-13 08:16:14.074693] E
> > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > quota limits for a27818fe-0248-40fe-bb23-d43d61010478
> > > > > > > > > > > > [2018-02-13 08:16:14.082067] E
> > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > quota limits for daf97388-bcec-4cc0-a8ef-5b93f05b30f6
> > > > > > > > > > > > [2018-02-13 08:16:14.086929] E
> > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > quota limits for 3c768b36-2625-4509-87ef-fe5214cb9b01
> > > > > > > > > > > > [2018-02-13 08:16:14.087905] E
> > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > quota limits for f8cf47d4-4f54-43c5-ab0d-75b45b4677a3
> > > > > > > > > > > > [2018-02-13 08:16:14.089788] E
> > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > quota limits for b4c81a39-2152-45c5-95d3-b796d88226fe
> > > > > > > > > > > > [2018-02-13 08:16:14.092919] E
> > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > quota limits for 16ac4cde-a5d4-451f-adcc-422a542fea24
> > > > > > > > > > > > [2018-02-13 08:16:14.092980] I [input.c:31:cli_batch] 0-: Exiting with: 0
> > > > > > > > > > > > 
> > > > > > > > > > > > *** /var/log/glusterfs/bricks/data-myvolume-brick.log ***
> > > > > > > > > > > > 
> > > > > > > > > > > > [2018-02-13 08:16:13.948065] I [addr.c:182:gf_auth] 0-/data/myvolume/brick:
> > > > > > > > > > > > allowed = "*", received addr = "127.0.0.1"
> > > > > > > > > > > > [2018-02-13 08:16:13.948105] I [login.c:76:gf_auth] 0-auth/login: allowed
> > > > > > > > > > > > user names: bea3e634-e174-4bb3-a1d6-25b09d03b536
> > > > > > > > > > > > [2018-02-13 08:16:13.948125] I [MSGID: 115029]
> > > > > > > > > > > > [server-handshake.c:695:server_setvolume] 0-myvolume-server: accepted client
> > > > > > > > > > > > from gfs1a-14348-2018/02/13-08:16:09:933625-myvolume-client-0-0-0 (version: 3.10.7)
> > > > > > > > > > > > [2018-02-13 08:16:14.022257] I [MSGID: 115036]
> > > > > > > > > > > > [server.c:559:server_rpc_notify] 0-myvolume-server: disconnecting connection
> > > > > > > > > > > > from gfs1a-14348-2018/02/13-08:16:09:933625-myvolume-client-0-0-0
> > > > > > > > > > > > [2018-02-13 08:16:14.022465] I [MSGID: 101055]
> > > > > > > > > > > > [client_t.c:436:gf_client_unref] 0-myvolume-server: Shutting down connection
> > > > > > > > > > > > gfs1a-14348-2018/02/13-08:16:09:933625-myvolume-client-0-0-0
> > > > > > > > > > > > 
> > > > > > > > > > > > -------- Original Message --------
> > > > > > > > > > > > 
> > > > > > > > > > > > On February 13, 2018 12:47 AM, Hari Gowtham 
> > > > > > > > > > > > hgowt...@redhat.com wrote:
> > > > > > > > > > > > 
> > > > > > > > > > > > Hi,
> > > > > > > > > > > > 
> > > > > > > > > > > > Can you provide more information, like the volume configuration, the
> > > > > > > > > > > > quota.conf file, and the log files?
> > > > > > > > > > > > 
> > > > > > > > > > > > On Sat, Feb 10, 2018 at 1:05 AM, mabi 
> > > > > > > > > > > > m...@protonmail.ch wrote:
> > > > > > > > > > > > 
> > > > > > > > > > > > > Hello,
> > > > > > > > > > > > > 
> > > > > > > > > > > > > I am running GlusterFS 3.10.7 and just noticed, by doing a "gluster 
> > > > > > > > > > > > > volume quota <volname> list", that my quotas on that volume are 
> > > > > > > > > > > > > broken. The command returns no output and no errors, but looking in 
> > > > > > > > > > > > > /var/log/glusterfs/cli.log I found the following errors:
> > > > > > > > > > > > > 
> > > > > > > > > > > > > [2018-02-09 19:31:24.242324] E
> > > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > > quota limits for 3df709ee-641d-46a2-bd61-889583e3033c
> > > > > > > > > > > > > [2018-02-09 19:31:24.249790] E
> > > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > > quota limits for a27818fe-0248-40fe-bb23-d43d61010478
> > > > > > > > > > > > > [2018-02-09 19:31:24.252378] E
> > > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > > quota limits for daf97388-bcec-4cc0-a8ef-5b93f05b30f6
> > > > > > > > > > > > > [2018-02-09 19:31:24.256775] E
> > > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > > quota limits for 3c768b36-2625-4509-87ef-fe5214cb9b01
> > > > > > > > > > > > > [2018-02-09 19:31:24.257434] E
> > > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > > quota limits for f8cf47d4-4f54-43c5-ab0d-75b45b4677a3
> > > > > > > > > > > > > [2018-02-09 19:31:24.259126] E
> > > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > > quota limits for b4c81a39-2152-45c5-95d3-b796d88226fe
> > > > > > > > > > > > > [2018-02-09 19:31:24.261664] E
> > > > > > > > > > > > > [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get
> > > > > > > > > > > > > quota limits for 16ac4cde-a5d4-451f-adcc-422a542fea24
> > > > > > > > > > > > > [2018-02-09 19:31:24.261719] I [input.c:31:cli_batch] 0-: Exiting with: 0
> > > > > > > > > > > > > 
> > > > > > > > > > > > > How can I fix my quota on that volume again? I had around 30 quotas 
> > > > > > > > > > > > > set on different directories of that volume.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Thanks in advance.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Regards,
> > > > > > > > > > > > > 
> > > > > > > > > > > > > M.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Gluster-users mailing list
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Gluster-users@gluster.org
> > > > > > > > > > > > > 
> > > > > > > > > > > > > http://lists.gluster.org/mailman/listinfo/gluster-users
> > > > > > > > > > > > > 
> > > > > > > > > > > > > Regards,
> > > > > > > > > > > > > Hari Gowtham.
> > > > > > > > > 
> > > > > > > > > Regards,
> > > > > > > > > 
> > > > > > > > > Hari Gowtham.
> > > > > > > 
> > > > > > > --
> > > > > > > 
> > > > > > > Regards,
> > > > > > > 
> > > > > > > Hari Gowtham.
> > > > > 
> > > > > Regards,
> > > > > 
> > > > > Hari Gowtham.
> > > 
> > > --
> > > 
> > > Regards,
> > > 
> > > Hari Gowtham.
> 
> --
> 
> Regards,
> 
> Hari Gowtham.


_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
