Re: [Gluster-users] gluster volume permissions denied

2017-01-02 Thread vincent gromakowski
Has anyone else run into this issue? Could it be kernel related? I am using
a CentOS 7.2 image from my cloud provider, so could some low-level
configuration be missing?
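
Regarding low-level configuration on a stock CentOS 7 image: one common culprit for root-only access on FUSE mounts is SELinux. A minimal check, assuming standard CentOS tooling (the `ausearch`/`setenforce` lines are left commented because they need root):

```shell
# Report the current SELinux mode; "Enforcing" means it can veto access
# even when the file mode is 777.
if command -v getenforce >/dev/null 2>&1; then
    getenforce          # prints Enforcing, Permissive, or Disabled
else
    echo "SELinux tools not installed"
fi

# If Enforcing, look for recent denials against glusterfs/fuse:
#   sudo ausearch -m avc -ts recent | grep -i fuse
# and test whether permissive mode makes the problem disappear:
#   sudo setenforce 0
```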

2016-12-29 18:19 GMT+01:00 vincent gromakowski <
vincent.gromakow...@gmail.com>:

> Hi all,
> Any ideas regarding the log outputs?
> What ACLs should be set on the brick directories or the gluster brick root?
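
On the ACL question: before touching the bricks, it is worth confirming what mode bits and ACL entries actually stick. A sketch against a scratch directory (not the real brick or mountpoint; `appuser` is a made-up name):

```shell
# Scratch directory standing in for the FUSE mountpoint.
d=$(mktemp -d)
chmod 0777 "$d"

# What the kernel actually reports for mode and ownership.
stat -c '%a %U:%G' "$d"    # mode field should read 777

# A named-user ACL entry, if wanted, would be added like:
#   setfacl -m u:appuser:rwx "$d"
#   getfacl "$d"

rmdir "$d"
```

Running the same `stat`/`getfacl` pair on the real FUSE mountpoint and on each brick directory would show whether the 777 actually propagated.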
>
> 2016-12-28 11:25 GMT+01:00 vincent gromakowski <
> vincent.gromakow...@gmail.com>:
>
>> Hi,
>> Please find the outputs below. Note that I can read and write to the
>> volume, but only with "sudo" or the root account, whatever ACLs or
>> ownership I set on the fuse directories (even 777).
>>
>> *>sudo gluster peer status*
>> *Number of Peers: 3*
>>
>> *Hostname: bd-reactive-worker-4*
>> *Uuid: 434a7ee0-9c83-47ce-9a02-7c89e2e94ce0*
>> *State: Peer in Cluster (Connected)*
>>
>> *Hostname: bd-reactive-worker-2*
>> *Uuid: 7f76389c-3f78-4cac-8fd8-56f0a9bff47a*
>> *State: Peer in Cluster (Connected)*
>>
>> *Hostname: bd-reactive-worker-3*
>> *Uuid: e412cae9-6ecd-49cf-be63-c46d3e537c83*
>> *State: Peer in Cluster (Connected)*
>>
>>
>> *>sudo gluster volume status*
>>
>>
>> Status of volume: reactive_small
>> Gluster process                                            TCP Port  RDMA Port  Online  Pid
>> ------------------------------------------------------------------------------
>> Brick bd-reactive-worker-1:/srv/gluster/data/small/brick1  49155     0          Y       31517
>> Brick bd-reactive-worker-2:/srv/gluster/data/small/brick1  49155     0          Y       1147
>> Brick bd-reactive-worker-3:/srv/gluster/data/small/brick1  49155     0          Y       32455
>> Brick bd-reactive-worker-4:/srv/gluster/data/small/brick1  49155     0          Y       675
>> Brick bd-reactive-worker-1:/srv/gluster/data/small/brick2  49156     0          Y       31536
>> Brick bd-reactive-worker-2:/srv/gluster/data/small/brick2  49156     0          Y       1167
>> Brick bd-reactive-worker-3:/srv/gluster/data/small/brick2  49156     0          Y       32474
>> Brick bd-reactive-worker-4:/srv/gluster/data/small/brick2  49156     0          Y       696
>> Brick bd-reactive-worker-1:/srv/gluster/data/small/brick3  49157     0          Y       31555
>> Brick bd-reactive-worker-2:/srv/gluster/data/small/brick3  49157     0          Y       1190
>> Brick bd-reactive-worker-3:/srv/gluster/data/small/brick3  49157     0          Y       32493
>> Brick bd-reactive-worker-4:/srv/gluster/data/small/brick3  49157     0          Y       715
>> Self-heal Daemon on localhost                              N/A       N/A        Y       31575
>> Self-heal Daemon on bd-reactive-worker-4                   N/A       N/A        Y       736
>> Self-heal Daemon on bd-reactive-worker-3                   N/A       N/A        Y       32518
>> Self-heal Daemon on bd-reactive-worker-2                   N/A       N/A        Y       1227
>>
>> Task Status of Volume reactive_small
>> ------------------------------------------------------------------------------
>> There are no active volume tasks
>>
>>
>> 2016-12-28 11:11 GMT+01:00 knarra :
>>
>>> On 12/28/2016 02:42 PM, vincent gromakowski wrote:
>>>
>>> Hi,
>>> Can someone help me solve this issue? I am really stuck on it and I
>>> can't find any workaround...
>>> Thanks a lot.
>>>
>>> V
>>>
>>> Hi,
>>>
>>> What does gluster volume status show? I think it is because of
>>> quorum that you are not able to read from or write to the volume. Can you
>>> check that all your bricks are online, and can you paste the output of
>>> your gluster peer status? In glusterd.log I see "Peer
>>>  (<59500674-750f-4e16-aeea-4a99fd67218a>), in state
>>> , has disconnected from glusterd."
>>>
>>> Thanks
>>> kasturi.
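
To follow up on the quorum theory: the quorum-related options and per-brick health can be read straight off the cluster. A sketch of the checks (commands shown for reference; they must run on one of the peers):

```
# Quorum settings currently in effect for this volume
gluster volume get reactive_small cluster.server-quorum-type
gluster volume get reactive_small cluster.quorum-type

# Per-brick detail, including online state and free space
gluster volume status reactive_small detail
```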
>>>
>>>
>>> 2016-12-26 15:02 GMT+01:00 vincent gromakowski <
>>> vincent.gromakow...@gmail.com>:
>>>
 Hi all,
 I am currently setting up a gluster volume on 4 CentOS 7.2 nodes.
 Everything seems to be OK from volume creation through the fuse mounting,
 but after that I can't access data (read or write) without sudo, even if I
 set 777 permissions.
 I have checked that permissions on the underlying FS (an XFS volume) are
 OK, so I assume the problem is in Gluster, but I can't find where.
 I am using Ansible to deploy gluster, create volumes and mount the fuse
 endpoint.
 Please find below some information:

 The line in /etc/fstab for mounting the raw device

 *LABEL=/gluster /srv/gluster/data xfs defaults 0 0 *

 The line in /etc/fstab for mounting the fuse endpoint
 *bd-reactive-worker-2:/reactive_small /srv/data/small glusterfs
 defaults,_netdev 0 0*
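
If POSIX ACLs are meant to be honoured through the mount, note that the FUSE client only enforces them when mounted with the `acl` option; a sketch of the fuse fstab line with it added (same paths as above; this is an assumption to verify, since XFS on CentOS 7 already enables ACLs by default on the brick side):

```
bd-reactive-worker-2:/reactive_small /srv/data/small glusterfs defaults,acl,_netdev 0 0
```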

 *>sudo gluster volume info*






 Volume Name: reactive_small
 Type: Distributed-Replicate
 Volume ID: f0abede2-eab3-4a0b-8271-ffd6f3c83eb6
 Status: Started
 Snapshot Count: 0
 Number of Bricks: 4 x 3 = 12
 Transport-type: tcp
 Bricks:
 Brick1: bd-reactive-worker-1:/srv/gluster/data/small/brick1
 Brick2: bd-reactive-wo

Re: [Gluster-users] gluster volume permissions denied

2016-12-29 Thread vincent gromakowski
Hi all,
Any ideas regarding the log outputs?
What ACLs should be set on the brick directories or the gluster brick root?

2016-12-28 11:25 GMT+01:00 vincent gromakowski <
vincent.gromakow...@gmail.com>:

> Hi,
> Please find the outputs below. Note that I can read and write to the
> volume, but only with "sudo" or the root account, whatever ACLs or
> ownership I set on the fuse directories (even 777).
>
> *>sudo gluster peer status*
> *Number of Peers: 3*
>
> *Hostname: bd-reactive-worker-4*
> *Uuid: 434a7ee0-9c83-47ce-9a02-7c89e2e94ce0*
> *State: Peer in Cluster (Connected)*
>
> *Hostname: bd-reactive-worker-2*
> *Uuid: 7f76389c-3f78-4cac-8fd8-56f0a9bff47a*
> *State: Peer in Cluster (Connected)*
>
> *Hostname: bd-reactive-worker-3*
> *Uuid: e412cae9-6ecd-49cf-be63-c46d3e537c83*
> *State: Peer in Cluster (Connected)*
>
>
> *>sudo gluster volume status*
>
>
> Status of volume: reactive_small
> Gluster process                                            TCP Port  RDMA Port  Online  Pid
> ------------------------------------------------------------------------------
> Brick bd-reactive-worker-1:/srv/gluster/data/small/brick1  49155     0          Y       31517
> Brick bd-reactive-worker-2:/srv/gluster/data/small/brick1  49155     0          Y       1147
> Brick bd-reactive-worker-3:/srv/gluster/data/small/brick1  49155     0          Y       32455
> Brick bd-reactive-worker-4:/srv/gluster/data/small/brick1  49155     0          Y       675
> Brick bd-reactive-worker-1:/srv/gluster/data/small/brick2  49156     0          Y       31536
> Brick bd-reactive-worker-2:/srv/gluster/data/small/brick2  49156     0          Y       1167
> Brick bd-reactive-worker-3:/srv/gluster/data/small/brick2  49156     0          Y       32474
> Brick bd-reactive-worker-4:/srv/gluster/data/small/brick2  49156     0          Y       696
> Brick bd-reactive-worker-1:/srv/gluster/data/small/brick3  49157     0          Y       31555
> Brick bd-reactive-worker-2:/srv/gluster/data/small/brick3  49157     0          Y       1190
> Brick bd-reactive-worker-3:/srv/gluster/data/small/brick3  49157     0          Y       32493
> Brick bd-reactive-worker-4:/srv/gluster/data/small/brick3  49157     0          Y       715
> Self-heal Daemon on localhost                              N/A       N/A        Y       31575
> Self-heal Daemon on bd-reactive-worker-4                   N/A       N/A        Y       736
> Self-heal Daemon on bd-reactive-worker-3                   N/A       N/A        Y       32518
> Self-heal Daemon on bd-reactive-worker-2                   N/A       N/A        Y       1227
>
> Task Status of Volume reactive_small
> ------------------------------------------------------------------------------
> There are no active volume tasks
>
>
> 2016-12-28 11:11 GMT+01:00 knarra :
>
>> On 12/28/2016 02:42 PM, vincent gromakowski wrote:
>>
>> Hi,
>> Can someone help me solve this issue? I am really stuck on it and I
>> can't find any workaround...
>> Thanks a lot.
>>
>> V
>>
>> Hi,
>>
>> What does gluster volume status show? I think it is because of quorum
>> that you are not able to read from or write to the volume. Can you check
>> that all your bricks are online, and can you paste the output of your
>> gluster peer status? In glusterd.log I see "Peer
>>  (<59500674-750f-4e16-aeea-4a99fd67218a>), in state
>> , has disconnected from glusterd."
>>
>> Thanks
>> kasturi.
>>
>>
>> 2016-12-26 15:02 GMT+01:00 vincent gromakowski <
>> vincent.gromakow...@gmail.com>:
>>
>>> Hi all,
>>> I am currently setting up a gluster volume on 4 CentOS 7.2 nodes.
>>> Everything seems to be OK from volume creation through the fuse mounting,
>>> but after that I can't access data (read or write) without sudo, even if
>>> I set 777 permissions.
>>> I have checked that permissions on the underlying FS (an XFS volume) are
>>> OK, so I assume the problem is in Gluster, but I can't find where.
>>> I am using Ansible to deploy gluster, create volumes and mount the fuse
>>> endpoint.
>>> Please find below some information:
>>>
>>> The line in /etc/fstab for mounting the raw device
>>>
>>> *LABEL=/gluster /srv/gluster/data xfs defaults 0 0 *
>>>
>>> The line in /etc/fstab for mounting the fuse endpoint
>>> *bd-reactive-worker-2:/reactive_small /srv/data/small glusterfs
>>> defaults,_netdev 0 0*
>>>
>>> *>sudo gluster volume info*
>>>
>>>
>>>
>>>
>>>
>>>
>>> Volume Name: reactive_small
>>> Type: Distributed-Replicate
>>> Volume ID: f0abede2-eab3-4a0b-8271-ffd6f3c83eb6
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 4 x 3 = 12
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: bd-reactive-worker-1:/srv/gluster/data/small/brick1
>>> Brick2: bd-reactive-worker-2:/srv/gluster/data/small/brick1
>>> Brick3: bd-reactive-worker-3:/srv/gluster/data/small/brick1
>>> Brick4: bd-reactive-worker-4:/srv/gluster/data/small/brick1
>>> Brick5: bd-reactive-worker-1:/srv/gluster/data/small/brick2
>>> Brick6: bd-reactive-worker-2:/srv/gluster/data/small/brick2
>>> Brick7: bd-reactive-worker-3:/srv/gluster/data/small/brick2
>>> Brick8: bd-reactive-worker-4

Re: [Gluster-users] gluster volume permissions denied

2016-12-28 Thread vincent gromakowski
Hi,
Please find the outputs below. Note that I can read and write to the
volume, but only with "sudo" or the root account, whatever ACLs or
ownership I set on the fuse directories (even 777).

*>sudo gluster peer status*
*Number of Peers: 3*

*Hostname: bd-reactive-worker-4*
*Uuid: 434a7ee0-9c83-47ce-9a02-7c89e2e94ce0*
*State: Peer in Cluster (Connected)*

*Hostname: bd-reactive-worker-2*
*Uuid: 7f76389c-3f78-4cac-8fd8-56f0a9bff47a*
*State: Peer in Cluster (Connected)*

*Hostname: bd-reactive-worker-3*
*Uuid: e412cae9-6ecd-49cf-be63-c46d3e537c83*
*State: Peer in Cluster (Connected)*


*>sudo gluster volume status*


Status of volume: reactive_small
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick bd-reactive-worker-1:/srv/gluster/data/small/brick1  49155     0          Y       31517
Brick bd-reactive-worker-2:/srv/gluster/data/small/brick1  49155     0          Y       1147
Brick bd-reactive-worker-3:/srv/gluster/data/small/brick1  49155     0          Y       32455
Brick bd-reactive-worker-4:/srv/gluster/data/small/brick1  49155     0          Y       675
Brick bd-reactive-worker-1:/srv/gluster/data/small/brick2  49156     0          Y       31536
Brick bd-reactive-worker-2:/srv/gluster/data/small/brick2  49156     0          Y       1167
Brick bd-reactive-worker-3:/srv/gluster/data/small/brick2  49156     0          Y       32474
Brick bd-reactive-worker-4:/srv/gluster/data/small/brick2  49156     0          Y       696
Brick bd-reactive-worker-1:/srv/gluster/data/small/brick3  49157     0          Y       31555
Brick bd-reactive-worker-2:/srv/gluster/data/small/brick3  49157     0          Y       1190
Brick bd-reactive-worker-3:/srv/gluster/data/small/brick3  49157     0          Y       32493
Brick bd-reactive-worker-4:/srv/gluster/data/small/brick3  49157     0          Y       715
Self-heal Daemon on localhost                              N/A       N/A        Y       31575
Self-heal Daemon on bd-reactive-worker-4                   N/A       N/A        Y       736
Self-heal Daemon on bd-reactive-worker-3                   N/A       N/A        Y       32518
Self-heal Daemon on bd-reactive-worker-2                   N/A       N/A        Y       1227

Task Status of Volume reactive_small
------------------------------------------------------------------------------
There are no active volume tasks


2016-12-28 11:11 GMT+01:00 knarra :

> On 12/28/2016 02:42 PM, vincent gromakowski wrote:
>
> Hi,
> Can someone help me solve this issue? I am really stuck on it and I can't
> find any workaround...
> Thanks a lot.
>
> V
>
> Hi,
>
> What does gluster volume status show? I think it is because of quorum
> that you are not able to read from or write to the volume. Can you check
> that all your bricks are online, and can you paste the output of your
> gluster peer status? In glusterd.log I see "Peer
> (<59500674-750f-4e16-aeea-4a99fd67218a>), in state , has
> disconnected from glusterd."
>
> Thanks
> kasturi.
>
>
> 2016-12-26 15:02 GMT+01:00 vincent gromakowski <
> vincent.gromakow...@gmail.com>:
>
>> Hi all,
>> I am currently setting up a gluster volume on 4 CentOS 7.2 nodes. Everything
>> seems to be OK from volume creation through the fuse mounting, but after
>> that I can't access data (read or write) without sudo, even if I set 777
>> permissions.
>> I have checked that permissions on the underlying FS (an XFS volume) are OK,
>> so I assume the problem is in Gluster, but I can't find where.
>> I am using Ansible to deploy gluster, create volumes and mount the fuse
>> endpoint.
>> Please find below some information:
>>
>> The line in /etc/fstab for mounting the raw device
>>
>> *LABEL=/gluster /srv/gluster/data xfs defaults 0 0 *
>>
>> The line in /etc/fstab for mounting the fuse endpoint
>> *bd-reactive-worker-2:/reactive_small /srv/data/small glusterfs
>> defaults,_netdev 0 0*
>>
>> *>sudo gluster volume info*
>>
>>
>>
>>
>>
>>
>> Volume Name: reactive_small
>> Type: Distributed-Replicate
>> Volume ID: f0abede2-eab3-4a0b-8271-ffd6f3c83eb6
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 4 x 3 = 12
>> Transport-type: tcp
>> Bricks:
>> Brick1: bd-reactive-worker-1:/srv/gluster/data/small/brick1
>> Brick2: bd-reactive-worker-2:/srv/gluster/data/small/brick1
>> Brick3: bd-reactive-worker-3:/srv/gluster/data/small/brick1
>> Brick4: bd-reactive-worker-4:/srv/gluster/data/small/brick1
>> Brick5: bd-reactive-worker-1:/srv/gluster/data/small/brick2
>> Brick6: bd-reactive-worker-2:/srv/gluster/data/small/brick2
>> Brick7: bd-reactive-worker-3:/srv/gluster/data/small/brick2
>> Brick8: bd-reactive-worker-4:/srv/gluster/data/small/brick2
>> Brick9: bd-reactive-worker-1:/srv/gluster/data/small/brick3
>> Brick10: bd-reactive-worker-2:/srv/gluster/data/small/brick3
>> Brick11: bd-reactive-worker-3:/srv/gluster/data/small/brick3
>> Brick12: bd-reactive-worker-4:/srv/gluster/data/small/brick3
>> Options Reconfigured:
>> nfs.disable: on
>> performance.readdir-ahead: on
>> transport.addre

Re: [Gluster-users] gluster volume permissions denied

2016-12-28 Thread knarra

On 12/28/2016 02:42 PM, vincent gromakowski wrote:

Hi,
Can someone help me solve this issue? I am really stuck on it and I
can't find any workaround...

Thanks a lot.

V

Hi,

What does gluster volume status show? I think it is because of
quorum that you are not able to read from or write to the volume. Can you
check that all your bricks are online, and can you paste the output of your
gluster peer status? In glusterd.log I see "Peer
 (<59500674-750f-4e16-aeea-4a99fd67218a>), in
state , has disconnected from glusterd."


Thanks
kasturi.


2016-12-26 15:02 GMT+01:00 vincent gromakowski <
vincent.gromakow...@gmail.com>:


Hi all,
I am currently setting up a gluster volume on 4 CentOS 7.2 nodes.
Everything seems to be OK from volume creation through the fuse
mounting, but after that I can't access data (read or write)
without sudo, even if I set 777 permissions.
I have checked that permissions on the underlying FS (an XFS volume)
are OK, so I assume the problem is in Gluster, but I can't find where.
I am using Ansible to deploy gluster, create volumes and mount the
fuse endpoint.
Please find below some information:

The line in /etc/fstab for mounting the raw device
LABEL=/gluster /srv/gluster/data xfs defaults 0 0

The line in /etc/fstab for mounting the fuse endpoint
bd-reactive-worker-2:/reactive_small /srv/data/small glusterfs
defaults,_netdev 0 0

>sudo gluster volume info

Volume Name: reactive_small
Type: Distributed-Replicate
Volume ID: f0abede2-eab3-4a0b-8271-ffd6f3c83eb6
Status: Started
Snapshot Count: 0
Number of Bricks: 4 x 3 = 12
Transport-type: tcp
Bricks:
Brick1: bd-reactive-worker-1:/srv/gluster/data/small/brick1
Brick2: bd-reactive-worker-2:/srv/gluster/data/small/brick1
Brick3: bd-reactive-worker-3:/srv/gluster/data/small/brick1
Brick4: bd-reactive-worker-4:/srv/gluster/data/small/brick1
Brick5: bd-reactive-worker-1:/srv/gluster/data/small/brick2
Brick6: bd-reactive-worker-2:/srv/gluster/data/small/brick2
Brick7: bd-reactive-worker-3:/srv/gluster/data/small/brick2
Brick8: bd-reactive-worker-4:/srv/gluster/data/small/brick2
Brick9: bd-reactive-worker-1:/srv/gluster/data/small/brick3
Brick10: bd-reactive-worker-2:/srv/gluster/data/small/brick3
Brick11: bd-reactive-worker-3:/srv/gluster/data/small/brick3
Brick12: bd-reactive-worker-4:/srv/gluster/data/small/brick3
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
cluster.data-self-heal: off
cluster.entry-self-heal: off
cluster.metadata-self-heal: off
cluster.self-heal-daemon: off


>sudo cat /var/log/glusterfs/cli.log
[2016-12-26 13:41:11.422850] I [cli.c:730:main] 0-cli: Started
running gluster with version 3.8.5
[2016-12-26 13:41:11.428970] I
[cli-cmd-volume.c:1828:cli_check_gsync_present] 0-:
geo-replication not installed
[2016-12-26 13:41:11.429308] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1
[2016-12-26 13:41:11.429360] I
[socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2016-12-26 13:41:11.430285] I
[socket.c:3391:socket_submit_request] 0-glusterfs: not connected
(priv->connected = 0)
[2016-12-26 13:41:11.430320] W [rpc-clnt.c:1640:rpc_clnt_submit]
0-glusterfs: failed to submit rpc-request (XID: 0x1 Program:
Gluster CLI, ProgVers: 2, Proc: 5) to rpc-transport (glusterfs)
[2016-12-26 13:41:24.967491] I [cli.c:730:main] 0-cli: Started
running gluster with version 3.8.5
[2016-12-26 13:41:24.972755] I
[cli-cmd-volume.c:1828:cli_check_gsync_present] 0-:
geo-replication not installed
[2016-12-26 13:41:24.973014] I [MSGID: 101190]
[event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started
thread with index 1
[2016-12-26 13:41:24.973080] I
[socket.c:2403:socket_event_handler] 0-transport: disconnecting now
[2016-12-26 13:41:24.973552] I
[cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: Received resp to
get vol: 0
[2016-12-26 13:41:24.976419] I
[cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: Received resp to
get vol: 0
[2016-12-26 13:41:24.976957] I
[cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: Received resp to
get vol: 0
[2016-12-26 13:41:24.976985] I [input.c:31:cli_batch] 0-: Exiting
with: 0

>sudo cat /var/log/glusterfs/srv-data-small.log
[2016-12-26 13:46:53.407541] W [socket.c:590:__socket_rwv]
0-glusterfs: readv on 172.52.0.4:24007 failed (No data available)
[2016-12-26 13:46:53.407997] E
[glusterfsd-mgmt.c:1902:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed
to connect with remote-host: 172.52.0.4 (No data available)
[2016-12-26 13:46:53.408079] I
[glusterfsd-mgmt.c:1919:mgmt_rpc_notify] 0

Re: [Gluster-users] gluster volume permissions denied

2016-12-28 Thread vincent gromakowski
Hi,
Can someone help me solve this issue? I am really stuck on it and I can't
find any workaround...
Thanks a lot.

V

2016-12-26 15:02 GMT+01:00 vincent gromakowski <
vincent.gromakow...@gmail.com>:

> Hi all,
> I am currently setting up a gluster volume on 4 CentOS 7.2 nodes. Everything
> seems to be OK from volume creation through the fuse mounting, but after
> that I can't access data (read or write) without sudo, even if I set 777
> permissions.
> I have checked that permissions on the underlying FS (an XFS volume) are OK,
> so I assume the problem is in Gluster, but I can't find where.
> I am using Ansible to deploy gluster, create volumes and mount the fuse
> endpoint.
> Please find below some information:
>
> The line in /etc/fstab for mounting the raw device
>
> *LABEL=/gluster /srv/gluster/data xfs defaults 0 0*
>
> The line in /etc/fstab for mounting the fuse endpoint
> *bd-reactive-worker-2:/reactive_small /srv/data/small glusterfs
> defaults,_netdev 0 0*
>
> *>sudo gluster volume info*
>
>
>
>
>
>
> Volume Name: reactive_small
> Type: Distributed-Replicate
> Volume ID: f0abede2-eab3-4a0b-8271-ffd6f3c83eb6
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 4 x 3 = 12
> Transport-type: tcp
> Bricks:
> Brick1: bd-reactive-worker-1:/srv/gluster/data/small/brick1
> Brick2: bd-reactive-worker-2:/srv/gluster/data/small/brick1
> Brick3: bd-reactive-worker-3:/srv/gluster/data/small/brick1
> Brick4: bd-reactive-worker-4:/srv/gluster/data/small/brick1
> Brick5: bd-reactive-worker-1:/srv/gluster/data/small/brick2
> Brick6: bd-reactive-worker-2:/srv/gluster/data/small/brick2
> Brick7: bd-reactive-worker-3:/srv/gluster/data/small/brick2
> Brick8: bd-reactive-worker-4:/srv/gluster/data/small/brick2
> Brick9: bd-reactive-worker-1:/srv/gluster/data/small/brick3
> Brick10: bd-reactive-worker-2:/srv/gluster/data/small/brick3
> Brick11: bd-reactive-worker-3:/srv/gluster/data/small/brick3
> Brick12: bd-reactive-worker-4:/srv/gluster/data/small/brick3
> Options Reconfigured:
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> cluster.data-self-heal: off
> cluster.entry-self-heal: off
> cluster.metadata-self-heal: off
> cluster.self-heal-daemon: off
>
> >sudo cat /var/log/glusterfs/cli.log
> [2016-12-26 13:41:11.422850] I [cli.c:730:main] 0-cli: Started running gluster with version 3.8.5
> [2016-12-26 13:41:11.428970] I [cli-cmd-volume.c:1828:cli_check_gsync_present] 0-: geo-replication not installed
> [2016-12-26 13:41:11.429308] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
> [2016-12-26 13:41:11.429360] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
> [2016-12-26 13:41:11.430285] I [socket.c:3391:socket_submit_request] 0-glusterfs: not connected (priv->connected = 0)
> [2016-12-26 13:41:11.430320] W [rpc-clnt.c:1640:rpc_clnt_submit] 0-glusterfs: failed to submit rpc-request (XID: 0x1 Program: Gluster CLI, ProgVers: 2, Proc: 5) to rpc-transport (glusterfs)
> [2016-12-26 13:41:24.967491] I [cli.c:730:main] 0-cli: Started running gluster with version 3.8.5
> [2016-12-26 13:41:24.972755] I [cli-cmd-volume.c:1828:cli_check_gsync_present] 0-: geo-replication not installed
> [2016-12-26 13:41:24.973014] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
> [2016-12-26 13:41:24.973080] I [socket.c:2403:socket_event_handler] 0-transport: disconnecting now
> [2016-12-26 13:41:24.973552] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
> [2016-12-26 13:41:24.976419] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
> [2016-12-26 13:41:24.976957] I [cli-rpc-ops.c:817:gf_cli_get_volume_cbk] 0-cli: Received resp to get vol: 0
> [2016-12-26 13:41:24.976985] I [input.c:31:cli_batch] 0-: Exiting with: 0
>
> >sudo cat /var/log/glusterfs/srv-data-small.log
> [2016-12-26 13:46:53.407541] W [socket.c:590:__socket_rwv] 0-glusterfs: readv on 172.52.0.4:24007 failed (No data available)
> [2016-12-26 13:46:53.407997] E [glusterfsd-mgmt.c:1902:mgmt_rpc_notify] 0-glusterfsd-mgmt: failed to connect with remote-host: 172.52.0.4 (No data available)
> [2016-12-26 13:46:53.408079] I [glusterfsd-mgmt.c:1919:mgmt_rpc_notify] 0-glusterfsd-mgmt: Exhausted all volfile servers
> [2016-12-26 13:46:54.736497] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-3: changing port to 49155 (from 0)
> [2016-12-26 13:46:54.738710] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-7: changing port to 49156 (from 0)
> [2016-12-26 13:46:54.738766] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-reactive_small-client-11: changing port to 49157 (from 0)
> [2016-12-26 13:46:54.742911] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-reactive_small-client-3: Using Program GlusterFS 3.3, Num (1298437), Version (330)
> [2016-12-26 13:46:54.743199] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_program