[Gluster-users] Split brain directory

2018-01-24 Thread Luca Gervasi
Hello,
I'm trying to fix a directory split-brain issue on gluster 3.10.3. The
effect is that a specific file in this split directory is randomly
unavailable on some clients.
I have gathered all the information in this gist:
https://gist.githubusercontent.com/lucagervasi/534e0024d349933eef44615fa8a5c374/raw/52ff8dd6a9cc8ba09b7f258aa85743d2854f9acc/splitinfo.txt

I spotted the split directory through its extended attributes (lines
172, 173, 291, 292):
trusted.afr.dirty=0x
trusted.afr.vol-video-client-13=0x
seen on the bricks
* /bricks/video/brick3/safe/video.mysite.it/htdocs/ on glusterserver05
(lines 278 to 294)
* /bricks/video/brick3/safe/video.mysite.it/htdocs/ on glusterserver03
(lines 159 to 175)

Reading the documentation about AFR extended attributes ([1] and [2]),
this situation seems unclear: the brick's own changelog is 0, and so is
the changelog for client-13
(glusterserver02.mydomain.local:/bricks/video/brick3/safe).
To my understanding, such attribute values seem to indicate no split-brain
at all (feel free to correct me).

Some days ago I issued a "gluster volume heal vol-video full", which
ended (probably) that same day, leaving no information in
/var/log/gluster/glustershd.log and not fixing this split.
I tried to trigger a self-heal using "stat" and "ls -l" on the split
directory from a glusterfs-mounted client, but the bits described above
were not cleared.
"gluster volume heal vol-video info split-brain" itself shows zero entries
to be healed (lines 388 to 446).
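To sum up, the commands involved were roughly these (the fuse mount point
below is a placeholder for our real one):

  # full heal of the volume
  gluster volume heal vol-video full
  # trigger a lookup-driven self-heal from a fuse client
  stat /mnt/vol-video/video.mysite.it/htdocs/
  ls -l /mnt/vol-video/video.mysite.it/htdocs/
  # check what the self-heal daemon still reports as pending or in split-brain
  gluster volume heal vol-video info
  gluster volume heal vol-video info split-brain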

All the clients mount this volume using glusterfs-fuse.

I don't know what to do, please help.

Thanks.

Luca Gervasi

References:
[1]
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
[2]
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html/administration_guide/sect-managing_split-brain
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Issue with duplicated files in gluster 3.10

2017-03-13 Thread Luca Gervasi
Hi and thanks for your time.
This setting is "off" by default. We enabled it some time last week, after
the issue had been detected, without any noticeable improvement.
Is this setting meant to prevent the creation of duplicates, or to fix the
situation live?
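
For completeness, toggling and verifying it would be something like this
(a sketch; "myvol" stands for our actual volume name):

  gluster volume set myvol performance.parallel-readdir off
  gluster volume get myvol performance.parallel-readdir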

Thanks
Luca & Andrea

On Mon, 13 Mar 2017 at 14:03 Ravishankar N  wrote:

> On 03/13/2017 03:08 PM, Luca Gervasi wrote:
>
> Hello Ravishankar,
> we had to change the directory as we fixed that one, so please check these
> links, which refer to a new (broken) path:
>
> https://nopaste.me/view/1ee13a63 LS debug log
> https://nopaste.me/view/80ac1e13 getfattr
> https://nopaste.me/view/eafb0b44 volume status
>
> Thanks, could you check if setting performance.parallel-readdir to 'off'
> solves the issue? If yes, do you mind raising a bug and letting us know the
> BZ ID?
> Please note that parallel-readdir option is still experimental.
> -Ravi
>
>
> Thanks in advance.
>
> Luca & Andrea
>
>
> On Sat, 11 Mar 2017 at 02:01 Ravishankar N  wrote:
>
> On 03/10/2017 10:32 PM, Luca Gervasi wrote:
>
> Hi,
> I'm Andrea's colleague. I'd like to add that we have no trusted.afr xattr
> on the root folder
>
> Just to confirm, this would be 'includes2013' right?
>
> where those files are located and every file seems to be clean on each
> brick.
> You can find another example file's xattr here:
> https://nopaste.me/view/3c2014ac
> Here a listing: https://nopaste.me/view/eb4430a2
> This behavior makes the directory which contains those files undeletable
> (we had to clean them up at the brick level, removing all the hard links too).
> This issue is visible on fuse mounted volumes while it's not noticeable
> when mounted in NFS through ganesha.
>
> Could you provide the complete output of `gluster volume info`? I want to
> find out which bricks constitute a replica pair.
>
> Also could you change the diagnostics.client-log-level to DEBUG
> temporarily, do an `ls ` on the
> fuse mount  and share the corresponding mount log?
> Thanks,
> Ravi
>
> Thanks a lot.
>
> Luca Gervasi
>
>
>
> On Fri, 10 Mar 2017 at 17:41 Andrea Fogazzi  wrote:
>
> Hi community,
>
> we ran into an extensive issue on our installation of gluster 3.10, which
> we upgraded from 3.8.8 (it's a distribute+replicate, 5 nodes, 3 bricks in
> replica 2+1 quorum); recently we noticed a frequent issue where files get
> duplicated in some of the directories; this is visible on the fuse
> mount points (RW), but not on the NFS/Ganesha (RO) mount points.
>
>
> A sample of an ll output:
>
>
> -T 1 48 web_rw 0 Mar 10 11:57 paginazione.shtml
> -rw-rw-r-- 1 48 web_rw   272 Feb 18 22:00 paginazione.shtml
>
> As you can see, the file is listed twice, but only one of the two is good
> (the name is identical; we verified that no spurious/hidden characters are
> present in the name); the issue may be related to how we uploaded the
> files to the file system, via incremental rsync on the fuse mount.
>
> Does anyone have suggestions on how this can happen, how to resolve the
> existing duplication, or how to prevent it from happening again?
>
> Thanks in advance.
> Best regards,
> andrea
>
> Options Reconfigured:
> performance.cache-invalidation: true
> cluster.favorite-child-policy: mtime
> features.cache-invalidation: 1
> network.inode-lru-limit: 9
> performance.cache-size: 1024MB
> storage.linux-aio: on
> nfs.outstanding-rpc-limit: 64
> storage.build-pgfid: on
> cluster.server-quorum-type: server
> cluster.self-heal-daemon: enable
> performance.nfs.io-cache: on
> performance.client-io-threads: on
> performance.nfs.stat-prefetch: on
> performance.nfs.io-threads: on
> diagnostics.latency-measurement: on
> diagnostics.count-fop-hits: on
> performance.md-cache-timeout: 1
> performance.io-thread-count: 16
> performance.high-prio-threads: 32
> performance.normal-prio-threads: 32
> performance.low-prio-threads: 32
> performance.least-prio-threads: 1
> nfs.acl: off
> nfs.rpc-auth-unix: off
> diagnostics.client-log-level: ERROR
> diagnostics.brick-log-level: ERROR
> cluster.lookup-unhashed: auto
> performance.nfs.quick-read: on
> performance.nfs.read-ahead: on
> cluster.quorum-type: auto
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> cluster.lookup-optimize: on
> cluster.readdir-optimize: on
> performance.read-ahead: off
> performance.write-behind-window-size: 1MB
> client.event-threads: 4
> server.event-threads: 16
> cluster.granular-entry-heal: enable
> performance.parallel-readdir: on
> cluster.server-quorum-ratio: 51
>
>
>
> Andrea Fogazzi
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Issue with duplicated files in gluster 3.10

2017-03-13 Thread Luca Gervasi
Hello Ravishankar,
we had to change the directory as we fixed that one, so please check these
links, which refer to a new (broken) path:

https://nopaste.me/view/1ee13a63 LS debug log
https://nopaste.me/view/80ac1e13 getfattr
https://nopaste.me/view/eafb0b44 volume status

Thanks in advance.

Luca & Andrea


On Sat, 11 Mar 2017 at 02:01 Ravishankar N  wrote:

> On 03/10/2017 10:32 PM, Luca Gervasi wrote:
>
> Hi,
> I'm Andrea's colleague. I'd like to add that we have no trusted.afr xattr
> on the root folder
>
> Just to confirm, this would be 'includes2013' right?
>
> where those files are located and every file seems to be clean on each
> brick.
> You can find another example file's xattr here:
> https://nopaste.me/view/3c2014ac
> Here a listing: https://nopaste.me/view/eb4430a2
> This behavior makes the directory which contains those files undeletable
> (we had to clean them up at the brick level, removing all the hard links too).
> This issue is visible on fuse mounted volumes while it's not noticeable
> when mounted in NFS through ganesha.
>
> Could you provide the complete output of `gluster volume info`? I want to
> find out which bricks constitute a replica pair.
>
> Also could you change the diagnostics.client-log-level to DEBUG
> temporarily, do an `ls ` on the
> fuse mount  and share the corresponding mount log?
> Thanks,
> Ravi
>
> Thanks a lot.
>
> Luca Gervasi
>
>
>
> On Fri, 10 Mar 2017 at 17:41 Andrea Fogazzi  wrote:
>
> Hi community,
>
> we ran into an extensive issue on our installation of gluster 3.10, which
> we upgraded from 3.8.8 (it's a distribute+replicate, 5 nodes, 3 bricks in
> replica 2+1 quorum); recently we noticed a frequent issue where files get
> duplicated in some of the directories; this is visible on the fuse
> mount points (RW), but not on the NFS/Ganesha (RO) mount points.
>
>
> A sample of an ll output:
>
>
> -T 1 48 web_rw 0 Mar 10 11:57 paginazione.shtml
> -rw-rw-r-- 1 48 web_rw   272 Feb 18 22:00 paginazione.shtml
>
> As you can see, the file is listed twice, but only one of the two is good
> (the name is identical; we verified that no spurious/hidden characters are
> present in the name); the issue may be related to how we uploaded the
> files to the file system, via incremental rsync on the fuse mount.
>
> Does anyone have suggestions on how this can happen, how to resolve the
> existing duplication, or how to prevent it from happening again?
>
> Thanks in advance.
> Best regards,
> andrea
>
> Options Reconfigured:
> performance.cache-invalidation: true
> cluster.favorite-child-policy: mtime
> features.cache-invalidation: 1
> network.inode-lru-limit: 9
> performance.cache-size: 1024MB
> storage.linux-aio: on
> nfs.outstanding-rpc-limit: 64
> storage.build-pgfid: on
> cluster.server-quorum-type: server
> cluster.self-heal-daemon: enable
> performance.nfs.io-cache: on
> performance.client-io-threads: on
> performance.nfs.stat-prefetch: on
> performance.nfs.io-threads: on
> diagnostics.latency-measurement: on
> diagnostics.count-fop-hits: on
> performance.md-cache-timeout: 1
> performance.io-thread-count: 16
> performance.high-prio-threads: 32
> performance.normal-prio-threads: 32
> performance.low-prio-threads: 32
> performance.least-prio-threads: 1
> nfs.acl: off
> nfs.rpc-auth-unix: off
> diagnostics.client-log-level: ERROR
> diagnostics.brick-log-level: ERROR
> cluster.lookup-unhashed: auto
> performance.nfs.quick-read: on
> performance.nfs.read-ahead: on
> cluster.quorum-type: auto
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> cluster.lookup-optimize: on
> cluster.readdir-optimize: on
> performance.read-ahead: off
> performance.write-behind-window-size: 1MB
> client.event-threads: 4
> server.event-threads: 16
> cluster.granular-entry-heal: enable
> performance.parallel-readdir: on
> cluster.server-quorum-ratio: 51
>
>
>
> Andrea Fogazzi
>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Issue with duplicated files in gluster 3.10

2017-03-10 Thread Luca Gervasi
Hi,
I'm Andrea's colleague. I'd like to add that we have no trusted.afr xattr on
the root folder where those files are located and every file seems to be
clean on each brick.
You can find another example file's xattr here:
https://nopaste.me/view/3c2014ac
Here a listing: https://nopaste.me/view/eb4430a2
This behavior makes the directory which contains those files undeletable
(we had to clean them up at the brick level, removing all the hard links too).
This issue is visible on fuse mounted volumes while it's not noticeable
when mounted in NFS through ganesha.
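
For the record, the brick-level cleanup went roughly like this (a sketch;
the brick path, file name and gfid are placeholders for the real ones): on
every brick we looked up the file's gfid and removed both the named file
and its hard link under .glusterfs.

  # on each brick that holds a copy of the broken entry
  getfattr -n trusted.gfid -e hex /bricks/web/brick1/htdocs/includes2013/somefile.shtml
  # if the gfid is e.g. 0xd1e2..., the second hard link lives under
  # .glusterfs/d1/e2/<gfid in uuid form>
  rm /bricks/web/brick1/htdocs/includes2013/somefile.shtml
  rm /bricks/web/brick1/.glusterfs/d1/e2/d1e2xxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx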

Thanks a lot.

Luca Gervasi



On Fri, 10 Mar 2017 at 17:41 Andrea Fogazzi  wrote:

> Hi community,
>
> we ran into an extensive issue on our installation of gluster 3.10, which
> we upgraded from 3.8.8 (it's a distribute+replicate, 5 nodes, 3 bricks in
> replica 2+1 quorum); recently we noticed a frequent issue where files get
> duplicated in some of the directories; this is visible on the fuse
> mount points (RW), but not on the NFS/Ganesha (RO) mount points.
>
>
> A sample of an ll output:
>
>
> -T 1 48 web_rw 0 Mar 10 11:57 paginazione.shtml
> -rw-rw-r-- 1 48 web_rw   272 Feb 18 22:00 paginazione.shtml
>
> As you can see, the file is listed twice, but only one of the two is good
> (the name is identical; we verified that no spurious/hidden characters are
> present in the name); the issue may be related to how we uploaded the
> files to the file system, via incremental rsync on the fuse mount.
>
> Does anyone have suggestions on how this can happen, how to resolve the
> existing duplication, or how to prevent it from happening again?
>
> Thanks in advance.
> Best regards,
> andrea
>
> Options Reconfigured:
> performance.cache-invalidation: true
> cluster.favorite-child-policy: mtime
> features.cache-invalidation: 1
> network.inode-lru-limit: 9
> performance.cache-size: 1024MB
> storage.linux-aio: on
> nfs.outstanding-rpc-limit: 64
> storage.build-pgfid: on
> cluster.server-quorum-type: server
> cluster.self-heal-daemon: enable
> performance.nfs.io-cache: on
> performance.client-io-threads: on
> performance.nfs.stat-prefetch: on
> performance.nfs.io-threads: on
> diagnostics.latency-measurement: on
> diagnostics.count-fop-hits: on
> performance.md-cache-timeout: 1
> performance.io-thread-count: 16
> performance.high-prio-threads: 32
> performance.normal-prio-threads: 32
> performance.low-prio-threads: 32
> performance.least-prio-threads: 1
> nfs.acl: off
> nfs.rpc-auth-unix: off
> diagnostics.client-log-level: ERROR
> diagnostics.brick-log-level: ERROR
> cluster.lookup-unhashed: auto
> performance.nfs.quick-read: on
> performance.nfs.read-ahead: on
> cluster.quorum-type: auto
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full
> transport.address-family: inet
> performance.readdir-ahead: on
> nfs.disable: on
> cluster.lookup-optimize: on
> cluster.readdir-optimize: on
> performance.read-ahead: off
> performance.write-behind-window-size: 1MB
> client.event-threads: 4
> server.event-threads: 16
> cluster.granular-entry-heal: enable
> performance.parallel-readdir: on
> cluster.server-quorum-ratio: 51
>
>
>
> Andrea Fogazzi
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Glusterd seems to be ignoring that the underlying filesystem went missing

2016-09-26 Thread Luca Gervasi
Hi guys,
I've got a strange problem involving this timeline (it matches the "Log
fragment 1" excerpt):
19:56:50: disk is detached from my system. This disk is actually the brick
of the volume V.
19:56:50: LVM sees the disk as unreachable and starts its maintenance
procedures
19:56:50: LVM unmounts my thin-provisioned volumes
19:57:02: health check on specific bricks fails, thus moving the brick to a
down state
19:57:32: XFS filesystem unmounts

At this point, the brick filesystem is no longer mounted. The underlying
filesystem is empty (it is missing the brick directory too). My assumption
was that gluster would stop itself under such conditions: it does not.
Gluster slowly fills my entire root partition, recreating its full tree.

My only warning sign is that the root disk starts filling up to 100% of its
inodes.
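
A periodic check of inode usage is enough to spot this, e.g.:

  df -i /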

I've read the release notes for every version subsequent to mine (3.7.14,
3.7.15) without finding relevant fixes, and at this point I'm pretty sure
this is some undocumented bug.
The servers are configured symmetrically.

Could you please help me understand how to prevent gluster from continuing
to write to an unmounted filesystem? Thanks.
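
The only stopgap I can think of is a crude watchdog along these lines (just
a sketch built around the fstab entry further down, not a gluster feature):

  #!/bin/bash
  # if the brick filesystem is not mounted, stop glusterd and kill the brick
  # process for this volume, so nothing gets written into the root filesystem
  if ! mountpoint -q /bricks/vol-homes; then
      systemctl stop glusterd
      pkill -f 'glusterfsd.*vol-homes'
  fi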

I'm running a 3-node replica on 3 Azure VMs. This is the configuration:

MD (yes, I use md to aggregate 4 disks into a single 4 TB volume):
/dev/md128:
Version : 1.2
  Creation Time : Mon Aug 29 18:10:45 2016
 Raid Level : raid0
 Array Size : 4290248704 (4091.50 GiB 4393.21 GB)
   Raid Devices : 4
  Total Devices : 4
Persistence : Superblock is persistent

Update Time : Mon Aug 29 18:10:45 2016
  State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

 Chunk Size : 512K

   Name : 128
   UUID : d5c51214:43e48da9:49086616:c1371514
 Events : 0

Number   Major   Minor   RaidDevice State
   0       8       80        0      active sync   /dev/sdf
   1       8       96        1      active sync   /dev/sdg
   2       8      112        2      active sync   /dev/sdh
   3       8      128        3      active sync   /dev/sdi

PV, VG, LV status
  PV         VG      Fmt  Attr PSize PFree DevSize PV UUID
  /dev/md127 VGdata  lvm2 a--  2.00t 2.00t   2.00t Kxb6C0-FLIH-4rB1-DKyf-IQuR-bbPE-jm2mu0
  /dev/md128 gluster lvm2 a--  4.00t 1.07t   4.00t lDazuw-zBPf-Duis-ZDg1-3zfg-53Ba-2ZF34m

  VG      Attr   Ext   #PV #LV #SN VSize VFree VG UUID
  VGdata  wz--n- 4.00m   1   0   0 2.00t 2.00t XI2V2X-hdxU-0Jrn-TN7f-GSEk-7aNs-GCdTtn
  gluster wz--n- 4.00m   1   6   0 4.00t 1.07t ztxX4f-vTgN-IKop-XePU-OwqW-T9k6-A6uDk0

  LV                  VG      #Seg Attr       LSize   Maj Min KMaj KMin Pool     Data%  Meta%  LV UUID
  apps-data           gluster    1 Vwi-aotz--  50.00g  -1  -1  253   12 thinpool  0.08         znUMbm-ax1N-R7aj-dxLc-gtif-WOvk-9QC8tq
  feed                gluster    1 Vwi-aotz-- 100.00g  -1  -1  253   14 thinpool  0.08         hZ4Isk-dELG-lgFs-2hJ6-aYid-8VKg-3jJko9
  homes               gluster    1 Vwi-aotz--   1.46t  -1  -1  253   11 thinpool 58.58         salIPF-XvsA-kMnm-etjf-Uaqy-2vA9-9WHPkH
  search-data         gluster    1 Vwi-aotz-- 100.00g  -1  -1  253   13 thinpool 16.41         Z5hoa3-yI8D-dk5Q-2jWH-N5R2-ge09-RSjPpQ
  thinpool            gluster    1 twi-aotz--   2.93t  -1  -1  253    9          29.85  60.00  oHTbgW-tiPh-yDfj-dNOm-vqsF-fBNH-o1izx2
  video-asset-manager gluster    1 Vwi-aotz-- 100.00g  -1  -1  253   15 thinpool  0.07         4dOXga-96Wa-u3mh-HMmE-iX1I-o7ov-dtJ8lZ

Gluster volume configuration (all volumes use the exact same configuration;
listing them all would be redundant):
Volume Name: vol-homes
Type: Replicate
Volume ID: 0c8fa62e-dd7e-429c-a19a-479404b5e9c6
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: glu01.prd.azr:/bricks/vol-homes/brick1
Brick2: glu02.prd.azr:/bricks/vol-homes/brick1
Brick3: glu03.prd.azr:/bricks/vol-homes/brick1
Options Reconfigured:
performance.readdir-ahead: on
cluster.server-quorum-type: server
nfs.disable: disable
cluster.lookup-unhashed: auto
performance.nfs.quick-read: on
performance.nfs.read-ahead: on
performance.cache-size: 4096MB
cluster.self-heal-daemon: enable
diagnostics.brick-log-level: ERROR
diagnostics.client-log-level: ERROR
nfs.rpc-auth-unix: off
nfs.acl: off
performance.nfs.io-cache: on
performance.client-io-threads: on
performance.nfs.stat-prefetch: on
performance.nfs.io-threads: on
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
performance.md-cache-timeout: 1
performance.cache-refresh-timeout: 1
performance.io-thread-count: 16
performance.high-prio-threads: 16
performance.normal-prio-threads: 16
performance.low-prio-threads: 16
performance.least-prio-threads: 1
cluster.server-quorum-ratio: 60

fstab:
/dev/gluster/homes  /bricks/vol-homes  xfs  defaults,noatime,nobarrier,nofail  0 2

Software:
CentOS Linux release 7.1.1503 (Core)
glusterfs-api-3.7.13-1.el7.x86_64
glusterfs-libs-3.7.13-1.el
