[Gluster-users] gluster volume create failed: Host is not in 'Peer in Cluster' state

2018-05-22 Thread Brian Andrus

All,

Running glusterfs-4.0.2-1 on CentOS 7.5.1804

I have 10 servers running in a pool. All show as connected when I do 
gluster peer status and gluster pool list.


There is 1 volume running that is distributed on servers 1-5.

Whenever I try to use a brick on server7, volume creation always fails with:
volume create: GDATA: failed: Host server7 is not in 'Peer in Cluster' state


That happens even when the command is run ON server7 itself:
gluster volume create GDATA transport tcp server7:/GLUSTER/brick1

I have detached and re-probed the server. It seems all happy, but it 
will NOT allow any sort of volume to be created on it.
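For reference, the checks I plan to run next; this is only a rough sketch (it assumes the default glusterd state directory and systemd on CentOS 7), not a definitive fix:

# compare how the pool sees server7 from both sides
gluster peer status
gluster pool list

# inspect the stored peer records; a hostname/UUID mismatch or a stale
# entry for server7 would show up here
cat /var/lib/glusterd/peers/*

# make sure server7 resolves to the same address everywhere, then
# restart the management daemon on the affected node
getent hosts server7
systemctl restart glusterd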


Any ideas out there?

Brian Andrus

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] replicate a distributed volume

2018-05-22 Thread Thing
I would like to know how, if it is possible. I tried with 4 nodes and couldn't make it
work. I think I need groups of 3, so 6 or 9 nodes? I don't know; the docs are vague on this. See the sketch below.
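My current understanding (which may well be wrong) is that bricks have to be supplied in multiples of the replica count, so with replica 3 you would need 3, 6, 9, ... bricks. A sketch with hypothetical host and brick names:

gluster volume create testvol replica 3 \
    node1:/bricks/brick1/testvol node2:/bricks/brick1/testvol node3:/bricks/brick1/testvol \
    node4:/bricks/brick1/testvol node5:/bricks/brick1/testvol node6:/bricks/brick1/testvol

That should give a 2 x 3 distributed-replicate; with only 4 nodes it would presumably have to be replica 2 (4 being a multiple of 2) rather than replica 3.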



On 23 May 2018 at 13:12, Brian Andrus  wrote:

> All,
>
> With Gluster 4.0.2, is it possible to take an existing distributed volume
> and turn it into a distributed-replicate by adding servers/bricks?
>
> It seems this should be possible, but I don't know that anything has been
> done to get it there.
>
> Brian Andrus
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] replicate a distributed volume

2018-05-22 Thread Brian Andrus

All,

With Gluster 4.0.2, is it possible to take an existing distributed 
volume and turn it into a distributed-replicate by adding servers/bricks?


It seems this should be possible, but I don't know that anything has 
been done to get it there.
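The approach I was planning to try is add-brick with a new replica count, pairing each existing brick with a new one. This is only a sketch with assumed names (GDATA and server6-10 are just examples from my setup) and I have not verified the brick pairing order, so treat it accordingly:

gluster volume add-brick GDATA replica 2 \
    server6:/GLUSTER/brick1 server7:/GLUSTER/brick1 server8:/GLUSTER/brick1 \
    server9:/GLUSTER/brick1 server10:/GLUSTER/brick1

# then let self-heal copy the existing data onto the new replicas and watch progress
gluster volume heal GDATA full
gluster volume heal GDATA info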


Brian Andrus


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] split brain? but where?

2018-05-22 Thread Thing
Next I ran a test and your find works... I am wondering if I can simply
delete this GFID?

8><-
[root@glusterp2 fb]# find /bricks/brick1/gv0/ -samefile
/bricks/brick1/gv0/.glusterfs/74/75/7475bd15-05a6-45c2-b395-bc9fd3d1763f
/bricks/brick1/gv0/.glusterfs/74/75/7475bd15-05a6-45c2-b395-bc9fd3d1763f
/bricks/brick1/gv0/dell6430-001/virtual-machines-backups/21-3-2018/images/debian9-002.qcow2
[root@glusterp2 fb]#
8><-

On another node, likewise nothing beyond the GFID path is returned,

8><
[root@glusterp1 ~]# find /bricks/brick1/gv0 -samefile
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root@glusterp1 ~]#
8><

and on the final node,
8><---
[root@glusterp3 ~]# find /bricks/brick1/gv0 -samefile
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root@glusterp3 ~]#
8><---

So none of the 3 nodes has an actual file?
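To double-check that, a small sketch I intend to run on each brick (assuming GNU stat/find; the link count of a healthy regular file should be at least 2, the real path plus the .glusterfs hard link, so a count of 1 would mean only the .glusterfs entry is left):

8><---
stat -c '%h links, inode %i, %s bytes  %n' \
    /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693

# cross-check by inode number, which is what find -samefile matches on
find /bricks/brick1/gv0 -inum $(stat -c %i \
    /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693)
8><---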



On 23 May 2018 at 08:35, Thing  wrote:

> I tried looking for a file of the same size and the gfid doesnt show up,
>
> 8><---
> [root@glusterp2 fb]# pwd
> /bricks/brick1/gv0/.glusterfs/ea/fb
> [root@glusterp2 fb]# ls -al
> total 3130892
> drwx--. 2 root root 64 May 22 13:01 .
> drwx--. 4 root root 24 May  8 14:27 ..
> -rw---. 1 root root 3294887936 May  4 11:07 eafb8799-4e7a-4264-9213-
> 26997c5a4693
> -rw-r--r--. 1 root root   1396 May 22 13:01 gfid.run
>
> so the gfid seems largebut du cant see it...
>
> [root@glusterp2 fb]# du -a /bricks/brick1/gv0 | sort -n -r | head -n 10
> 275411712 /bricks/brick1/gv0
> 275411696 /bricks/brick1/gv0/.glusterfs
> 22484988 /bricks/brick1/gv0/.glusterfs/57
> 20974980 /bricks/brick1/gv0/.glusterfs/a5
> 20974976 /bricks/brick1/gv0/.glusterfs/d5/99/d5991797-848d-4d60-b9dc-
> d31174f63f72
> 20974976 /bricks/brick1/gv0/.glusterfs/d5/99
> 20974976 /bricks/brick1/gv0/.glusterfs/d5
> 20974976 /bricks/brick1/gv0/.glusterfs/a5/27/a5279083-4d24-4dc8-be2d-
> 4f507c5372cf
> 20974976 /bricks/brick1/gv0/.glusterfs/a5/27
> 20974976 /bricks/brick1/gv0/.glusterfs/74/75/7475bd15-05a6-45c2-b395-
> bc9fd3d1763f
> [root@glusterp2 fb]#
>
> 8><---
>
>
>
> On 23 May 2018 at 08:29, Thing  wrote:
>
>> I tried this already.
>>
>> 8><---
>> [root@glusterp2 fb]# find /bricks/brick1/gv0 -samefile
>> /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
>> /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
>> [root@glusterp2 fb]#
>> 8><---
>>
>> gluster 4
>> Centos 7.4
>>
>> 8><---
>> df -h
>> [root@glusterp2 fb]# df -h
>> Filesystem   Size  Used Avail
>> Use% Mounted on
>> /dev/mapper/centos-root   19G  3.4G
>>  16G  19% /
>> devtmpfs 3.8G 0
>> 3.8G   0% /dev
>> tmpfs3.8G   12K
>> 3.8G   1% /dev/shm
>> tmpfs3.8G  9.0M
>> 3.8G   1% /run
>> tmpfs3.8G 0
>> 3.8G   0% /sys/fs/cgroup
>> /dev/mapper/centos-data1 112G   33M
>> 112G   1% /data1
>> /dev/mapper/centos-var19G  219M
>>  19G   2% /var
>> /dev/mapper/centos-home   47G   38M
>>  47G   1% /home
>> /dev/mapper/centos-var_lib   9.4G  178M
>> 9.2G   2% /var/lib
>> /dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2  932G  263G
>> 669G  29% /bricks/brick1
>> /dev/sda1950M  235M
>> 715M  25% /boot
>> 8><---
>>
>>
>>
>> So the output isnt helping..
>>
>>
>>
>>
>>
>>
>>
>> On 23 May 2018 at 00:29, Karthik Subrahmanya  wrote:
>>
>>> Hi,
>>>
>>> Which version of gluster you are using?
>>>
>>> You can find which file is that using the following command
>>> find  -samefile >> gfid>//
>>>
>>> Please provide the getfatr output of the file which is in split brain.
>>> The steps to recover from split-brain can be found here,
>>> http://gluster.readthedocs.io/en/latest/Troubleshooting/reso
>>> lving-splitbrain/
>>>
>>> HTH,
>>> Karthik
>>>
>>> On Tue, May 22, 2018 at 4:03 AM, Joe Julian 
>>> wrote:
>>>
 How do I find what  "eafb8799-4e7a-4264-9213-26997c5a4693"  is?

 https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/


 On May 21, 2018 3:22:01 PM PDT, Thing  wrote:
 >Hi,
 >
 >I seem to have a split brain issue, but I cannot figure out where this
 >is
 >and what it is, can someone help me pls,  I cant find what to fix here.
 >
 >==
 >root@salt-001:~# salt gluster* cmd.run 'df -h'
 >glusterp2.graywitch.co.nz:
 >Filesystem  

Re: [Gluster-users] split brain? but where?

2018-05-22 Thread Thing
I tried looking for a file of the same size and the gfid doesn't show up,

8><---
[root@glusterp2 fb]# pwd
/bricks/brick1/gv0/.glusterfs/ea/fb
[root@glusterp2 fb]# ls -al
total 3130892
drwx------. 2 root root         64 May 22 13:01 .
drwx------. 4 root root         24 May  8 14:27 ..
-rw-------. 1 root root 3294887936 May  4 11:07 eafb8799-4e7a-4264-9213-26997c5a4693
-rw-r--r--. 1 root root       1396 May 22 13:01 gfid.run

so the gfid file seems large... but du can't see it...

[root@glusterp2 fb]# du -a /bricks/brick1/gv0 | sort -n -r | head -n 10
275411712 /bricks/brick1/gv0
275411696 /bricks/brick1/gv0/.glusterfs
22484988 /bricks/brick1/gv0/.glusterfs/57
20974980 /bricks/brick1/gv0/.glusterfs/a5
20974976 /bricks/brick1/gv0/.glusterfs/d5/99/d5991797-848d-4d60-b9dc-d31174f63f72
20974976 /bricks/brick1/gv0/.glusterfs/d5/99
20974976 /bricks/brick1/gv0/.glusterfs/d5
20974976 /bricks/brick1/gv0/.glusterfs/a5/27/a5279083-4d24-4dc8-be2d-4f507c5372cf
20974976 /bricks/brick1/gv0/.glusterfs/a5/27
20974976 /bricks/brick1/gv0/.glusterfs/74/75/7475bd15-05a6-45c2-b395-bc9fd3d1763f
[root@glusterp2 fb]#

8><---



On 23 May 2018 at 08:29, Thing  wrote:

> I tried this already.
>
> 8><---
> [root@glusterp2 fb]# find /bricks/brick1/gv0 -samefile
> /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
> /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
> [root@glusterp2 fb]#
> 8><---
>
> gluster 4
> Centos 7.4
>
> 8><---
> df -h
> [root@glusterp2 fb]# df -h
> Filesystem   Size  Used Avail
> Use% Mounted on
> /dev/mapper/centos-root   19G  3.4G   16G
> 19% /
> devtmpfs 3.8G 0  3.8G
>  0% /dev
> tmpfs3.8G   12K  3.8G
>  1% /dev/shm
> tmpfs3.8G  9.0M  3.8G
>  1% /run
> tmpfs3.8G 0  3.8G
>  0% /sys/fs/cgroup
> /dev/mapper/centos-data1 112G   33M  112G
>  1% /data1
> /dev/mapper/centos-var19G  219M   19G
>  2% /var
> /dev/mapper/centos-home   47G   38M   47G
>  1% /home
> /dev/mapper/centos-var_lib   9.4G  178M  9.2G
>  2% /var/lib
> /dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2  932G  263G
> 669G  29% /bricks/brick1
> /dev/sda1950M  235M  715M
> 25% /boot
> 8><---
>
>
>
> So the output isnt helping..
>
>
>
>
>
>
>
> On 23 May 2018 at 00:29, Karthik Subrahmanya  wrote:
>
>> Hi,
>>
>> Which version of gluster you are using?
>>
>> You can find which file is that using the following command
>> find  -samefile > gfid>//
>>
>> Please provide the getfatr output of the file which is in split brain.
>> The steps to recover from split-brain can be found here,
>> http://gluster.readthedocs.io/en/latest/Troubleshooting/reso
>> lving-splitbrain/
>>
>> HTH,
>> Karthik
>>
>> On Tue, May 22, 2018 at 4:03 AM, Joe Julian  wrote:
>>
>>> How do I find what  "eafb8799-4e7a-4264-9213-26997c5a4693"  is?
>>>
>>> https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/
>>>
>>>
>>> On May 21, 2018 3:22:01 PM PDT, Thing  wrote:
>>> >Hi,
>>> >
>>> >I seem to have a split brain issue, but I cannot figure out where this
>>> >is
>>> >and what it is, can someone help me pls,  I cant find what to fix here.
>>> >
>>> >==
>>> >root@salt-001:~# salt gluster* cmd.run 'df -h'
>>> >glusterp2.graywitch.co.nz:
>>> >Filesystem   Size  Used
>>> >Avail Use% Mounted on
>>> >/dev/mapper/centos-root   19G  3.4G
>>> > 16G  19% /
>>> >devtmpfs 3.8G 0
>>> >3.8G   0% /dev
>>> >tmpfs3.8G   12K
>>> >3.8G   1% /dev/shm
>>> >tmpfs3.8G  9.1M
>>> >3.8G   1% /run
>>> >tmpfs3.8G 0
>>> >3.8G   0% /sys/fs/cgroup
>>> >/dev/mapper/centos-tmp   3.8G   33M
>>> >3.7G   1% /tmp
>>> >/dev/mapper/centos-var19G  213M
>>> > 19G   2% /var
>>> >/dev/mapper/centos-home   47G   38M
>>> > 47G   1% /home
>>> >/dev/mapper/centos-data1 112G   33M
>>> >112G   1% /data1
>>> >/dev/mapper/centos-var_lib   9.4G  178M
>>> >9.2G   2% /var/lib
>>> >/dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2  932G  264G
>>> >668G  29% /bricks/brick1
>>> >/dev/sda1950M  

Re: [Gluster-users] split brain? but where?

2018-05-22 Thread Thing
I tried this already.

8><---
[root@glusterp2 fb]# find /bricks/brick1/gv0 -samefile
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
/bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693
[root@glusterp2 fb]#
8><---

gluster 4
Centos 7.4

8><---
df -h
[root@glusterp2 fb]# df -h
Filesystem                                               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root                                   19G  3.4G   16G  19% /
devtmpfs                                                 3.8G     0  3.8G   0% /dev
tmpfs                                                    3.8G   12K  3.8G   1% /dev/shm
tmpfs                                                    3.8G  9.0M  3.8G   1% /run
tmpfs                                                    3.8G     0  3.8G   0% /sys/fs/cgroup
/dev/mapper/centos-data1                                 112G   33M  112G   1% /data1
/dev/mapper/centos-var                                    19G  219M   19G   2% /var
/dev/mapper/centos-home                                   47G   38M   47G   1% /home
/dev/mapper/centos-var_lib                               9.4G  178M  9.2G   2% /var/lib
/dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2  932G  263G  669G  29% /bricks/brick1
/dev/sda1                                                950M  235M  715M  25% /boot
8><---



So the output isn't helping...







On 23 May 2018 at 00:29, Karthik Subrahmanya  wrote:

> Hi,
>
> Which version of gluster you are using?
>
> You can find which file that is using the following command:
> find <brick-path> -samefile <brick-path>/.glusterfs/<first two chars of gfid>/<next two chars of gfid>/<full gfid>
>
> Please provide the getfattr output of the file which is in split brain.
> The steps to recover from split-brain can be found here,
> http://gluster.readthedocs.io/en/latest/Troubleshooting/
> resolving-splitbrain/
>
> HTH,
> Karthik
>
> On Tue, May 22, 2018 at 4:03 AM, Joe Julian  wrote:
>
>> How do I find what  "eafb8799-4e7a-4264-9213-26997c5a4693"  is?
>>
>> https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/
>>
>>
>> On May 21, 2018 3:22:01 PM PDT, Thing  wrote:
>> >Hi,
>> >
>> >I seem to have a split brain issue, but I cannot figure out where this
>> >is
>> >and what it is, can someone help me pls,  I cant find what to fix here.
>> >
>> >==
>> >root@salt-001:~# salt gluster* cmd.run 'df -h'
>> >glusterp2.graywitch.co.nz:
>> >Filesystem   Size  Used
>> >Avail Use% Mounted on
>> >/dev/mapper/centos-root   19G  3.4G
>> > 16G  19% /
>> >devtmpfs 3.8G 0
>> >3.8G   0% /dev
>> >tmpfs3.8G   12K
>> >3.8G   1% /dev/shm
>> >tmpfs3.8G  9.1M
>> >3.8G   1% /run
>> >tmpfs3.8G 0
>> >3.8G   0% /sys/fs/cgroup
>> >/dev/mapper/centos-tmp   3.8G   33M
>> >3.7G   1% /tmp
>> >/dev/mapper/centos-var19G  213M
>> > 19G   2% /var
>> >/dev/mapper/centos-home   47G   38M
>> > 47G   1% /home
>> >/dev/mapper/centos-data1 112G   33M
>> >112G   1% /data1
>> >/dev/mapper/centos-var_lib   9.4G  178M
>> >9.2G   2% /var/lib
>> >/dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2  932G  264G
>> >668G  29% /bricks/brick1
>> >/dev/sda1950M  235M
>> >715M  25% /boot
>> >tmpfs771M   12K
>> >771M   1% /run/user/42
>> >glusterp2:gv0/glusterp2/images   932G  273G
>> >659G  30% /var/lib/libvirt/images
>> >glusterp2:gv0932G  273G
>> >659G  30% /isos
>> >tmpfs771M   48K
>> >771M   1% /run/user/1000
>> >tmpfs771M 0
>> >771M   0% /run/user/0
>> >glusterp1.graywitch.co.nz:
>> >   Filesystem Size  Used Avail Use%
>> >Mounted on
>> > /dev/mapper/centos-root 20G  3.5G   17G  18% /
>> >   devtmpfs   3.8G 0  3.8G   0%
>> >/dev
>> >   tmpfs  3.8G   12K  3.8G   1%
>> >/dev/shm
>> >   tmpfs  3.8G  9.0M  3.8G   1%
>> >/run
>> >   tmpfs  3.8G 0  3.8G   0%
>> >/sys/fs/cgroup
>> >   /dev/sda1  969M  206M  713M  23%
>> >/boot
>> >   /dev/mapper/centos-home 50G  4.3G   46G   9%
>> >/home
>> >   /dev/mapper/centos-tmp 

Re: [Gluster-users] @devel - Why no inotify?

2018-05-22 Thread Joe Julian
The gluster client is a userspace application that connects to the 
servers and then provides the filesystem interface to the kernel using 
the fuse module. The kernel then provides a mountable filesystem.


inotify is a kernel function that watches an inode for changes. That 
function would monitor the kernel-side of the fuse interface but it 
would have no idea what happens on the userspace side. How should a 
kernel be made aware of inode changes in a cluster? Should it notify 
regardless of which client changes the inode? Since the gluster client 
isn't made aware that the kernel is monitoring an inode, the client 
would have to notify fuse of every change in the entire cluster. That's 
far too much traffic for that solution to scale.


I think there were some kernel hackers working on an idea for that, but 
I don't know where that's at or how far along they got.


If you wrote your application to use libgfapi you could add a watch for 
a particular file and be notified of changes (I believe, don't start 
development without verifying this in the library code).


If any of this doesn't make sense, feel free to ask more. Like 
everything when you start operating clusters at scale, it's a 
complicated problem. :)
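For completeness, the brick-side workaround lemonnier describes below could be sketched roughly like this (it assumes inotify-tools is installed and a brick at /bricks/brick1/gv0; it only sees files that land on that particular brick, so on distribute/disperse you would need a watcher on every brick, and I have not tested it):

# watch the brick backend directly, skipping the .glusterfs metadata tree
inotifywait -m -r \
    --exclude '/\.glusterfs/' \
    -e create -e modify -e delete -e moved_to \
    /bricks/brick1/gv0 |
while read -r dir event file; do
    echo "brick event: $event $dir$file"   # forward to your own notification channel
done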



On 05/22/18 10:05, lejeczek wrote:

how about gluste's own client(s)?
You mount volume (locally to the server) via autofs/fstab and watch 
for inotify on that mountpoing(or path inside it).

That is something I expected was out-of-box.

On 03/05/18 17:44, Joe Julian wrote:
There is the ability to notify the client already. If you developed 
against libgfapi you could do it (I think).


On May 3, 2018 9:28:43 AM PDT, lemonni...@ulrar.net wrote:

    Hey,

    I thought about it a while back, haven't actually done it but I assume
    using inotify on the brick should work, at least in replica volumes
    (disperse probably wouldn't, you wouldn't get all events or you'd need
    to make sure your inotify runs on every brick). Then from there you
    could notify your clients, not ideal, but that should work.

    I agree that adding support for inotify directly into gluster would be
    great, but I'm not sure gluster has any mechanics for notifying clients
    of changes since most of the logic is in the client, as I understand it.


    On Thu, May 03, 2018 at 04:33:30PM +0100, lejeczek wrote:

    hi guys will we have gluster with inotify? some
    point / never? thanks, L.

    Gluster-users mailing list
    Gluster-users@gluster.org
    http://lists.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] @devel - Why no inotify?

2018-05-22 Thread lejeczek

How about gluster's own client(s)?
You mount the volume (locally on the server) via autofs/fstab
and watch for inotify events on that mountpoint (or a path inside it).

That is something I expected to work out of the box.
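Roughly what I had in mind, as a sketch only (hypothetical server/volume names; inotifywait comes from inotify-tools):

mount -t glusterfs localhost:/gv0 /mnt/gv0
inotifywait -m -r -e create -e modify -e delete /mnt/gv0

Though, going by the explanation elsewhere in the thread, this would presumably only report operations performed through this particular mount on this host, not writes made by other gluster clients.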

On 03/05/18 17:44, Joe Julian wrote:
There is the ability to notify the client already. If you 
developed against libgfapi you could do it (I think).


On May 3, 2018 9:28:43 AM PDT, lemonni...@ulrar.net wrote:

Hey,

I thought about it a while back, haven't actually done it but I assume
using inotify on the brick should work, at least in replica volumes
(disperse probably wouldn't, you wouldn't get all events or you'd need
to make sure your inotify runs on every brick). Then from there you
could notify your clients, not ideal, but that should work.

I agree that adding support for inotify directly into gluster would be
great, but I'm not sure gluster has any mechanics for notifying clients
of changes since most of the logic is in the client, as I understand it.

On Thu, May 03, 2018 at 04:33:30PM +0100, lejeczek wrote:

hi guys will we have gluster with inotify? some
point / never? thanks, L.

Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users




___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] split brain? but where?

2018-05-22 Thread Karthik Subrahmanya
Hi,

Which version of gluster are you using?

You can find which file that is using the following command:
find <brick-path> -samefile <brick-path>/.glusterfs/<first two chars of gfid>/<next two chars of gfid>/<full gfid>

Please provide the getfattr output of the file which is in split brain.
The steps to recover from split-brain can be found here,
http://gluster.readthedocs.io/en/latest/Troubleshooting/resolving-splitbrain/
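As a quick sketch of what that page walks through, using the gfid and the gv0 volume name from this thread purely as an example (double-check the exact syntax against the doc before running anything):

8><---
# on each brick, inspect the afr changelog / gfid xattrs of the problem file
getfattr -d -m . -e hex \
    /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693

# list the entries gluster itself flags as split-brain
gluster volume heal gv0 info split-brain

# one of the CLI resolution policies described there, picking a source brick
# (<path-on-volume> is a placeholder for the file's path relative to the volume root)
gluster volume heal gv0 split-brain source-brick glusterp1:/bricks/brick1/gv0 <path-on-volume>
8><---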

HTH,
Karthik

On Tue, May 22, 2018 at 4:03 AM, Joe Julian  wrote:

> How do I find what  "eafb8799-4e7a-4264-9213-26997c5a4693"  is?
>
> https://docs.gluster.org/en/v3/Troubleshooting/gfid-to-path/
>
> On May 21, 2018 3:22:01 PM PDT, Thing  wrote:
> >Hi,
> >
> >I seem to have a split brain issue, but I cannot figure out where this
> >is
> >and what it is, can someone help me pls,  I cant find what to fix here.
> >
> >==
> >root@salt-001:~# salt gluster* cmd.run 'df -h'
> >glusterp2.graywitch.co.nz:
> >Filesystem   Size  Used
> >Avail Use% Mounted on
> >/dev/mapper/centos-root   19G  3.4G
> > 16G  19% /
> >devtmpfs 3.8G 0
> >3.8G   0% /dev
> >tmpfs3.8G   12K
> >3.8G   1% /dev/shm
> >tmpfs3.8G  9.1M
> >3.8G   1% /run
> >tmpfs3.8G 0
> >3.8G   0% /sys/fs/cgroup
> >/dev/mapper/centos-tmp   3.8G   33M
> >3.7G   1% /tmp
> >/dev/mapper/centos-var19G  213M
> > 19G   2% /var
> >/dev/mapper/centos-home   47G   38M
> > 47G   1% /home
> >/dev/mapper/centos-data1 112G   33M
> >112G   1% /data1
> >/dev/mapper/centos-var_lib   9.4G  178M
> >9.2G   2% /var/lib
> >/dev/mapper/vg--gluster--prod--1--2-gluster--prod--1--2  932G  264G
> >668G  29% /bricks/brick1
> >/dev/sda1950M  235M
> >715M  25% /boot
> >tmpfs771M   12K
> >771M   1% /run/user/42
> >glusterp2:gv0/glusterp2/images   932G  273G
> >659G  30% /var/lib/libvirt/images
> >glusterp2:gv0932G  273G
> >659G  30% /isos
> >tmpfs771M   48K
> >771M   1% /run/user/1000
> >tmpfs771M 0
> >771M   0% /run/user/0
> >glusterp1.graywitch.co.nz:
> >   Filesystem Size  Used Avail Use%
> >Mounted on
> > /dev/mapper/centos-root 20G  3.5G   17G  18% /
> >   devtmpfs   3.8G 0  3.8G   0%
> >/dev
> >   tmpfs  3.8G   12K  3.8G   1%
> >/dev/shm
> >   tmpfs  3.8G  9.0M  3.8G   1%
> >/run
> >   tmpfs  3.8G 0  3.8G   0%
> >/sys/fs/cgroup
> >   /dev/sda1  969M  206M  713M  23%
> >/boot
> >   /dev/mapper/centos-home 50G  4.3G   46G   9%
> >/home
> >   /dev/mapper/centos-tmp 3.9G   33M  3.9G   1%
> >/tmp
> >   /dev/mapper/centos-data1   120G   36M  120G   1%
> >/data1
> >   /dev/mapper/vg--gluster--prod1-gluster--prod1  932G  260G  673G  28%
> >/bricks/brick1
> >   /dev/mapper/centos-var  20G  413M   20G   3%
> >/var
> >   /dev/mapper/centos00-var_lib   9.4G  179M  9.2G   2%
> >/var/lib
> >   tmpfs  771M  8.0K  771M   1%
> >/run/user/42
> >   glusterp1:gv0  932G  273G  659G  30%
> >/isos
> >   glusterp1:gv0/glusterp1/images 932G  273G  659G  30%
> >/var/lib/libvirt/images
> >glusterp3.graywitch.co.nz:
> >Filesystem   Size  Used
> >Avail Use% Mounted on
> >/dev/mapper/centos-root   20G  3.5G
> > 17G  18% /
> >devtmpfs 3.8G 0
> >3.8G   0% /dev
> >tmpfs3.8G   12K
> >3.8G   1% /dev/shm
> >tmpfs3.8G  9.0M
> >3.8G   1% /run
> >tmpfs3.8G 0
> >3.8G   0% /sys/fs/cgroup
> >/dev/sda1969M  206M
> >713M  23% /boot
> >/dev/mapper/centos-var20G  206M
> > 20G   2% /var
> >/dev/mapper/centos-tmp   3.9G   33M
> >3.9G   1% /tmp
> >/dev/mapper/centos-home