Re: [Gluster-users] About exchange disk on disperse mode in Gluster 3.7.11

2017-03-19 Thread Serkan Çoban
Stop the process that is using the brick by killing it (if it is not
already dead), then replace the disk, format the new disk, and restart the
brick process by running 'gluster v start vol_name force'. Then check on
the node that the brick process started without problems; a sketch of these
steps follows.
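
A minimal sketch of those steps, assuming the failed brick lives at
/mnt/disk3/brick on an XFS-formatted disk; the device name, mount point,
volume name, and brick path are all illustrative:

    # 1. If the brick process is still running, find its PID and kill it
    gluster volume status vol_name
    kill <brick-pid>    # PID shown in the status output

    # 2. After physically replacing the disk, format and remount it,
    #    and recreate the brick directory
    mkfs.xfs -f /dev/sdX
    mount /dev/sdX /mnt/disk3
    mkdir -p /mnt/disk3/brick

    # 3. Restart the brick process and verify it came up
    gluster volume start vol_name force
    gluster volume status vol_name

    # 4. Optionally trigger a full heal so the disperse volume rebuilds
    #    the new brick's fragments from the surviving bricks
    gluster volume heal vol_name full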



[Gluster-users] About exchange disk on disperse mode in Gluster 3.7.11

2017-03-19 Thread Jiang Wang
Hi all,

I have used 3.7.11 to build a disperse volume configured as 8+4, but
unfortunately one disk has failed. I want to replace it; how can I do that?

Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-19 Thread Deepak Naidu
Thanks, Joe, for your input. I guess comparing client-to-gluster-server IO
performance over MTLS and non-MTLS should give me some idea of the
client/server IO overhead.

Also, any URL related to setting up & configuring MTLS is appreciated.



--
Deepak


Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-19 Thread Joseph Lorenzini
Hi Deepak,

Sorta. I think it depends on what we mean by I/O path and performance.

If we are referring to disk I/O for gluster servers, then no. If we are
referring to the network I/O between a gluster client and server, then yes,
there will by definition be some additional overhead. That, however, is true
of any security layer one chooses for any application, especially a
distributed system. In practice, security of any kind, whether it's
encryption, ACLs, or even iptables, will degrade the performance of an
application. And since distributed systems by definition handle their state
through network I/O, that means security + distributed system = network
latency. There's a reason people say security is where performance goes to
die. :)

Now, all that said, frequently the issue is not whether there will be
network latency, but how much, and does it matter? Moreover, what are the
specific performance requirements for your gluster pool, and have they been
weighed against the costs of meeting those requirements? Additionally, how
does meeting those performance requirements weigh against all your other
requirements, such as having basic network security around a distributed
system?

I would be quite surprised if openssl MTLS were any slower than some other
key-based authentication scheme. Most of the cost of TLS is around the TLS
handshake, which is a one-time hit when the gluster client mounts the
volume. Since the client maintains a persistent TLS connection, most of the
overhead is openssl code performing symmetric encryption, which openssl,
despite all its warts, is really really good at doing really really fast
(understand this is all relative to an arbitrary baseline :). Bottom line:
with modern hardware, the performance impact of MTLS should be negligible.
IMHO, if the performance requirement can't tolerate MTLS, then it's in
practice preventing you from implementing any reasonable security scheme at
all. In that case, you'd be better off just setting up an isolated network
and skipping any type of authentication.

I'd recommend setting up MTLS with gluster and running your performance
tests against it. That will definitively answer your question of whether
the performance is acceptable. The MTLS setup is not that hard (a sketch
follows), and the gluster documentation is reasonable, though it could be
improved (I need to submit some PRs against it). If you have any questions
about setup and configuration, I am sure I can help.
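
For anyone looking for the concrete steps, a minimal sketch of gluster's
documented TLS/MTLS configuration; the volume name and certificate CNs are
illustrative, while the key/cert paths are the defaults gluster looks for:

    # On each node (servers and clients): private key and self-signed
    # certificate; the CN acts as the identity gluster authenticates
    openssl genrsa -out /etc/ssl/glusterfs.key 2048
    openssl req -new -x509 -key /etc/ssl/glusterfs.key \
        -subj "/CN=client1" -days 365 -out /etc/ssl/glusterfs.pem

    # Concatenate every trusted certificate into the CA file on every node
    cat server1.pem server2.pem client1.pem > /etc/ssl/glusterfs.ca

    # Optional: encrypt the management path too (before glusterd starts)
    touch /var/lib/glusterd/secure-access

    # Enable TLS on the I/O path and restrict mounting to known CNs
    gluster volume set myvol client.ssl on
    gluster volume set myvol server.ssl on
    gluster volume set myvol auth.ssl-allow 'client1,server1,server2'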

Joe

On Sat, Mar 18, 2017 at 2:25 PM, Deepak Naidu  wrote:

> Hi Joe, thanks for taking the time to explain. I have a basic set of
> requirements, with IO performance as a key factor; my reply below should
> justify what I am trying to achieve.
>
> >>If I am understanding your use case properly, you want to ensure that a
> client may only mount a gluster volume if and only if it presents a key or
> secret that attests to the client's identity, which the gluster server can
> use to verify that client's identity.
>
> Yes, this is the exact use case for my requirements.
>
>
>
> >>That's exactly what gluster MTLS is doing since the gluster server
> performs chain-of-trust validation on the client's leaf certificate.
>
> That's good, but my confusion here is: does this MTLS also encrypt IO
> traffic like TLS? If yes, then it's not what I am looking for. The reason
> is that IO encryption/decryption is extra overhead for my use case, as IO
> performance is also a factor in why we're looking at GlusterFS, unless my
> understanding is incorrect and IO encryption has no overhead.
>
>
>
> >> I don't understand why I/O path encryption is something you want to
> avoid -- seems like an essential part of basic network security that you
> get for "free" with the authentication.
>
> If I understand the term IO path encryption correctly, all the storage IO
> will go through the extra latency of encryption & decryption, which is not
> needed for my requirements, as it introduces extra IO latency; this is why
> I am trying to avoid IO path encryption & just need basic secret-based
> authentication.
>
>
>
>
> --
> Deepak
>
> > On Mar 18, 2017, at 10:46 AM, Joseph Lorenzini <jalo...@gmail.com> wrote:
> >
> > I am a little confused about what you are trying to accomplish here. If I
> am understanding your use case properly, you want to ensure that a client
> may mount a gluster volume if and only if it presents a key or secret
> that attests to the client's identity, which the gluster server can use to
> verify that client's identity. That's exactly what gluster MTLS is doing,
> since the gluster server performs chain-of-trust validation on the client's
> leaf certificate.
> >
> > Of course this will necessarily force encryption of the I/O path, since
> it's TLS. I don't understand why I/O path encryption is something you want
> to avoid -- it seems like an essential part of basic network security that
> you get for "free" with the authentication. It is true that if you want the
> key-based authentication of a gluster c

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-19 Thread Mahdi Adnan
Thank you for your email, mate.


Yes, I'm aware of this, but to save costs I chose replica 2; this cluster is
all flash.

In version 3.7.x I had issues with the ping timeout: if one host went down
for a few seconds, the whole cluster hung and became unavailable. To avoid
this I adjusted the ping timeout to 5 seconds.

As for choosing Ganesha over gfapi: VMware does not support Gluster (FUSE or
gfapi), so I'm stuck with NFS for this volume.

The other volume is mounted using gfapi in an oVirt cluster.




--

Respectfully
Mahdi A. Mahdi



Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-19 Thread Krutika Dhananjay
While I'm still going through the logs, just wanted to point out a couple
of things:

1. It is recommended that you use 3-way replication (replica count 3) for
the VM store use case.
2. network.ping-timeout at 5 seconds is way too low. Please change it to 30
(example below).
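
The suggested change would presumably be applied along these lines (volume
name taken from the volume info later in the thread):

    # Raise network.ping-timeout from 5 seconds to the recommended 30
    gluster volume set vmware2 network.ping-timeout 30

    # Confirm the new value
    gluster volume get vmware2 network.ping-timeout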

Is there any specific reason for using NFS-Ganesha over gfapi/FUSE?

Will get back with anything else I might find or more questions if I have
any.

-Krutika

On Sun, Mar 19, 2017 at 2:36 PM, Mahdi Adnan 
wrote:

> Thanks, mate.
>
> Kindly check the attachment.
>
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-19 Thread Krutika Dhananjay
In that case could you share the ganesha-gfapi logs?

-Krutika

On Sun, Mar 19, 2017 at 12:13 PM, Mahdi Adnan 
wrote:

> I have two volumes: one is mounted using libgfapi for the oVirt mount; the
> other is exported via NFS-Ganesha for VMware, which is the one I'm
> testing now.
>
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> --
> *From:* Krutika Dhananjay 
> *Sent:* Sunday, March 19, 2017 8:02:19 AM
>
> *To:* Mahdi Adnan
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption
>
>
>
> On Sat, Mar 18, 2017 at 10:36 PM, Mahdi Adnan 
> wrote:
>
>> Kindly check the attached new log file; I don't know if it's helpful or
>> not, but I couldn't find the log with the name you just described.
>>
> No. Are you using FUSE or libgfapi for accessing the volume? Or is it NFS?
>
> -Krutika
>
>>
>>
>> --
>>
>> Respectfully
>> *Mahdi A. Mahdi*
>>
>> --
>> *From:* Krutika Dhananjay 
>> *Sent:* Saturday, March 18, 2017 6:10:40 PM
>>
>> *To:* Mahdi Adnan
>> *Cc:* gluster-users@gluster.org
>> *Subject:* Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption
>>
>> mnt-disk11-vmware2.log seems like a brick log. Could you attach the fuse
>> mount logs? They should be right under the /var/log/glusterfs/ directory,
>> named after the mount point, only hyphenated (e.g., a volume mounted at
>> /mnt/vmware2 would log to mnt-vmware2.log).
>>
>> -Krutika
>>
>> On Sat, Mar 18, 2017 at 7:27 PM, Mahdi Adnan 
>> wrote:
>>
>>> Hello Krutika,
>>>
>>>
>>> Kindly check the attached logs.
>>>
>>>
>>>
>>> --
>>>
>>> Respectfully
>>> *Mahdi A. Mahdi*
>>>
>>> --
>>> *From:* Krutika Dhananjay 
>>> *Sent:* Saturday, March 18, 2017 3:29:03 PM
>>> *To:* Mahdi Adnan
>>> *Cc:* gluster-users@gluster.org
>>> *Subject:* Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption
>>>
>>> Hi Mahdi,
>>>
>>> Could you attach mount, brick and rebalance logs?
>>>
>>> -Krutika
>>>
>>> On Sat, Mar 18, 2017 at 12:14 AM, Mahdi Adnan 
>>> wrote:
>>>
 Hi,

 I have upgraded to Gluster 3.8.10 today and ran the add-brick procedure
 on a volume containing a few VMs.
 After the rebalance completed, I rebooted the VMs; some ran just fine and
 others crashed.
 Windows boots into recovery mode, and Linux throws xfs errors and does not
 boot.
 I ran the test again and it happened just like the first time, but I
 noticed that only VMs doing disk IO are affected by this bug.
 The VMs that were powered off started fine, and even the md5 of the disk
 file did not change after the rebalance.

 Can anyone else confirm this?
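
 For reference, the add-brick and rebalance procedure described above would
 presumably look something like the following; the exact bricks added are an
 assumption (paths follow the pattern in the volume info below):

     # Add another replica pair to the distributed-replicate volume
     gluster volume add-brick vmware2 \
         gluster01:/mnt/disk12/vmware2 gluster03:/mnt/disk12/vmware2

     # Move existing data onto the new bricks, then watch progress
     gluster volume rebalance vmware2 start
     gluster volume rebalance vmware2 status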


 Volume info:

 Volume Name: vmware2
 Type: Distributed-Replicate
 Volume ID: 02328d46-a285-4533-aa3a-fb9bfeb688bf
 Status: Started
 Snapshot Count: 0
 Number of Bricks: 22 x 2 = 44
 Transport-type: tcp
 Bricks:
 Brick1: gluster01:/mnt/disk1/vmware2
 Brick2: gluster03:/mnt/disk1/vmware2
 Brick3: gluster02:/mnt/disk1/vmware2
 Brick4: gluster04:/mnt/disk1/vmware2
 Brick5: gluster01:/mnt/disk2/vmware2
 Brick6: gluster03:/mnt/disk2/vmware2
 Brick7: gluster02:/mnt/disk2/vmware2
 Brick8: gluster04:/mnt/disk2/vmware2
 Brick9: gluster01:/mnt/disk3/vmware2
 Brick10: gluster03:/mnt/disk3/vmware2
 Brick11: gluster02:/mnt/disk3/vmware2
 Brick12: gluster04:/mnt/disk3/vmware2
 Brick13: gluster01:/mnt/disk4/vmware2
 Brick14: gluster03:/mnt/disk4/vmware2
 Brick15: gluster02:/mnt/disk4/vmware2
 Brick16: gluster04:/mnt/disk4/vmware2
 Brick17: gluster01:/mnt/disk5/vmware2
 Brick18: gluster03:/mnt/disk5/vmware2
 Brick19: gluster02:/mnt/disk5/vmware2
 Brick20: gluster04:/mnt/disk5/vmware2
 Brick21: gluster01:/mnt/disk6/vmware2
 Brick22: gluster03:/mnt/disk6/vmware2
 Brick23: gluster02:/mnt/disk6/vmware2
 Brick24: gluster04:/mnt/disk6/vmware2
 Brick25: gluster01:/mnt/disk7/vmware2
 Brick26: gluster03:/mnt/disk7/vmware2
 Brick27: gluster02:/mnt/disk7/vmware2
 Brick28: gluster04:/mnt/disk7/vmware2
 Brick29: gluster01:/mnt/disk8/vmware2
 Brick30: gluster03:/mnt/disk8/vmware2
 Brick31: gluster02:/mnt/disk8/vmware2
 Brick32: gluster04:/mnt/disk8/vmware2
 Brick33: gluster01:/mnt/disk9/vmware2
 Brick34: gluster03:/mnt/disk9/vmware2
 Brick35: gluster02:/mnt/disk9/vmware2
 Brick36: gluster04:/mnt/disk9/vmware2
 Brick37: gluster01:/mnt/disk10/vmware2
 Brick38: gluster03:/mnt/disk10/vmware2
 Brick39: gluster02:/mnt/disk10/vmware2
 Brick40: gluster04:/mnt/disk10/vmware2
 Brick41: gluster01:/mnt/disk11/vmware2
 Brick42: gluster03:/mnt/disk11/vmware2
 Brick43: gluster02:/mnt/disk11/vmware2
 Brick44: gluster04:/mnt/disk11/vmware2
 Options Reconfigured:
 cluster.server-quorum-type: server
 nfs.disable: on
 performance.readdir-ahead: on
 transport.address-family: inet
 performance.quick-read: off
 performance.read-ahead: off
 per