Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 Thread Mahdi Adnan
I have two volumes: one is mounted using libgfapi for the oVirt mount, and the
other one is exported via NFS-Ganesha for VMware, which is the one I'm testing now.


--

Respectfully
Mahdi A. Mahdi


From: Krutika Dhananjay 
Sent: Sunday, March 19, 2017 8:02:19 AM
To: Mahdi Adnan
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption



On Sat, Mar 18, 2017 at 10:36 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote:

Kindly check the attached new log file. I don't know if it's helpful or not,
but I couldn't find the log with the name you just described.

No. Are you using FUSE or libgfapi for accessing the volume? Or is it NFS?

-Krutika


--

Respectfully
Mahdi A. Mahdi


From: Krutika Dhananjay <kdhan...@redhat.com>
Sent: Saturday, March 18, 2017 6:10:40 PM

To: Mahdi Adnan
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

mnt-disk11-vmware2.log seems like a brick log. Could you attach the fuse mount
logs? They should be right under the /var/log/glusterfs/ directory, named after
the mount point name, only hyphenated.
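
For illustration, a minimal sketch assuming the volume were FUSE-mounted at
/mnt/vmware2 on a test client (the actual mount point in this setup may differ):

mount -t glusterfs gluster01:/vmware2 /mnt/vmware2
ls -l /var/log/glusterfs/mnt-vmware2.log   # "/mnt/vmware2" becomes "mnt-vmware2.log"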

-Krutika

On Sat, Mar 18, 2017 at 7:27 PM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote:

Hello Krutika,


Kindly check the attached logs.


--

Respectfully
Mahdi A. Mahdi


From: Krutika Dhananjay <kdhan...@redhat.com>
Sent: Saturday, March 18, 2017 3:29:03 PM
To: Mahdi Adnan
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

Hi Mahdi,

Could you attach mount, brick and rebalance logs?

-Krutika

On Sat, Mar 18, 2017 at 12:14 AM, Mahdi Adnan <mahdi.ad...@outlook.com> wrote:

Hi,

I upgraded to Gluster 3.8.10 today and ran the add-brick procedure on a
volume containing a few VMs.
After the rebalance completed, I rebooted the VMs; some of them ran just
fine, and others just crashed.
Windows boots to recovery mode, and Linux throws XFS errors and does not boot.
I ran the test again and it happened just like the first time, but I noticed
that only VMs doing disk I/O are affected by this bug.
The VMs that were powered off started fine, and even the md5 of their disk
files did not change after the rebalance.

Can anyone else confirm this?
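
For reference, a rough way to check for silent image corruption is to checksum
the disk images from a client mount before and after the rebalance; a minimal
sketch, assuming the volume were mounted at /mnt/vmware2 and the images were
.vmdk files (both assumptions -- adjust to the real paths):

find /mnt/vmware2 -type f -name '*.vmdk' | sort | xargs md5sum > /root/md5.before
# ... add-brick, fix-layout and rebalance happen here ...
gluster volume rebalance vmware2 status    # wait until the rebalance reports completed
find /mnt/vmware2 -type f -name '*.vmdk' | sort | xargs md5sum > /root/md5.after
diff /root/md5.before /root/md5.after      # any output points at a changed image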


Volume info:

Volume Name: vmware2
Type: Distributed-Replicate
Volume ID: 02328d46-a285-4533-aa3a-fb9bfeb688bf
Status: Started
Snapshot Count: 0
Number of Bricks: 22 x 2 = 44
Transport-type: tcp
Bricks:
Brick1: gluster01:/mnt/disk1/vmware2
Brick2: gluster03:/mnt/disk1/vmware2
Brick3: gluster02:/mnt/disk1/vmware2
Brick4: gluster04:/mnt/disk1/vmware2
Brick5: gluster01:/mnt/disk2/vmware2
Brick6: gluster03:/mnt/disk2/vmware2
Brick7: gluster02:/mnt/disk2/vmware2
Brick8: gluster04:/mnt/disk2/vmware2
Brick9: gluster01:/mnt/disk3/vmware2
Brick10: gluster03:/mnt/disk3/vmware2
Brick11: gluster02:/mnt/disk3/vmware2
Brick12: gluster04:/mnt/disk3/vmware2
Brick13: gluster01:/mnt/disk4/vmware2
Brick14: gluster03:/mnt/disk4/vmware2
Brick15: gluster02:/mnt/disk4/vmware2
Brick16: gluster04:/mnt/disk4/vmware2
Brick17: gluster01:/mnt/disk5/vmware2
Brick18: gluster03:/mnt/disk5/vmware2
Brick19: gluster02:/mnt/disk5/vmware2
Brick20: gluster04:/mnt/disk5/vmware2
Brick21: gluster01:/mnt/disk6/vmware2
Brick22: gluster03:/mnt/disk6/vmware2
Brick23: gluster02:/mnt/disk6/vmware2
Brick24: gluster04:/mnt/disk6/vmware2
Brick25: gluster01:/mnt/disk7/vmware2
Brick26: gluster03:/mnt/disk7/vmware2
Brick27: gluster02:/mnt/disk7/vmware2
Brick28: gluster04:/mnt/disk7/vmware2
Brick29: gluster01:/mnt/disk8/vmware2
Brick30: gluster03:/mnt/disk8/vmware2
Brick31: gluster02:/mnt/disk8/vmware2
Brick32: gluster04:/mnt/disk8/vmware2
Brick33: gluster01:/mnt/disk9/vmware2
Brick34: gluster03:/mnt/disk9/vmware2
Brick35: gluster02:/mnt/disk9/vmware2
Brick36: gluster04:/mnt/disk9/vmware2
Brick37: gluster01:/mnt/disk10/vmware2
Brick38: gluster03:/mnt/disk10/vmware2
Brick39: gluster02:/mnt/disk10/vmware2
Brick40: gluster04:/mnt/disk10/vmware2
Brick41: gluster01:/mnt/disk11/vmware2
Brick42: gluster03:/mnt/disk11/vmware2
Brick43: gluster02:/mnt/disk11/vmware2
Brick44: gluster04:/mnt/disk11/vmware2
Options Reconfigured:
cluster.server-quorum-type: server
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
features.shard: on
cluster.data-self-heal-algorithm: full
features.cache-invalidation: on
ganesha.enable: on
features.shard-block-size: 256MB
client.event-threads: 2
server.event-threads: 2
cluster.favorite-child-policy: size
storage.build-pgfid: off
network.ping-timeout: 5
cluster.enable-shared-storage: enable
nfs-ganesha: enable
cluster.server-quorum-ratio: 51%


Adding bricks:
gluster volume add-brick vmware2

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 Thread Dev Sidious
Unfortunately, Gandalf is precisely right with the point he made on data
consistency in GlusterFS.

> If gluster isn't able to ensure data consistency when doing its
> primary role, scaling up a storage, I'm sorry but it can't be
> considered "enterprise" ready or production ready.

In my short experience with GlusterFS I have known it to fail PRECISELY
on data consistency (data representation consistency, to be more
precise). Namely:

a) files partially replicated, or not replicated at all, due to
b) errors such as "Transport endpoint is not connected"

with more or less random frequency.

I solved all these by disabling SSL. Since I disabled SSL, the system
APPEARS to be reliable.
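
For reference, the SSL knobs involved are just volume options plus the
management-encryption marker file; a minimal sketch, assuming a volume named
myvol that can be stopped briefly:

gluster volume stop myvol
gluster volume set myvol client.ssl off
gluster volume set myvol server.ssl off
gluster volume start myvol
rm -f /var/lib/glusterd/secure-access   # only if management encryption had been enabled; restart glusterd afterwards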

To me, a system exhibiting such behavior is not a solid system.

Whether it's "production ready" or not, now that's a more subjective topic,
and I will leave it to the armchair computer scientists and the
philosophers.




On 3/19/2017 12:53 AM, Krutika Dhananjay wrote:
> 
> 
> On Sat, Mar 18, 2017 at 11:15 PM, Gandalf Corvotempesta wrote:
> 
> Krutika, it wasn't an attack directly to you.
> It wasn't an attack at all.
> 
> 
> Gluster is a "SCALE-OUT" software defined storage, the folllowing is
> wrote in the middle of the homepage:
> "GlusterFS is a scalable network filesystem"
> 
> So, scaling a cluster is one of the primary goal of gluster.
> 
> A critical bug that prevent gluster from being scaled without loosing
> data was discovered 1 year ago, and took 1 year to be fixed. 
> 
> 
> If gluster isn't able to ensure data consistency when doing its
> primary role, scaling up a storage, I'm sorry but it can't be
> considered "enterprise" ready or production ready.
> 
> 
> That's not entirely true. VM use-case is just one of the many workloads
> users
> use Gluster for. I think I've clarified this before. The bug was in
> dht-shard interaction.
> And shard is *only* supported in VM use-case as of today. This means that
> scaling out has been working fine on all but the VM use-case.
> That doesn't mean that Gluster is not production-ready. At least users
> who've deployed Gluster
> in non-VM use-cases haven't complained of add-brick not working in the
> recent past.
> 
> 
> -Krutika
>  
> 
> Maybe SOHO for small offices or home users, but in enterprises, data
> consistency and reliability are the most important thing, and gluster
> isn't able to guarantee this even when
> doing a very basic routine procedure that should be considered as the
> basis of the whole gluster project (as written on gluster's homepage)
> 
> 
> 2017-03-18 14:21 GMT+01:00 Krutika Dhananjay:
> >
> >
> > On Sat, Mar 18, 2017 at 3:18 PM, Gandalf Corvotempesta wrote:
> >>
> >> 2017-03-18 2:09 GMT+01:00 Lindsay Mathieson <lindsay.mathie...@gmail.com>:
> >> > Concerning, this was supposed to be fixed in 3.8.10
> >>
> >> Exactly. https://bugzilla.redhat.com/show_bug.cgi?id=1387878
> 
> >> Now let's see how much time they require to fix another CRITICAL bug.
> >>
> >> I'm really curious.
> >
> >
> > Hey Gandalf!
> >
> > Let's see. There have been plenty of occasions where I've sat and
> worked on
> > users' issues on weekends.
> > And then again, I've got a life too outside of work (or at least I'm
> > supposed to), you know.
> > (And hey you know what! Today is Saturday and I'm sitting here and
> > responding to your mail and collecting information
> > on Mahdi's issue. Nobody asked me to look into it. I checked the
> mail and I
> > had a choice to ignore it and not look into it until Monday.)
> >
> > Is there a genuine problem Mahdi is facing? Without a doubt!
> >
> > Got a constructive feedback to give? Please do.
> > Do you want to give back to the community and help improve
> GlusterFS? There
> > are plenty of ways to do that.
> > One of them is testing out the releases and providing feedback.
> Sharding
> > wouldn't have worked today, if not for Lindsay's timely
> > and regular feedback in several 3.7.x releases.
> >
> > But this kind of criticism doesn't help.
> >
> > Also, spending time on users' issues is only one of the many
> > responsibilities we have as developers.
> > So what you see on mailing lists is just the tip of the iceberg.
> >
> > I have personally tried several times to recreate the add-brick
> bug on 3
> > machines I borrowed from Kaleb. I haven't had success in
> recreating it.
> > Reproducing VM-related bugs, in my experience, wasn't easy. I
> don't use
> > Proxmox. Lindsay and Kevin did. There are a myriad qemu options
> used when
> > launching vms. Different VM management project

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 Thread Krutika Dhananjay
On Sat, Mar 18, 2017 at 10:36 PM, Mahdi Adnan 
wrote:

> Kindly check the attached new log file. I don't know if it's helpful or
> not, but I couldn't find the log with the name you just described.
>
No. Are you using FUSE or libgfapi for accessing the volume? Or is it NFS?

-Krutika

>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> --
> *From:* Krutika Dhananjay 
> *Sent:* Saturday, March 18, 2017 6:10:40 PM
>
> *To:* Mahdi Adnan
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption
>
> mnt-disk11-vmware2.log seems like a brick log. Could you attach the fuse
> mount logs? It should be right under /var/log/glusterfs/ directory
> named after the mount point name, only hyphenated.
>
> -Krutika
>
> On Sat, Mar 18, 2017 at 7:27 PM, Mahdi Adnan 
> wrote:
>
>> Hello Krutika,
>>
>>
>> Kindly, check the attached logs.
>>
>>
>>
>> --
>>
>> Respectfully
>> *Mahdi A. Mahdi*
>>
>> --
>> *From:* Krutika Dhananjay 
>> *Sent:* Saturday, March 18, 2017 3:29:03 PM
>> *To:* Mahdi Adnan
>> *Cc:* gluster-users@gluster.org
>> *Subject:* Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption
>>
>> Hi Mahdi,
>>
>> Could you attach mount, brick and rebalance logs?
>>
>> -Krutika
>>
>> On Sat, Mar 18, 2017 at 12:14 AM, Mahdi Adnan 
>> wrote:
>>
>>> Hi,
>>>
>>> I upgraded to Gluster 3.8.10 today and ran the add-brick procedure
>>> on a volume containing a few VMs.
>>> After the rebalance completed, I rebooted the VMs; some of them ran
>>> just fine, and others just crashed.
>>> Windows boots to recovery mode, and Linux throws XFS errors and does not
>>> boot.
>>> I ran the test again and it happened just like the first time, but I
>>> noticed that only VMs doing disk I/O are affected by this bug.
>>> The VMs that were powered off started fine, and even the md5 of their disk
>>> files did not change after the rebalance.
>>>
>>> Can anyone else confirm this?
>>>
>>>
>>> Volume info:
>>>
>>> Volume Name: vmware2
>>> Type: Distributed-Replicate
>>> Volume ID: 02328d46-a285-4533-aa3a-fb9bfeb688bf
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 22 x 2 = 44
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: gluster01:/mnt/disk1/vmware2
>>> Brick2: gluster03:/mnt/disk1/vmware2
>>> Brick3: gluster02:/mnt/disk1/vmware2
>>> Brick4: gluster04:/mnt/disk1/vmware2
>>> Brick5: gluster01:/mnt/disk2/vmware2
>>> Brick6: gluster03:/mnt/disk2/vmware2
>>> Brick7: gluster02:/mnt/disk2/vmware2
>>> Brick8: gluster04:/mnt/disk2/vmware2
>>> Brick9: gluster01:/mnt/disk3/vmware2
>>> Brick10: gluster03:/mnt/disk3/vmware2
>>> Brick11: gluster02:/mnt/disk3/vmware2
>>> Brick12: gluster04:/mnt/disk3/vmware2
>>> Brick13: gluster01:/mnt/disk4/vmware2
>>> Brick14: gluster03:/mnt/disk4/vmware2
>>> Brick15: gluster02:/mnt/disk4/vmware2
>>> Brick16: gluster04:/mnt/disk4/vmware2
>>> Brick17: gluster01:/mnt/disk5/vmware2
>>> Brick18: gluster03:/mnt/disk5/vmware2
>>> Brick19: gluster02:/mnt/disk5/vmware2
>>> Brick20: gluster04:/mnt/disk5/vmware2
>>> Brick21: gluster01:/mnt/disk6/vmware2
>>> Brick22: gluster03:/mnt/disk6/vmware2
>>> Brick23: gluster02:/mnt/disk6/vmware2
>>> Brick24: gluster04:/mnt/disk6/vmware2
>>> Brick25: gluster01:/mnt/disk7/vmware2
>>> Brick26: gluster03:/mnt/disk7/vmware2
>>> Brick27: gluster02:/mnt/disk7/vmware2
>>> Brick28: gluster04:/mnt/disk7/vmware2
>>> Brick29: gluster01:/mnt/disk8/vmware2
>>> Brick30: gluster03:/mnt/disk8/vmware2
>>> Brick31: gluster02:/mnt/disk8/vmware2
>>> Brick32: gluster04:/mnt/disk8/vmware2
>>> Brick33: gluster01:/mnt/disk9/vmware2
>>> Brick34: gluster03:/mnt/disk9/vmware2
>>> Brick35: gluster02:/mnt/disk9/vmware2
>>> Brick36: gluster04:/mnt/disk9/vmware2
>>> Brick37: gluster01:/mnt/disk10/vmware2
>>> Brick38: gluster03:/mnt/disk10/vmware2
>>> Brick39: gluster02:/mnt/disk10/vmware2
>>> Brick40: gluster04:/mnt/disk10/vmware2
>>> Brick41: gluster01:/mnt/disk11/vmware2
>>> Brick42: gluster03:/mnt/disk11/vmware2
>>> Brick43: gluster02:/mnt/disk11/vmware2
>>> Brick44: gluster04:/mnt/disk11/vmware2
>>> Options Reconfigured:
>>> cluster.server-quorum-type: server
>>> nfs.disable: on
>>> performance.readdir-ahead: on
>>> transport.address-family: inet
>>> performance.quick-read: off
>>> performance.read-ahead: off
>>> performance.io-cache: off
>>> performance.stat-prefetch: off
>>> cluster.eager-lock: enable
>>> network.remote-dio: enable
>>> features.shard: on
>>> cluster.data-self-heal-algorithm: full
>>> features.cache-invalidation: on
>>> ganesha.enable: on
>>> features.shard-block-size: 256MB
>>> client.event-threads: 2
>>> server.event-threads: 2
>>> cluster.favorite-child-policy: size
>>> storage.build-pgfid: off
>>> network.ping-timeout: 5
>>> cluster.enable-shared-storage: enable
>>> nfs-ganesha: enable
>>> cluster.server-quorum-ratio: 51%
>>>
>>>
>>> Adding bricks:
>>> gluster volume add-brick vmware2 replica 2 gluster01:/mnt/disk11/vmware2
>>> gluster03:/mnt/disk11/vmware2 gluster02:/mnt/disk11/vmware2
>>> gl

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 Thread Krutika Dhananjay
On Sat, Mar 18, 2017 at 11:15 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> Krutika, it wasn't an attack directly to you.
> It wasn't an attack at all.
>

> Gluster is a "SCALE-OUT" software defined storage, the folllowing is
> wrote in the middle of the homepage:
> "GlusterFS is a scalable network filesystem"
>
> So, scaling a cluster is one of the primary goal of gluster.
>
> A critical bug that prevent gluster from being scaled without loosing
> data was discovered 1 year ago, and took 1 year to be fixed.
>

> If gluster isn't able to ensure data consistency when doing its
> primary role, scaling up a storage, I'm sorry but it can't be
> considered "enterprise" ready or production ready.
>

That's not entirely true. VM use-case is just one of the many workloads
users
use Gluster for. I think I've clarified this before. The bug was in
dht-shard interaction.
And shard is *only* supported in VM use-case as of today. This means that
scaling out has been working fine on all but the VM use-case.
That doesn't mean that Gluster is not production-ready. At least users
who've deployed Gluster
in non-VM use-cases haven't complained of add-brick not working in the
recent past.


-Krutika


> Maybe SOHO for small offices or home users, but in enterprises, data
> consistency and reliability are the most important thing, and gluster
> isn't able to guarantee this even when
> doing a very basic routine procedure that should be considered as the
> basis of the whole gluster project (as written on gluster's homepage)
>
>
> 2017-03-18 14:21 GMT+01:00 Krutika Dhananjay :
> >
> >
> > On Sat, Mar 18, 2017 at 3:18 PM, Gandalf Corvotempesta
> >  wrote:
> >>
> >> 2017-03-18 2:09 GMT+01:00 Lindsay Mathieson <
> lindsay.mathie...@gmail.com>:
> >> > Concerning, this was supposed to be fixed in 3.8.10
> >>
> >> Exactly. https://bugzilla.redhat.com/show_bug.cgi?id=1387878
> >> Now let's see how much time they require to fix another CRITICAL bug.
> >>
> >> I'm really curious.
> >
> >
> > Hey Gandalf!
> >
> > Let's see. There have been plenty of occasions where I've sat and worked
> on
> > users' issues on weekends.
> > And then again, I've got a life too outside of work (or at least I'm
> > supposed to), you know.
> > (And hey you know what! Today is Saturday and I'm sitting here and
> > responding to your mail and collecting information
> > on Mahdi's issue. Nobody asked me to look into it. I checked the mail
> and I
> > had a choice to ignore it and not look into it until Monday.)
> >
> > Is there a genuine problem Mahdi is facing? Without a doubt!
> >
> > Got a constructive feedback to give? Please do.
> > Do you want to give back to the community and help improve GlusterFS?
> There
> > are plenty of ways to do that.
> > One of them is testing out the releases and providing feedback. Sharding
> > wouldn't have worked today, if not for Lindsay's timely
> > and regular feedback in several 3.7.x releases.
> >
> > But this kind of criticism doesn't help.
> >
> > Also, spending time on users' issues is only one of the many
> > responsibilities we have as developers.
> > So what you see on mailing lists is just the tip of the iceberg.
> >
> > I have personally tried several times to recreate the add-brick bug on 3
> > machines I borrowed from Kaleb. I haven't had success in recreating it.
> > Reproducing VM-related bugs, in my experience, wasn't easy. I don't use
> > Proxmox. Lindsay and Kevin did. There are a myriad qemu options used when
> > launching vms. Different VM management projects (ovirt/Proxmox) use
> > different defaults for these options. There are too many variables to be
> > considered
> > when debugging or trying to simulate the users' test.
> >
> > It's why I asked for Mahdi's help before 3.8.10 was out for feedback on
> the
> > fix:
> > http://lists.gluster.org/pipermail/gluster-users/2017-
> February/030112.html
> >
> > Alright. That's all I had to say.
> >
> > Happy weekend to you!
> >
> > -Krutika
> >
> >> ___
> >> Gluster-users mailing list
> >> Gluster-users@gluster.org
> >> http://lists.gluster.org/mailman/listinfo/gluster-users
> >
> >
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-18 Thread Deepak Naidu
Hi Joe, thanks for taking the time to explain. I have a basic set of
requirements, with I/O performance as a key factor; my replies below should
clarify what I am trying to achieve.

>>If I am understanding your use case properly, you want to ensure that a 
>>client may only mount a gluster volume if and only if it presents a key or 
>>secret that attests to the client's identity, which the gluster server can 
>>use to verify that client's identity. 

Yes, this is the exact use case for my requirements.



>>That's exactly what gluster MTLS is doing since the gluster server performs 
>>chain-of-trust validation on the client's leaf certificate.

That's good, but my confusion here is whether this MTLS also encrypts I/O
traffic like TLS does. If yes, then it's not what I am looking for. The reason
is that I/O encryption/decryption is extra overhead for my use case, as I/O
performance is also a factor in why we're looking at GlusterFS -- unless my
understanding is incorrect and I/O encryption has no overhead.



>> I don't understand why I/O path encryption is something you want to avoid -- 
>> seems like an essential part of basic network security that you get for 
>> "free" with the authentication. 

If I understand the term I/O path encryption correctly, all storage I/O will
go through the extra latency of encryption & decryption, which is not needed
for my requirements since it adds extra I/O latency. That is why I am trying
to avoid I/O path encryption & just need basic secret-based authentication.
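
For reference, the actual cost of I/O path encryption can be measured directly
rather than assumed; a rough sketch on a test client mount (assuming /mnt/testvol;
run once with client.ssl/server.ssl on and once with them off, then compare):

dd if=/dev/zero of=/mnt/testvol/ddtest bs=1M count=1024 conv=fsync   # write throughput
dd if=/mnt/testvol/ddtest of=/dev/null bs=1M                         # read throughput (beware client-side caching)
rm -f /mnt/testvol/ddtest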




--
Deepak

> On Mar 18, 2017, at 10:46 AM, Joseph Lorenzini  wrote:
> 
> I am little confused about what you are trying to accomplish here. If I am 
> understanding your use case properly, you want to ensure that a client may 
> only mount a gluster volume if and only if it presents a key or secret that 
> attests to the client's identity, which the gluster server can use to verify 
> that client's identity. That's exactly what gluster MTLS is doing since the 
> gluster server performs chain-of-trust validation on the client's leaf 
> certificate.
> 
> Of course this will necessarily force encryption of the I/O path since its 
> TLS. I don't understand why I/O path encryption is something you want to 
> avoid -- seems like an essential part of basic network security that you get 
> for "free" with the authentication. It is true that if you want the key-based 
> authentication of a gluster client, you will need gluster MTLS. You could 
> treat encryption as the "cost" of getting authentication if you will.
> 
> Now I am personally pretty negative on X.509 and chain-of-trust in general, 
> since the trust model has been proven to not scale and is frequently broken 
> by malicious and incompetent CAs. I also think its a completely inappropriate 
> security model for something like gluster where all endpoints are known and 
> controlled by a single entity, forcing a massive amount of unnecessary 
> complexity with certificate management with no real added security. I have 
> thought about making a feature request that gluster support a simple 
> public-key encryption that's implemented like SSH. But all that said, MTLS is 
> a well-tested, well known security protocol and the gluster team built it on 
> top of openssl so it does get the security job done in an acceptable way. The 
> fact that the I/O path is encrypted is not the thing that bothers me about 
> the implementation though.
---
This email message is for the sole use of the intended recipient(s) and may 
contain
confidential information.  Any unauthorized review, use, disclosure or 
distribution
is prohibited.  If you are not the intended recipient, please contact the 
sender by
reply email and destroy all copies of the original message.
---
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-18 Thread Joseph Lorenzini
Hi Deepak,

I am a little confused about what you are trying to accomplish here. If I am
understanding your use case properly, you want to ensure that a client may
only mount a gluster volume if and only if it presents a key or secret that
attests to the client's identity, which the gluster server can use to
verify that client's identity. That's exactly what gluster MTLS is doing
since the gluster server performs chain-of-trust validation on the client's
leaf certificate.

Of course this will necessarily force encryption of the I/O path since its
TLS. I don't understand why I/O path encryption is something you want to
avoid -- seems like an essential part of basic network security that you
get for "free" with the authentication. It is true that if you want the
key-based authentication of a gluster client, you will need gluster MTLS.
You could treat encryption as the "cost" of getting authentication if you
will.

Now I am personally pretty negative on X.509 and chain-of-trust in general,
since the trust model has been proven to not scale and is frequently broken
by malicious and incompetent CAs. I also think it's a completely
inappropriate security model for something like gluster where all endpoints
are known and controlled by a single entity, forcing a massive amount of
unnecessary complexity with certificate management with no real added
security. I have thought about making a feature request that gluster
support a simple public-key encryption that's implemented like SSH. But all
that said, MTLS is a well-tested, well known security protocol and the
gluster team built it on top of openssl so it does get the security job
done in an acceptable way. The fact that the I/O path is encrypted is not
the thing that bothers me about the implementation though.


Joe

On Sat, Mar 18, 2017 at 11:57 AM, Deepak Naidu  wrote:

> Thanks Joseph for info.
>
> >>In addition, gluster uses MTLS (each endpoint validates the other's
> chain-of-trust), so you get authentication as well.
>
> Does it only do authentication of mounts? I am not interested at this
> moment in I/O path encryption, only in authentication.
>
> >>you can set the auth.allow and auth.reject options to whitelist and
> blacklist clients based on their source IPs.
>
> I have used this, but unfortunately I don't see IP-based / host-based ACLs as
> a mature method, unless GlusterFS supports Kerberos completely. The reason I
> am looking for key- or secret-based secured mounts is that an entire subnet
> will be granted to the service, and the more elegant way to allow only the
> clients on that subnet to do a gluster mount would be for them to use
> keys/secrets, since a client might get a different IP on its next
> cycle/reboot. I can find a workaround related to IP, but it seems really
> weird that gluster only uses SSL to encrypt I/O traffic and does not use the
> same for authenticating mounts.
>
>
>
> --
> Deepak
>
> > On Mar 18, 2017, at 9:14 AM, Joseph Lorenzini  wrote:
> >
> >
> > Hi Deepak,
> >
> > Here's the TLDR
> >
> > If you don't want the I/O path to be encrypted but you want to control
> access to a gluster volume, you can set the auth.allow and auth.reject
> options to whitelist and blacklist clients based on their source IPs.
> There's also always iptables rules if you don't want to do that.
> >
> Note this only addresses a client's (i.e., a system where multiple unix users
> can exist) ability to mount a gluster volume. It does not address access by
> different unix users on that mounted gluster volume -- that's a separate
> and much more complicated issue. I can elaborate on that more if you want.
> >
> > Here's the longer explanation on the TLS piece.
> >
> > So there are a couple different security layers here. TLS will in fact
> encrypt the I/O path -- that's one of its key selling points which is to
> ensure confidentiality of the data sent between the gluster server and
> gluster client. In addition, gluster uses MTLS (each endpoint validate's
> the other's chain-of-trust), so you get authentication as well.
> Additionally, if you set the auth.ssl-allow option on the gluster volume,
> you can specify whether authenticated TLS client is permitted to access the
> volume based on the common name in the client's certificate. This provides
> an inflexible but reasonably strong form of authorization.
> >
> >
> 
> ---
> This email message is for the sole use of the intended recipient(s) and
> may contain
> confidential information.  Any unauthorized review, use, disclosure or
> distribution
> is prohibited.  If you are not the intended recipient, please contact the
> sender by
> reply email and destroy all copies of the original message.
> 
> ---
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 Thread Gandalf Corvotempesta
Krutika, it wasn't an attack directed at you.
It wasn't an attack at all.

Gluster is a "SCALE-OUT" software defined storage; the following is
written in the middle of the homepage:
"GlusterFS is a scalable network filesystem"

So, scaling a cluster is one of the primary goals of gluster.

A critical bug that prevents gluster from being scaled without losing
data was discovered 1 year ago, and took 1 year to be fixed.

If gluster isn't able to ensure data consistency when doing its
primary role, scaling up a storage, I'm sorry but it can't be
considered "enterprise" ready or production ready.
Maybe SOHO for small offices or home users, but in enterprises, data
consistency and reliability are the most important thing, and gluster
isn't able to guarantee this even when
doing a very basic routine procedure that should be considered as the
basis of the whole gluster project (as written on gluster's homepage)


2017-03-18 14:21 GMT+01:00 Krutika Dhananjay :
>
>
> On Sat, Mar 18, 2017 at 3:18 PM, Gandalf Corvotempesta
>  wrote:
>>
>> 2017-03-18 2:09 GMT+01:00 Lindsay Mathieson :
>> > Concerning, this was supposed to be fixed in 3.8.10
>>
>> Exactly. https://bugzilla.redhat.com/show_bug.cgi?id=1387878
>> Now let's see how much time they require to fix another CRITICAL bug.
>>
>> I'm really curious.
>
>
> Hey Gandalf!
>
> Let's see. There have been plenty of occasions where I've sat and worked on
> users' issues on weekends.
> And then again, I've got a life too outside of work (or at least I'm
> supposed to), you know.
> (And hey you know what! Today is Saturday and I'm sitting here and
> responding to your mail and collecting information
> on Mahdi's issue. Nobody asked me to look into it. I checked the mail and I
> had a choice to ignore it and not look into it until Monday.)
>
> Is there a genuine problem Mahdi is facing? Without a doubt!
>
> Got a constructive feedback to give? Please do.
> Do you want to give back to the community and help improve GlusterFS? There
> are plenty of ways to do that.
> One of them is testing out the releases and providing feedback. Sharding
> wouldn't have worked today, if not for Lindsay's timely
> and regular feedback in several 3.7.x releases.
>
> But this kind of criticism doesn't help.
>
> Also, spending time on users' issues is only one of the many
> responsibilities we have as developers.
> So what you see on mailing lists is just the tip of the iceberg.
>
> I have personally tried several times to recreate the add-brick bug on 3
> machines I borrowed from Kaleb. I haven't had success in recreating it.
> Reproducing VM-related bugs, in my experience, wasn't easy. I don't use
> Proxmox. Lindsay and Kevin did. There are a myriad qemu options used when
> launching vms. Different VM management projects (ovirt/Proxmox) use
> different defaults for these options. There are too many variables to be
> considered
> when debugging or trying to simulate the users' test.
>
> It's why I asked for Mahdi's help before 3.8.10 was out for feedback on the
> fix:
> http://lists.gluster.org/pipermail/gluster-users/2017-February/030112.html
>
> Alright. That's all I had to say.
>
> Happy weekend to you!
>
> -Krutika
>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-18 Thread Deepak Naidu
Thanks, Joseph, for the info.

>>In addition, gluster uses MTLS (each endpoint validates the other's
>>chain-of-trust), so you get authentication as well.

Does it only do authentication of mounts? I am not interested at this moment in
I/O path encryption, only in authentication.

>>you can set the auth.allow and auth.reject options to whitelist and blacklist 
>>clients based on their source IPs.

I have used this, but unfortunately I don't see IP-based / host-based ACLs as a
mature method, unless GlusterFS supports Kerberos completely. The reason I am
looking for key- or secret-based secured mounts is that an entire subnet will be
granted to the service, and the more elegant way to allow only the clients on
that subnet to do a gluster mount would be for them to use keys/secrets, since a
client might get a different IP on its next cycle/reboot. I can find a workaround
related to IP, but it seems really weird that gluster only uses SSL to encrypt
I/O traffic and does not use the same for authenticating mounts.



--
Deepak

> On Mar 18, 2017, at 9:14 AM, Joseph Lorenzini  wrote:
> 
> 
> Hi Deepak,
> 
> Here's the TLDR
> 
> If you don't want the I/O path to be encrypted but you want to control access 
> to a gluster volume, you can set the auth.allow and auth.reject options to 
> whitelist and blacklist clients based on their source IPs. There's also 
> always iptables rules if you don't want to do that.
> 
> Note this only addresses a client's (i.e., a system where multiple unix users can
> exist) ability to mount a gluster volume. It does not address access by different
> unix users on that mounted gluster volume -- that's a separate and much more 
> complicated issue. I can elaborate on that more if you want. 
> 
> Here's the longer explanation on the TLS piece. 
> 
> So there are a couple different security layers here. TLS will in fact 
> encrypt the I/O path -- that's one of its key selling points which is to 
> ensure confidentiality of the data sent between the gluster server and 
> gluster client. In addition, gluster uses MTLS (each endpoint validate's the 
> other's chain-of-trust), so you get authentication as well. Additionally, if 
> you set the auth.ssl-allow option on the gluster volume, you can specify 
> whether authenticated TLS client is permitted to access the volume based on 
> the common name in the client's certificate. This provides an inflexible but 
> reasonably strong form of authorization.
> 
> 
---
This email message is for the sole use of the intended recipient(s) and may 
contain
confidential information.  Any unauthorized review, use, disclosure or 
distribution
is prohibited.  If you are not the intended recipient, please contact the 
sender by
reply email and destroy all copies of the original message.
---
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users


Re: [Gluster-users] Secured mount in GlusterFS using keys

2017-03-18 Thread Joseph Lorenzini
Hi Deepak,

Here's the TLDR

If you don't want the I/O path to be encrypted but you want to control
access to a gluster volume, you can set the auth.allow and auth.reject
options to whitelist and blacklist clients based on their source IPs.
There's also always iptables rules if you don't want to do that.
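
A minimal sketch of those two options (the volume name and addresses are made
up; both options accept comma-separated addresses and wildcards):

gluster volume set myvol auth.allow '10.10.1.*'      # only this subnet may mount
gluster volume set myvol auth.reject '10.10.1.250'   # except this one host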

Note this only addresses a client's (i.e., a system where multiple unix users
can exist) ability to mount a gluster volume. It does not address access by
different unix users on that mounted gluster volume -- that's a separate and
much more complicated issue. I can elaborate on that more if you want.

Here's the longer explanation on the TLS piece.

So there are a couple different security layers here. TLS will in fact
encrypt the I/O path -- that's one of its key selling points which is to
ensure confidentiality of the data sent between the gluster server and
gluster client. In addition, gluster uses MTLS (each endpoint validates
the other's chain-of-trust), so you get authentication as well.
Additionally, if you set the auth.ssl-allow option on the gluster volume,
you can specify whether an authenticated TLS client is permitted to access the
volume based on the common name in the client's certificate. This provides
an inflexible but reasonably strong form of authorization.
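
A minimal sketch of turning that on (the volume name and common names are made
up; the certificate/key/CA paths are the gluster defaults):

# on every server and client: /etc/ssl/glusterfs.pem, /etc/ssl/glusterfs.key, /etc/ssl/glusterfs.ca
gluster volume set myvol client.ssl on
gluster volume set myvol server.ssl on
gluster volume set myvol auth.ssl-allow 'client1.example.com,client2.example.com'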
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Why nodeid==1 need to be checked and dealt with specially in "fuse-bridge.c"?

2017-03-18 Thread Amar Tumballi
On Wed, Mar 15, 2017 at 9:17 PM, Zhitao Li  wrote:

> Hello, everyone,
>
>
> I have been trying to optimize *"ls" performance* for GlusterFS recently. My
> volume is disperse (48 bricks with redundancy 16), and I mount it with
> fuse. I create 1 little files in the mount point. Then I execute the "ls"
> command. In my cluster, it takes about 3 seconds.
>
> I have a question about the *fuse_getattr* function in "fuse-bridge.c". *Why
> do we need to check whether nodeid is equal to 1?* (which means it is the
> mount point). It is hard for me to understand its meaning.
>
> (In my case, I find the fuse_getattr operation takes nearly half the time of
> "ls"; that is why I want to know what the check means.)
>
>
>
>
>
>
> I tried disabling the special check and then tested "ls". It works normally
> and gives a 2x speedup (about 1.3s without the check). The reason is that in my
> case, the "lookup" cost is much higher than "stat". Without the special check,
> getattr goes into "stat" instead of "lookup".
>
>
> Could you tell me the meaning of the special check for "nodeid == 1"?
>
glusterfs passes 'nodeid' as the pointer to the inode_t of an entry in all
cases. But in the case of root (which is inode number 1, and nodeid 1), we
can't pass the inode pointer value and need to override that part of the code
to send 1 instead of the pointer. Hence the separate fuse_root_lookup_cbk()
function.

Regards,
Amar



> I would appreciate it if anyone could give some tips . Thank you!
>
> Best regards,
> Zhitao Li
>
> Sent from Outlook 
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>



-- 
Amar Tumballi (amarts)
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 Thread Krutika Dhananjay
mnt-disk11-vmware2.log seems like a brick log. Could you attach the fuse
mount logs? It should be right under /var/log/glusterfs/ directory
named after the mount point name, only hyphenated.

-Krutika

On Sat, Mar 18, 2017 at 7:27 PM, Mahdi Adnan 
wrote:

> Hello Krutika,
>
>
> Kindly, check the attached logs.
>
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> --
> *From:* Krutika Dhananjay 
> *Sent:* Saturday, March 18, 2017 3:29:03 PM
> *To:* Mahdi Adnan
> *Cc:* gluster-users@gluster.org
> *Subject:* Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption
>
> Hi Mahdi,
>
> Could you attach mount, brick and rebalance logs?
>
> -Krutika
>
> On Sat, Mar 18, 2017 at 12:14 AM, Mahdi Adnan 
> wrote:
>
>> Hi,
>>
>> I upgraded to Gluster 3.8.10 today and ran the add-brick procedure
>> on a volume containing a few VMs.
>> After the rebalance completed, I rebooted the VMs; some of them ran
>> just fine, and others just crashed.
>> Windows boots to recovery mode, and Linux throws XFS errors and does not
>> boot.
>> I ran the test again and it happened just like the first time, but I
>> noticed that only VMs doing disk I/O are affected by this bug.
>> The VMs that were powered off started fine, and even the md5 of their disk
>> files did not change after the rebalance.
>>
>> Can anyone else confirm this?
>>
>>
>> Volume info:
>>
>> Volume Name: vmware2
>> Type: Distributed-Replicate
>> Volume ID: 02328d46-a285-4533-aa3a-fb9bfeb688bf
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 22 x 2 = 44
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster01:/mnt/disk1/vmware2
>> Brick2: gluster03:/mnt/disk1/vmware2
>> Brick3: gluster02:/mnt/disk1/vmware2
>> Brick4: gluster04:/mnt/disk1/vmware2
>> Brick5: gluster01:/mnt/disk2/vmware2
>> Brick6: gluster03:/mnt/disk2/vmware2
>> Brick7: gluster02:/mnt/disk2/vmware2
>> Brick8: gluster04:/mnt/disk2/vmware2
>> Brick9: gluster01:/mnt/disk3/vmware2
>> Brick10: gluster03:/mnt/disk3/vmware2
>> Brick11: gluster02:/mnt/disk3/vmware2
>> Brick12: gluster04:/mnt/disk3/vmware2
>> Brick13: gluster01:/mnt/disk4/vmware2
>> Brick14: gluster03:/mnt/disk4/vmware2
>> Brick15: gluster02:/mnt/disk4/vmware2
>> Brick16: gluster04:/mnt/disk4/vmware2
>> Brick17: gluster01:/mnt/disk5/vmware2
>> Brick18: gluster03:/mnt/disk5/vmware2
>> Brick19: gluster02:/mnt/disk5/vmware2
>> Brick20: gluster04:/mnt/disk5/vmware2
>> Brick21: gluster01:/mnt/disk6/vmware2
>> Brick22: gluster03:/mnt/disk6/vmware2
>> Brick23: gluster02:/mnt/disk6/vmware2
>> Brick24: gluster04:/mnt/disk6/vmware2
>> Brick25: gluster01:/mnt/disk7/vmware2
>> Brick26: gluster03:/mnt/disk7/vmware2
>> Brick27: gluster02:/mnt/disk7/vmware2
>> Brick28: gluster04:/mnt/disk7/vmware2
>> Brick29: gluster01:/mnt/disk8/vmware2
>> Brick30: gluster03:/mnt/disk8/vmware2
>> Brick31: gluster02:/mnt/disk8/vmware2
>> Brick32: gluster04:/mnt/disk8/vmware2
>> Brick33: gluster01:/mnt/disk9/vmware2
>> Brick34: gluster03:/mnt/disk9/vmware2
>> Brick35: gluster02:/mnt/disk9/vmware2
>> Brick36: gluster04:/mnt/disk9/vmware2
>> Brick37: gluster01:/mnt/disk10/vmware2
>> Brick38: gluster03:/mnt/disk10/vmware2
>> Brick39: gluster02:/mnt/disk10/vmware2
>> Brick40: gluster04:/mnt/disk10/vmware2
>> Brick41: gluster01:/mnt/disk11/vmware2
>> Brick42: gluster03:/mnt/disk11/vmware2
>> Brick43: gluster02:/mnt/disk11/vmware2
>> Brick44: gluster04:/mnt/disk11/vmware2
>> Options Reconfigured:
>> cluster.server-quorum-type: server
>> nfs.disable: on
>> performance.readdir-ahead: on
>> transport.address-family: inet
>> performance.quick-read: off
>> performance.read-ahead: off
>> performance.io-cache: off
>> performance.stat-prefetch: off
>> cluster.eager-lock: enable
>> network.remote-dio: enable
>> features.shard: on
>> cluster.data-self-heal-algorithm: full
>> features.cache-invalidation: on
>> ganesha.enable: on
>> features.shard-block-size: 256MB
>> client.event-threads: 2
>> server.event-threads: 2
>> cluster.favorite-child-policy: size
>> storage.build-pgfid: off
>> network.ping-timeout: 5
>> cluster.enable-shared-storage: enable
>> nfs-ganesha: enable
>> cluster.server-quorum-ratio: 51%
>>
>>
>> Adding bricks:
>> gluster volume add-brick vmware2 replica 2 gluster01:/mnt/disk11/vmware2
>> gluster03:/mnt/disk11/vmware2 gluster02:/mnt/disk11/vmware2
>> gluster04:/mnt/disk11/vmware2
>>
>>
>> starting fix layout:
>> gluster volume rebalance vmware2 fix-layout start
>>
>> Starting rebalance:
>> gluster volume rebalance vmware2  start
>>
>>
>>
>> --
>>
>> Respectfully
>> *Mahdi A. Mahdi*
>>
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 Thread Krutika Dhananjay
On Sat, Mar 18, 2017 at 3:18 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> 2017-03-18 2:09 GMT+01:00 Lindsay Mathieson :
> > Concerning, this was supposed to be fixed in 3.8.10
>
> Exactly. https://bugzilla.redhat.com/show_bug.cgi?id=1387878
> Now let's see how much time they require to fix another CRITICAL bug.
>
> I'm really curious.
>

Hey Gandalf!

Let's see. There have been plenty of occasions where I've sat and worked on
users' issues on weekends.
And then again, I've got a life too outside of work (or at least I'm
supposed to), you know.
(And hey you know what! Today is Saturday and I'm sitting here and
responding to your mail and collecting information
on Mahdi's issue. Nobody asked me to look into it. I checked the mail and I
had a choice to ignore it and not look into it until Monday.)

Is there a genuine problem Mahdi is facing? Without a doubt!

Got a constructive feedback to give? Please do.
Do you want to give back to the community and help improve GlusterFS? There
are plenty of ways to do that.
One of them is testing out the releases and providing feedback. Sharding
wouldn't have worked today, if not for Lindsay's timely
and regular feedback in several 3.7.x releases.

But this kind of criticism doesn't help.

Also, spending time on users' issues is only one of the many
responsibilities we have as developers.
So what you see on mailing lists is just the tip of the iceberg.

I have personally tried several times to recreate the add-brick bug on 3
machines I borrowed from Kaleb. I haven't had success in recreating it.
Reproducing VM-related bugs, in my experience, wasn't easy. I don't use
Proxmox. Lindsay and Kevin did. There are a myriad qemu options used when
launching vms. Different VM management projects (ovirt/Proxmox) use
different defaults for these options. There are too many variables to be
considered
when debugging or trying to simulate the users' test.

It's why I asked for Mahdi's help before 3.8.10 was out for feedback on the
fix:
http://lists.gluster.org/pipermail/gluster-users/2017-February/030112.html

Alright. That's all I had to say.

Happy weekend to you!

-Krutika

___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 Thread Krutika Dhananjay
Hi Mahdi,

Could you attach mount, brick and rebalance logs?

-Krutika

On Sat, Mar 18, 2017 at 12:14 AM, Mahdi Adnan 
wrote:

> Hi,
>
> I upgraded to Gluster 3.8.10 today and ran the add-brick procedure on
> a volume containing a few VMs.
> After the rebalance completed, I rebooted the VMs; some of them ran
> just fine, and others just crashed.
> Windows boots to recovery mode, and Linux throws XFS errors and does not boot.
> I ran the test again and it happened just like the first time, but I
> noticed that only VMs doing disk I/O are affected by this bug.
> The VMs that were powered off started fine, and even the md5 of their disk
> files did not change after the rebalance.
>
> Can anyone else confirm this?
>
>
> Volume info:
>
> Volume Name: vmware2
> Type: Distributed-Replicate
> Volume ID: 02328d46-a285-4533-aa3a-fb9bfeb688bf
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 22 x 2 = 44
> Transport-type: tcp
> Bricks:
> Brick1: gluster01:/mnt/disk1/vmware2
> Brick2: gluster03:/mnt/disk1/vmware2
> Brick3: gluster02:/mnt/disk1/vmware2
> Brick4: gluster04:/mnt/disk1/vmware2
> Brick5: gluster01:/mnt/disk2/vmware2
> Brick6: gluster03:/mnt/disk2/vmware2
> Brick7: gluster02:/mnt/disk2/vmware2
> Brick8: gluster04:/mnt/disk2/vmware2
> Brick9: gluster01:/mnt/disk3/vmware2
> Brick10: gluster03:/mnt/disk3/vmware2
> Brick11: gluster02:/mnt/disk3/vmware2
> Brick12: gluster04:/mnt/disk3/vmware2
> Brick13: gluster01:/mnt/disk4/vmware2
> Brick14: gluster03:/mnt/disk4/vmware2
> Brick15: gluster02:/mnt/disk4/vmware2
> Brick16: gluster04:/mnt/disk4/vmware2
> Brick17: gluster01:/mnt/disk5/vmware2
> Brick18: gluster03:/mnt/disk5/vmware2
> Brick19: gluster02:/mnt/disk5/vmware2
> Brick20: gluster04:/mnt/disk5/vmware2
> Brick21: gluster01:/mnt/disk6/vmware2
> Brick22: gluster03:/mnt/disk6/vmware2
> Brick23: gluster02:/mnt/disk6/vmware2
> Brick24: gluster04:/mnt/disk6/vmware2
> Brick25: gluster01:/mnt/disk7/vmware2
> Brick26: gluster03:/mnt/disk7/vmware2
> Brick27: gluster02:/mnt/disk7/vmware2
> Brick28: gluster04:/mnt/disk7/vmware2
> Brick29: gluster01:/mnt/disk8/vmware2
> Brick30: gluster03:/mnt/disk8/vmware2
> Brick31: gluster02:/mnt/disk8/vmware2
> Brick32: gluster04:/mnt/disk8/vmware2
> Brick33: gluster01:/mnt/disk9/vmware2
> Brick34: gluster03:/mnt/disk9/vmware2
> Brick35: gluster02:/mnt/disk9/vmware2
> Brick36: gluster04:/mnt/disk9/vmware2
> Brick37: gluster01:/mnt/disk10/vmware2
> Brick38: gluster03:/mnt/disk10/vmware2
> Brick39: gluster02:/mnt/disk10/vmware2
> Brick40: gluster04:/mnt/disk10/vmware2
> Brick41: gluster01:/mnt/disk11/vmware2
> Brick42: gluster03:/mnt/disk11/vmware2
> Brick43: gluster02:/mnt/disk11/vmware2
> Brick44: gluster04:/mnt/disk11/vmware2
> Options Reconfigured:
> cluster.server-quorum-type: server
> nfs.disable: on
> performance.readdir-ahead: on
> transport.address-family: inet
> performance.quick-read: off
> performance.read-ahead: off
> performance.io-cache: off
> performance.stat-prefetch: off
> cluster.eager-lock: enable
> network.remote-dio: enable
> features.shard: on
> cluster.data-self-heal-algorithm: full
> features.cache-invalidation: on
> ganesha.enable: on
> features.shard-block-size: 256MB
> client.event-threads: 2
> server.event-threads: 2
> cluster.favorite-child-policy: size
> storage.build-pgfid: off
> network.ping-timeout: 5
> cluster.enable-shared-storage: enable
> nfs-ganesha: enable
> cluster.server-quorum-ratio: 51%
>
>
> Adding bricks:
> gluster volume add-brick vmware2 replica 2 gluster01:/mnt/disk11/vmware2
> gluster03:/mnt/disk11/vmware2 gluster02:/mnt/disk11/vmware2
> gluster04:/mnt/disk11/vmware2
>
>
> starting fix layout:
> gluster volume rebalance vmware2 fix-layout start
>
> Starting rebalance:
> gluster volume rebalance vmware2  start
>
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
>
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 Thread Mahdi Adnan
Although I tested the patch before it got released, apparently it wasn't a
thorough test.

In Gluster 3.7.x I lost around 100 VMs; now in 3.8.x I just lost a few test VMs.

I hope there will be a fix soon.


--

Respectfully
Mahdi A. Mahdi


From: gluster-users-boun...@gluster.org  on 
behalf of Gandalf Corvotempesta 
Sent: Saturday, March 18, 2017 12:48:33 PM
To: Lindsay Mathieson
Cc: gluster-users
Subject: Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 2:09 GMT+01:00 Lindsay Mathieson :
> Concerning, this was supposed to be fixed in 3.8.10

Exactly. https://bugzilla.redhat.com/show_bug.cgi?id=1387878
Now let's see how much time they require to fix another CRITICAL bug.

I'm really curious.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Gluster 3.8.10 rebalance VMs corruption

2017-03-18 Thread Gandalf Corvotempesta
2017-03-18 2:09 GMT+01:00 Lindsay Mathieson :
> Concerning, this was supposed to be fixed in 3.8.10

Exactly. https://bugzilla.redhat.com/show_bug.cgi?id=1387878
Now let's see how much time they require to fix another CRITICAL bug.

I'm really curious.
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users