Re: [Gluster-users] Volume rebalance issue

2017-02-26 Thread Mahdi Adnan

I created a distributed-replicated volume, applied the "virt" group settings, 
enabled sharding, and migrated a few VMs to the volume. After that I added more 
bricks to the volume and started the rebalance; when I checked the VMs, they 
were corrupted.
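
For anyone trying to reproduce this, the sequence was roughly the following (a 
sketch only; the volume, server, and brick names here are placeholders, not the 
ones from our setup):

    # create a small distributed-replicated volume and tune it for VM hosting
    gluster volume create testvol replica 2 \
        srv1:/bricks/b1 srv2:/bricks/b1 srv1:/bricks/b2 srv2:/bricks/b2
    gluster volume set testvol group virt          # apply the "virt" option group
    gluster volume set testvol features.shard on   # enable sharding
    gluster volume start testvol

    # migrate a few VMs onto the volume, then expand and rebalance it
    gluster volume add-brick testvol srv1:/bricks/b3 srv2:/bricks/b3
    gluster volume rebalance testvol start
    gluster volume rebalance testvol status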

And yes, what you suggested about Gluster is on point; I think we need more bug 
fixes and performance enhancements.

I'm going to deploy a test Gluster cluster soon just to try patches and updates 
and report back bugs and issues.

--

Respectfully
Mahdi A. Mahdi


From: Gandalf Corvotempesta 
Sent: Sunday, February 26, 2017 11:07:04 AM
To: Mahdi Adnan
Cc: gluster-users@gluster.org
Subject: Re: [Gluster-users] Volume rebalance issue

How did you reproduce the issue?
Next week I'll spin up a Gluster storage cluster, and I would like to try the 
same steps to reproduce the corruption and to test any patches from Gluster.

On 25 Feb 2017 at 4:31 PM, "Mahdi Adnan" <mailto:mahdi.ad...@outlook.com> wrote:

Hi,


We have a volume of 4 servers with 8x2 bricks (Distributed-Replicate) hosting 
VMs for ESXi. I tried expanding the volume with 8 more bricks, and after 
rebalancing the volume, the VMs got corrupted.

The Gluster version is 3.8.9, and the volume is using the default parameters of 
the "virt" group plus sharding.

I created a new volume without sharding and got the same issue after the 
rebalance.

I checked the reported bugs and the mailing list, and I noticed it's a bug in 
Gluster.

Does it affect all Gluster versions? Is there any workaround, or a volume 
setup that is not affected by this issue?


Thank you.

--

Respectfully
Mahdi A. Mahdi



Re: [Gluster-users] Volume rebalance issue

2017-02-26 Thread Kevin Lemonnier
> We fixed this (thanks to Satheesaran for recreating the issue and to
> Raghavendra G and Pranith for the RCA) as recently as last week.
> The bug was in DHT-shard interaction.

Ah, that's great news!
I'll give the next releases a try for our next cluster then; thanks for
the info.

-- 
Kevin Lemonnier
PGP Fingerprint : 89A5 2283 04A0 E6E9 0111



Re: [Gluster-users] Volume rebalance issue

2017-02-26 Thread Mahdi Adnan
Hi,


Yes, I would love to try it out.

Steps to apply the patches would be highly appreciated.



--

Respectfully
Mahdi A. Mahdi


From: Krutika Dhananjay 
Sent: Sunday, February 26, 2017 5:37:11 PM
To: Mahdi Adnan
Cc: gluster-users@gluster.org; Kevin Lemonnier; Gandalf Corvotempesta; Lindsay 
Mathieson; David Gossage
Subject: Re: [Gluster-users] Volume rebalance issue

Hi,

We fixed this (thanks to Satheesaran for recreating the issue and to 
Raghavendra G and Pranith for the RCA) as recently as last week.
The bug was in DHT-shard interaction.

The patches are https://review.gluster.org/#/c/16709/ followed by 
https://review.gluster.org/#/c/14419, to be applied in that order.

Do you mind giving these a try before they make it into the next .x releases of 
3.8, 3.9, and 3.10?
I could make the src tarball with these patches applied if you like.

-Krutika
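
For reference, pulling changes like these from Gerrit and building from source 
usually looks something like this (a sketch; the trailing patchset number in 
each ref is a placeholder, and the exact refs should be taken from the 
"Download" links on the review pages):

    # build against the matching release branch
    git clone https://github.com/gluster/glusterfs.git && cd glusterfs
    git checkout release-3.8

    # fetch and cherry-pick each change, in the order given above
    # (Gerrit refs look like refs/changes/<last 2 digits>/<change>/<patchset>)
    git fetch https://review.gluster.org/glusterfs refs/changes/09/16709/1 && \
        git cherry-pick FETCH_HEAD
    git fetch https://review.gluster.org/glusterfs refs/changes/19/14419/1 && \
        git cherry-pick FETCH_HEAD

    # build and install
    ./autogen.sh && ./configure && make && sudo make install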

On Sat, Feb 25, 2017 at 8:56 PM, Mahdi Adnan <mailto:mahdi.ad...@outlook.com> 
wrote:

Hi,


We have a volume of 4 servers with 8x2 bricks (Distributed-Replicate) hosting 
VMs for ESXi. I tried expanding the volume with 8 more bricks, and after 
rebalancing the volume, the VMs got corrupted.

The Gluster version is 3.8.9, and the volume is using the default parameters of 
the "virt" group plus sharding.

I created a new volume without sharding and got the same issue after the 
rebalance.

I checked the reported bugs and the mailing list, and I noticed it's a bug in 
Gluster.

Does it affect all Gluster versions? Is there any workaround, or a volume 
setup that is not affected by this issue?


Thank you.

--

Respectfully
Mahdi A. Mahdi



[Gluster-users] managing lifecycle of a gluster volume in openshift

2017-02-26 Thread Joseph Lorenzini
Hi all,

I am happy to report that I finally got a container in an openshift pod to
mount a gluster volume successfully. The trouble had nothing to do with gluster,
which works fine, and everything to do with openshift's interfaces being less
than ideal. Note to self: turn off the settings in openshift that prevent
containers from running as root (what a silly restriction).

So now I need to tackle a much more complicated problem: how to handle the
lifecycle of a gluster volume in openshift.

Here are the things I am considering, and I'd be interested to see how
others have addressed this problem. Let's assume, for the purposes of this
conversation, that we have a single gluster cluster serving replicated
volumes with a replica count of 3 (nothing is distributed), and that the
cluster consists of three nodes. So each volume minimally has three bricks,
where each server holds only one of the bricks. The total disk space
available to gluster is 1 terabyte.
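
For concreteness, each such volume would be created along these lines (a 
sketch; the host and brick paths are hypothetical):

    # replica 3, one brick per server, nothing distributed
    gluster volume create app-vol replica 3 \
        node1:/bricks/sdb/app-vol node2:/bricks/sdb/app-vol node3:/bricks/sdb/app-vol
    gluster volume start app-vol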


   - do you use a single gluster volume for multiple pods, or one gluster
   volume for each pod? Until gluster supports mounting a subdirectory of a
   volume natively (can't wait for that feature!!), it seems like you'd want
   to go the route of one volume per pod, for reasons of multi-tenancy and
   security (see the sketch after this list).
   - if you do a gluster volume per pod, how do you handle the physical
   storage that backs the gluster cluster? For example, let's say each gluster
   server has three devices (/dev/sdb, /dev/sdc, /dev/sdd) that can be used by
   bricks. Would it be a good idea to create a volume for each openshift pod,
   where multiple brick processes write to the same device on the same disk
   for different volumes? Or would that have unacceptable performance
   implications? The reason I ask is that the gluster docs seem to recommend
   dedicating a physical device to a single volume only.
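
As a sketch of the volume-per-pod route mentioned in the first point, a 
statically provisioned PersistentVolume pointing at a dedicated gluster volume 
could look like this (the names are hypothetical, and glusterfs-cluster is 
assumed to be an Endpoints object listing the gluster servers; each pod would 
then bind to it through its own PersistentVolumeClaim):

    # register the dedicated gluster volume "pod1-vol" as a PV in openshift
    oc create -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pod1-gluster-pv
    spec:
      capacity:
        storage: 10Gi
      accessModes:
        - ReadWriteMany
      persistentVolumeReclaimPolicy: Retain
      glusterfs:
        endpoints: glusterfs-cluster   # Endpoints with the gluster server IPs
        path: pod1-vol                 # gluster volume dedicated to this pod
        readOnly: false
    EOF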


I did take a look at heketi, but I have a variety of concerns/questions about
it, which are probably more appropriate for whatever email list discusses
heketi.

https://github.com/screeley44/openshift-docs/blob/ce684e3c4c581db3b4aa27ecc1dba2ea65f51eda/install_config/storage_examples/external_gluster_dynamic_example.adoc

Thanks,
Joe

[Gluster-users] georeplication , backups and consistency

2017-02-26 Thread Gandalf Corvotempesta
Is it possible to use geo-replication as a backup?
Rsyncing a 100TB cluster every night is impossible; geo-rep could do the
same job by syncing around the clock, in almost real time, to a different host.

But what about data consistency? What if I'm running VMs stored on gluster?
In case of disaster, can I restore from the geo-rep copy, or would the data
be inconsistent?
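
For context, a geo-rep session of that kind is generally set up along these 
lines (a sketch; the host and volume names are placeholders, and it assumes 
passwordless SSH to the slave host is already in place):

    # one-time session setup from a node in the master cluster
    gluster volume geo-replication mastervol backuphost::slavevol create push-pem

    # start continuous, asynchronous syncing and check on it
    gluster volume geo-replication mastervol backuphost::slavevol start
    gluster volume geo-replication mastervol backuphost::slavevol status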

