Re: [Gluster-users] Questions about healing

2016-05-20 Thread Alastair Neil
Well it's not magic, there is an algorithm that is documented, and it is trivial to script the recreation of a file from the shards if Gluster were truly unavailable: > > > #!/bin/bash > # > # quick and dirty reconstruct file from shards > # takes brick path and file name as arguments > # Copyright
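The quoted script is cut off in the digest. A minimal sketch of such a reconstruction (hypothetical, not the original script from the thread) follows, assuming the usual shard layout: the base file on the brick holds the first shard-size bytes, and the remaining pieces live under `<brick>/.shard/<gfid>.N`. Here the GFID is passed in by the caller; on a real brick it could be read from the `trusted.gfid` xattr with getfattr.

```shell
#!/bin/bash
# Hypothetical sketch: reassemble a sharded file from one brick's copies.
# Assumes shard 0 is the base file itself and pieces N >= 1 sit under
# <brick>/.shard/<gfid>.N. GFID is supplied as an argument.
reconstruct() {
    local brick="$1" relpath="$2" gfid="$3" out="$4"
    cat "$brick/$relpath" > "$out"          # shard 0 is the file itself
    local n=1
    while [ -f "$brick/.shard/$gfid.$n" ]; do
        cat "$brick/.shard/$gfid.$n" >> "$out"
        n=$((n + 1))
    done
}
```

Usage would look like `reconstruct /bricks/brick1 vm/disk.img <gfid> /tmp/disk.img`; on a replica 3 volume any one healthy brick has a full copy of every shard.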

Re: [Gluster-users] Questions about healing

2016-05-20 Thread Gandalf Corvotempesta
On 20 May 2016 at 20:14, "Alastair Neil" wrote: > > I think you are confused about what sharding does. In a sharded replica 3 volume all the shards exist on all the replicas so there is no distribution. Might you be getting confused with erasure coding? The upshot of

Re: [Gluster-users] Questions about healing

2016-05-20 Thread Alastair Neil
I think you are confused about what sharding does. In a sharded replica 3 volume all the shards exist on all the replicas so there is no distribution. Might you be getting confused with erasure coding? The upshot of sharding is that if you have a failure, instead of healing multiple gigabyte

Re: [Gluster-users] Gluster logs filling the disk.

2016-05-20 Thread Serkan Çoban
This is a bug which will be fixed in 3.7.12. You can try to set the log level to WARNING to get rid of it. On Fri, May 20, 2016 at 7:18 PM, Ernie Dunbar wrote: > We had one of our gluster servers in the cluster fail on us yesterday, and > now one (and only one) of the other
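For reference, log verbosity can be lowered per volume with the diagnostics options, a CLI fragment along these lines (volume name is an example; check `gluster volume set help` on your version for the exact option names):

```shell
# Reduce brick-side and client-side log verbosity for a volume
gluster volume set myvol diagnostics.brick-log-level WARNING
gluster volume set myvol diagnostics.client-log-level WARNING
```

This suppresses the INFO-level lines flooding the log until the fixed release can be installed.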

[Gluster-users] CentOS 7.2 + Gluster 3.6.3 volume stuck in heal

2016-05-20 Thread Kingsley
Hi, We've got a volume that has been stuck in "Possibly undergoing heal" status for several months. I would like to upgrade gluster to a newer version but I would feel safer if we could get the volume fixed first. Some info: # gluster volume info voicemail Volume Name: voicemail Type:
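A first diagnostic step in a case like this (a sketch, assuming the commands available on 3.6.x; run on one of the server nodes) is to list exactly which entries the self-heal daemon still considers pending:

```shell
# Entries pending heal on the volume
gluster volume heal voicemail info
# Entries gluster believes are in split-brain
gluster volume heal voicemail info split-brain
```

If the same entries stay listed for months, comparing the AFR changelog xattrs on the brick copies is the usual next step before upgrading.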

[Gluster-users] Gluster logs filling the disk.

2016-05-20 Thread Ernie Dunbar
We had one of our gluster servers in the cluster fail on us yesterday, and now one (and only one) of the other servers in the cluster has managed to collect about 7 gigabytes of logs in the past 12 hours, seemingly only with lines like this: [2016-05-20 16:08:05.119529] I

Re: [Gluster-users] [Gluster-devel] Query!

2016-05-20 Thread Atin Mukherjee
-Atin Sent from one plus one On 20-May-2016 5:34 PM, "ABHISHEK PALIWAL" wrote: > > Actually we have some other files related to system initial configuration for that we > need to format the volume where these bricks are also created and after this we are > facing some

Re: [Gluster-users] [Gluster-devel] Idea: Alternate Release process

2016-05-20 Thread Shyam
On 05/19/2016 10:25 PM, Pranith Kumar Karampuri wrote: Once every 3 months i.e. option 3 sounds good to me. +1 from my end. Every 2 months seems to be a bit too much, 4 months is still fine, but gives us 1 in 3 to pick the LTS, I like 1:4 odds better for the LTS, hence the 3 months (or

Re: [Gluster-users] [Gluster-devel] Query!

2016-05-20 Thread ABHISHEK PALIWAL
Actually we have some other files related to the system's initial configuration; for that we need to format the volume where these bricks are also created, and after this we are facing some abnormal behavior in gluster and some failure logs, like a volume ID mismatch. That is why I am asking this

Re: [Gluster-users] [Gluster-devel] Query!

2016-05-20 Thread Atin Mukherjee
And most importantly why would you do that? What's your use case Abhishek? On 05/20/2016 05:03 PM, Lindsay Mathieson wrote: > On 20/05/2016 8:37 PM, ABHISHEK PALIWAL wrote: >> I am not getting any failure and after restart the glusterd when I run >> volume info command it creates the brick

Re: [Gluster-users] [Gluster-devel] Query!

2016-05-20 Thread Lindsay Mathieson
On 20/05/2016 8:37 PM, ABHISHEK PALIWAL wrote: I am not getting any failure and after restart the glusterd when I run volume info command it creates the brick directory as well as .glusterfs (xattrs). but sometimes even after restart the glusterd, volume info command showing no volume

Re: [Gluster-users] [Gluster-devel] Query!

2016-05-20 Thread ABHISHEK PALIWAL
I am not getting any failure, and after restarting glusterd, when I run the volume info command it creates the brick directory as well as .glusterfs (xattrs). But sometimes, even after restarting glusterd, the volume info command shows no volume present. Could you please tell me why this unpredictable

Re: [Gluster-users] [Gluster-devel] Query!

2016-05-20 Thread Kaushal M
This would erase the xattrs set on the brick root (volume-id), which identify it as a brick. Brick processes will fail to start when this xattr isn't present. On Fri, May 20, 2016 at 3:42 PM, ABHISHEK PALIWAL wrote: > Hi > > What will happen if we format the volume
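The xattr in question can be inspected directly on the brick root (a CLI fragment; the brick path is an example):

```shell
# Dump all trusted xattrs on a brick root; the volume-id marks it as a brick.
# If this xattr is missing (e.g. after reformatting the filesystem),
# the brick process will refuse to start.
getfattr -d -m . -e hex /bricks/brick1
# Look for a line like: trusted.glusterfs.volume-id=0x...
```

The value must match the volume's ID as recorded under /var/lib/glusterd, which is why reformatting a brick without restoring this xattr produces the volume-ID-mismatch errors mentioned elsewhere in the thread.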

[Gluster-users] Query!

2016-05-20 Thread ABHISHEK PALIWAL
Hi What will happen if we format the volume where the bricks of replicate gluster volume's are created and restart the glusterd on both node. It will work fine or in this case need to remove /var/lib/glusterd directory as well. -- Regards Abhishek Paliwal

[Gluster-users] Re: Re: Re: Re: geo-replication status partial faulty

2016-05-20 Thread vyyy杨雨阳
Hello, Kotresh I ran 'create force', but still some nodes work and some nodes are faulty. On the faulty nodes, etc-glusterfs-glusterd.vol.log shows: [2016-05-20 06:27:03.260870] I [glusterd-geo-rep.c:3516:glusterd_read_status_file] 0-: Using passed config