> Anyway, how is it possible to keep a VM up and running while healing is happening
> on a shard? That part of the disk image is not accessible, and thus the VM
> could have some issues with its filesystem.
Yeah, but healing a few-MB shard takes a few seconds, so the VM is frozen for a
very small amount of time.
Well, it's not magic: there is an algorithm that is documented, and it is
trivial to script the recreation of the file from the shards if gluster were
truly unavailable:
>
>
> #!/bin/bash
> #
> # quick and dirty reconstruct file from shards
> # takes brick path and file name as arguments
> # Copyright Ma
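For illustration only, here is a minimal sketch of such a reconstruction (the
script quoted above is cut off here, so this is not it). It assumes the usual
shard layout: block 0 sits at the file's own path on the brick, the remaining
blocks live under <brick>/.shard/<GFID>.<n>, and the GFID comes from the
trusted.gfid xattr. Paths and variable names are illustrative.

#!/bin/bash
# Hedged sketch: reassemble a sharded file from a single brick.
BRICK="$1"   # brick root, e.g. /data/brick1 (illustrative)
FILE="$2"    # file path relative to the brick root
OUT="$3"     # where to write the reassembled file

# trusted.gfid holds the file's GFID as hex; convert it to UUID form.
HEX=$(getfattr -n trusted.gfid -e hex --absolute-names "$BRICK/$FILE" \
      | awk -F= '/trusted.gfid/ {print substr($2, 3)}')
GFID=$(echo "$HEX" | sed -E 's/(.{8})(.{4})(.{4})(.{4})(.{12})/\1-\2-\3-\4-\5/')

cat "$BRICK/$FILE" > "$OUT"                 # block 0
n=1
while [ -e "$BRICK/.shard/$GFID.$n" ]; do   # blocks 1..N, if present
    cat "$BRICK/.shard/$GFID.$n" >> "$OUT"
    n=$((n + 1))
done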
On 20 May 2016 20:14, "Alastair Neil" wrote:
>
> I think you are confused about what sharding does. In a sharded replica
> 3 volume all the shards exist on all the replicas so there is no
> distribution. Might you be getting confused with erasure coding? The
> upshot of sharding is that if you
I think you are confused about what sharding does. In a sharded replica 3
volume all the shards exist on all the replicas so there is no
distribution. Might you be getting confused with erasure coding? The
upshot of sharding is that if you have a failure, instead of healing
multiple gigabyte vm
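For context, sharding is a per-volume option; a hedged example of how it is
typically switched on (the volume name "vmstore" is made up):

gluster volume set vmstore features.shard on
gluster volume set vmstore features.shard-block-size 64MB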
Hi all,
I'm wondering if it is ok to use a volume during rebalance (Gluster
3.6.9). Currently I'm preparing to expand a distributed replicated
volume by 2 nodes (replica is 2).
I'm currently testing with a virtual machine setup. If I do a heavy copy
operation during rebalance after the 2 nodes are
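For reference, a hedged sketch of the expansion and rebalance being described
(host names, paths and the volume name are made up); for a replica-2 volume the
new bricks are added as a pair and the rebalance is then started and watched:

gluster volume add-brick myvol node3:/data/brick node4:/data/brick
gluster volume rebalance myvol start
gluster volume rebalance myvol status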
This is a bug which will be fixed in 3.7.12. You can try to set the log
level to WARNING to get rid of it.
On Fri, May 20, 2016 at 7:18 PM, Ernie Dunbar wrote:
> We had one of our gluster servers in the cluster fail on us yesterday, and
> now one (and only one) of the other servers in the cluster has
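A hedged example of the log-level workaround suggested above, using the
standard diagnostics options (the volume name is illustrative):

gluster volume set myvol diagnostics.brick-log-level WARNING
gluster volume set myvol diagnostics.client-log-level WARNING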
Hi,
We've got a volume that has been stuck in "Possibly undergoing heal"
status for several months. I would like to upgrade gluster to a newer
version but I would feel safer if we could get the volume fixed first.
Some info:
# gluster volume info voicemail
Volume Name: voicemail
Type: Replicate
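Before an upgrade, the usual first step is to see what actually remains to be
healed; a hedged example against the volume above:

gluster volume heal voicemail info
# if entries keep showing up, a full heal can be triggered:
gluster volume heal voicemail full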
We had one of our gluster servers in the cluster fail on us yesterday,
and now one (and only one) of the other servers in the cluster has
managed to collect about 7 gigabytes of logs in the past 12 hours,
seemingly only with lines like this:
[2016-05-20 16:08:05.119529] I [dict.c:473:dict_get]
-Atin
Sent from one plus one
On 20-May-2016 5:34 PM, "ABHISHEK PALIWAL" wrote:
>
> Actually, we have some other files related to the system's initial configuration;
> for those we need to format the volume where these bricks are also created,
> and after this we are facing some abnormal behavior in gluster
Hi,
Did anyone get a chance to check this? We are intermittently receiving
corrupted data in read operations because of this.
Thanks and Regards,
Ram
From: gluster-users-boun...@gluster.org
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Ankireddypalle Reddy
Sent: Thursday, May
On 05/19/2016 10:25 PM, Pranith Kumar Karampuri wrote:
Once every 3 months i.e. option 3 sounds good to me.
+1 from my end.
Every 2 months seems to be a bit too much; 4 months is still fine, but that
gives us 1 in 3 to pick the LTS. I like 1:4 odds better for the LTS,
hence the 3 months (or 'alte
Actually, we have some other files related to the system's initial configuration;
for those we need to format the volume where these bricks are also created,
and after this we are facing some abnormal behavior in gluster and some failure
logs like a volume ID mismatch or something.
That is why I am asking this i
And most importantly, why would you do that? What's your use case, Abhishek?
On 05/20/2016 05:03 PM, Lindsay Mathieson wrote:
> On 20/05/2016 8:37 PM, ABHISHEK PALIWAL wrote:
>> I am not getting any failure and after restart the glusterd when I run
>> volume info command it creates the brick directo
On 20/05/2016 8:37 PM, ABHISHEK PALIWAL wrote:
I am not getting any failure, and after restarting glusterd, when I run the
volume info command it creates the brick directory
as well as .glusterfs (xattrs),
but sometimes even after restarting glusterd, the volume info command
shows no volume present.
I am not getting any failure, and after restarting glusterd, when I run the
volume info command it creates the brick directory
as well as .glusterfs (xattrs),
but sometimes even after restarting glusterd, the volume info command shows
no volume present.
Could you please tell me why this unpredictable p
This would erase the xattr set on the brick root (volume-id), which
identifies it as a brick. Brick processes will fail to start when this
xattr isn't present.
On Fri, May 20, 2016 at 3:42 PM, ABHISHEK PALIWAL
wrote:
> Hi
>
> What will happen if we format the volume where the bricks of replicate
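A hedged illustration of the volume-id xattr mentioned above (the brick path is
made up): it can be inspected on a healthy brick and, if a reformat wiped it,
set back on the recreated brick directory using the volume's UUID.

# inspect the xattr that marks a directory as a brick
getfattr -n trusted.glusterfs.volume-id -e hex /data/brick1
# restore it after a reformat (UUID taken from a surviving brick or from
# /var/lib/glusterd/vols/<volname>/info)
setfattr -n trusted.glusterfs.volume-id -v 0x<volume-uuid-hex> /data/brick1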
Hi
What will happen if we format the volume where the bricks of a replicated
gluster volume are created and then restart glusterd on both nodes?
Will it work fine, or in this case do we need to remove the /var/lib/glusterd
directory as well?
--
Regards
Abhishek Paliwal
Hello, Kotresh
I ran 'create force', but still some nodes work and some nodes are faulty.
On the faulty nodes,
etc-glusterfs-glusterd.vol.log shows:
[2016-05-20 06:27:03.260870] I
[glusterd-geo-rep.c:3516:glusterd_read_status_file] 0-: Using passed config
template(/var/lib/glusterd/geo-replication/filews_glus
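For reference, a hedged sketch of the geo-replication commands involved (the
master volume name "filews" is taken from the log path above; the slave host
and slave volume names are illustrative):

gluster volume geo-replication filews slavehost::filews_slave create push-pem force
gluster volume geo-replication filews slavehost::filews_slave status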