I thought self-healing happens only after we run something like "ls -alR" or "find
.." on the mount. It sounds like self-healing is supposed to kick in automatically
when a dead node is brought back up. Is that true?
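
(For reference, the crawl I had in mind is the one usually suggested, run from a
client mount point; /mnt/gluster below is just a placeholder path:

  ls -alR /mnt/gluster >/dev/null
  # or the stat-based variant quoted later in this thread:
  find /mnt/gluster -print0 | xargs --null stat >/dev/null
)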

On Tue, Mar 15, 2011 at 6:07 AM, Pranith Kumar. Karampuri
<prani...@gluster.com> wrote:
> hi R.C.,
>    Could you please give the exact steps to reproduce when you log the bug. Please also
> give the output of gluster peer status on both machines after the restart. Zip the files
> under /usr/local/var/log/glusterfs/ and /etc/glusterd on both machines when this issue
> happens. This should help us debug the issue.
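>
> For example, something like this on each machine should capture what we need (the
> log paths assume a source install under /usr/local, and the output file names are
> just suggestions):
>
>   gluster peer status > peer-status-$(hostname).txt
>   tar czf glusterd-debug-$(hostname).tar.gz /usr/local/var/log/glusterfs /etc/glusterd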
>
> Thanks
> Pranith.
>
> ----- Original Message -----
> From: "R.C." <milan...@gmail.com>
> To: gluster-users@gluster.org
> Sent: Tuesday, March 15, 2011 4:14:24 PM
> Subject: Re: [Gluster-users] Best practices after a peer failure?
>
> I've figured out the problem.
>
> If you mount the volume with the native GlusterFS client on one of the peers, and
> another peer crashes, it doesn't self-heal after that peer reboots.
>
> Should I put this issue in the bug tracker?
>
> Bye
>
> Raf
>
>
> ----- Original Message -----
> From: "R.C." <milan...@gmail.com>
> To: <gluster-users@gluster.org>
> Sent: Monday, March 14, 2011 11:41 PM
> Subject: Best practices after a peer failure?
>
>
>> Hello to the list.
>>
>> I'm experimenting with GlusterFS in various topologies using multiple
>> VirtualBox VMs.
>>
>> Like any system administrator, I'm mainly interested in disaster-recovery
>> scenarios. The first is a replica 2 configuration, with one peer crashing
>> (actually, the VM being stopped abruptly) while data is being written to the
>> volume.
>> After rebooting the stopped VM and relaunching the gluster daemon (service
>> glusterd start), the cluster doesn't start healing by itself.
>> I've also tried the suggested commands:
>> find <gluster-mount> -print0 | xargs --null stat >/dev/null
>> and
>> find <gluster-mount> -type f -exec dd if='{}' of=/dev/null bs=1M \; >
>> /dev/null 2>&1
>> without success.
>> A rebalance command recreates the replicas but, when accessing the cluster, the
>> always-alive client is the only one committing data to disk.
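>>
>> (The rebalance I ran was along these lines; "testvol" is just a placeholder for
>> the actual volume name:
>>
>>   gluster volume rebalance testvol start
>>   gluster volume rebalance testvol status
>> )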
>>
>> What am I doing wrong?
>>
>> Thank you for your support.
>>
>> Raf
>>
>
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
