[Gluster-users] self-heal daemon not running

2014-11-09 Thread Demeter Tibor

Hi, 

I have a two-node replicated volume under oVirt 3.5. 



My self-heal daemon is not running, and I have a lot of unhealed VMs on my 
GlusterFS volume. 



[root@node1 ~]# gluster volume heal g1sata info 
Brick node0.itsmart.cloud:/data/sata/brick/ 
 
Number of entries: 1 

Brick node1.itsmart.cloud:/data/sata/brick/ 
 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/6788e53a-750d-4566-8579-37f586a0f306/2f62334e-39dc-4ffa-9102-51289588c42b - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/12ff021d-4075-4032-979c-685520dc1895/4051ffec-3dd2-495d-989b-eefb9fe92221 - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/c9dbc63e-b9a2-43aa-b433-8c53ce824492/bb0efb35-5164-4b22-9bed-5daeacf97129 - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/388c14f5-5690-4eae-a7dc-76d782ad8acc/0059a2c2-f8b1-4979-8321-41422d9a469f - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/2cb7ee4b-5c43-45e7-b13e-18aa3df0ef66/c0cd0554-ac37-4feb-803c-d1207219e3a1 - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/1bb441b8-84a2-4d5b-bd29-f57b100bbce4/095230c2-0411-44cf-a085-3c929e4ca9b6 - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/e3751092-3f6a-4aa6-b569-2a2fb4ae294a/133b2d17-2a2a-4ec3-b26a-4fd685aa2b78 - Possibly undergoing heal 
/fbfc7c67-ae12-4779-a5f0-42d32a3f6248/images/1535497b-d6ca-40e3-84b0-85f55217cbc9/144ddc5c-be25-4d5e-91a4-a0864ea2a10e - Possibly undergoing heal 
Number of entries: 9 




Status of volume: g1sata 
Gluster process                                 Port    Online  Pid 
------------------------------------------------------------------------------
Brick 172.16.0.10:/data/sata/brick              49152   Y       27983 
Brick 172.16.0.11:/data/sata/brick              49152   Y       2581 
NFS Server on localhost                         2049    Y       14209 
Self-heal Daemon on localhost                   N/A     Y       14225 
NFS Server on 172.16.0.10                       2049    Y       27996 
Self-heal Daemon on 172.16.0.10                 N/A     Y       28004 

Task Status of Volume g1sata 
------------------------------------------------------------------------------
There are no active volume tasks 





[root@node1 ~]# rpm -qa|grep gluster 
glusterfs-libs-3.5.2-1.el6.x86_64 
glusterfs-cli-3.5.2-1.el6.x86_64 
glusterfs-rdma-3.5.2-1.el6.x86_64 
glusterfs-server-3.5.2-1.el6.x86_64 
glusterfs-3.5.2-1.el6.x86_64 
glusterfs-api-3.5.2-1.el6.x86_64 
glusterfs-fuse-3.5.2-1.el6.x86_64 
vdsm-gluster-4.16.7-1.gitdb83943.el6.noarch 




CentOS 6.5; the firewall is disabled and SELinux is in permissive mode. 







I did a service restart on each node, but that didn't help. 
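
Is something like this the right way to bring the daemons back? (A sketch of 
what I understand so far; as far as I know, "start ... force" only respawns 
the missing daemons and leaves the online bricks alone, but please correct me 
if that is wrong.) 

# restart the management daemon on each node (what I already did)
service glusterd restart

# respawn any missing NFS / self-heal daemons for the volume
gluster volume start g1sata force

# check the Self-heal Daemon rows again
gluster volume status g1sata

# once the daemon is up, trigger a full heal
gluster volume heal g1sata full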




I also have split-brain entries. 
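
This is how I have been listing them (a sketch; same volume as above): 

# entries gluster itself flags as split-brain
gluster volume heal g1sata info split-brain

# entries that failed to heal (if this option is still available in 3.5)
gluster volume heal g1sata info heal-failed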




Could someone help me? 

Thanks 




Tibor 



Re: [Gluster-users] Self-Heal Daemon not Running

2013-09-25 Thread Ravishankar N

On 09/25/2013 01:06 PM, Andrew Lau wrote:
On Wed, Sep 25, 2013 at 2:28 PM, Ravishankar N wrote:


On 09/25/2013 06:16 AM, Andrew Lau wrote:

That's where I found the 200+ entries

[ root@hv01 ]gluster volume heal STORAGE info split-brain
Gathering Heal info on volume STORAGE has been successful

Brick hv01:/data1
Number of entries: 271
at                              path on brick

2013-09-25 00:04:29 /6682d31f-39ce-4896-99ef-14e1c9682585/dom_md/ids
2013-09-25 00:04:29

/6682d31f-39ce-4896-99ef-14e1c9682585/images/5599c7c7-0c25-459a-9d7d-80190a7c739b/0593d351-2ab1-49cd-a9b6-c94c897ebcc7
2013-09-24 23:54:29 
2013-09-24 23:54:29 


Brick hv02:/data1
Number of entries: 0

When I run the same command on hv02, it will show the reverse
(the other node having 0 entries).

I remember last time having to delete these files individually on
another split-brain case, but I was hoping there was a better
solution than going through 200+ entries.


While I haven't tried it out myself, Jeff Darcy has written a
script
(https://github.com/jdarcy/glusterfs/tree/heal-script/extras/heal_script)
which helps in automating the process. He has detailed its usage
in his blog post
http://hekafs.org/index.php/2012/06/healing-split-brain/

Hope this helps.
-Ravi


That didn't end up working; it failed with ImportError: No module named volfilter


Oh, you need to download all 4 python scripts in the heal_script folder.
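
Something like this should fetch the whole folder in one go (just a sketch; it 
clones the heal-script branch linked above): 

git clone -b heal-script --depth 1 https://github.com/jdarcy/glusterfs.git
ls glusterfs/extras/heal_script/   # volfilter.py and the other helpers live here
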
But I didn't end up spending much time with it, as the number of 
entries magically reduced to 10. I removed the files and the 
split-brain info now reports 0 entries. Still wondering why there 
are different file sizes on the two bricks.




Cheers.


On Wed, Sep 25, 2013 at 10:39 AM, Mohit Anchlia <mohitanch...@gmail.com> wrote:

What's the output of
gluster volume heal $VOLUME info split-brain


On Tue, Sep 24, 2013 at 5:33 PM, Andrew Lau <and...@andrewklau.com> wrote:

Found the BZ
https://bugzilla.redhat.com/show_bug.cgi?id=960190 - so I
restarted one of the volumes and it seems to have
restarted all the daemons again.

Self heal started again, but I seem to have split-brain
issues everywhere. There's over 100 different entries on
each node, what's the best way to restore this now? Short
of having to manually go through and delete 200+ files.
It looks like a full split brain as the file sizes on the
different nodes are out of balance by about 100GB or so.

Any suggestions would be much appreciated!

Cheers.

On Tue, Sep 24, 2013 at 10:32 PM, Andrew Lau <and...@andrewklau.com> wrote:

Hi,

Right now, I have a 2x1 replica. Ever since I had to
reinstall one of the gluster servers, there's been
issues with split-brain. The self-heal daemon doesn't
seem to be running on either of the nodes.

To reinstall the gluster server (the original brick
data was intact but the OS had to be reinstalled)
- Reinstalled gluster
- Copied over the old uuid from backup
- gluster peer probe
- gluster volume sync $othernode all
- mount -t glusterfs localhost:STORAGE /mnt
- find /mnt -noleaf -print0 | xargs --null stat
>/dev/null 2>/var/log/glusterfs/mnt-selfheal.log

I let it resync and it was working fine, or at least so I
thought. I just came back a few days later to see
there's a mismatch in the brick volumes. One is
50GB ahead of the other.

# gluster volume heal STORAGE info
Status: self-heal-daemon is not running on
966456a1-b8a6-4ca8-9da7-d0eb96997cbe

/var/log/gluster/glustershd.log doesn't seem to have
any recent logs, only those from when the two
original gluster servers were running.

# gluster volume status

Self-heal Daemon on localhost    N/A    N    N/A

Any suggestions would be much appreciated!

Cheers
Andrew.






Re: [Gluster-users] Self-Heal Daemon not Running

2013-09-25 Thread Andrew Lau
On Wed, Sep 25, 2013 at 2:28 PM, Ravishankar N wrote:

>  On 09/25/2013 06:16 AM, Andrew Lau wrote:
>
>  That's where I found the 200+ entries
>
>  [ root@hv01 ]gluster volume heal STORAGE info split-brain
> Gathering Heal info on volume STORAGE has been successful
>
>  Brick hv01:/data1
> Number of entries: 271
> at                              path on brick
>
>   2013-09-25 00:04:29 /6682d31f-39ce-4896-99ef-14e1c9682585/dom_md/ids
> 2013-09-25 00:04:29
> /6682d31f-39ce-4896-99ef-14e1c9682585/images/5599c7c7-0c25-459a-9d7d-80190a7c739b/0593d351-2ab1-49cd-a9b6-c94c897ebcc7
>  2013-09-24 23:54:29 
> 2013-09-24 23:54:29 
> 
>
>   Brick hv02:/data1
> Number of entries: 0
>
>  When I run the same command on hv02, it will show the reverse (the other
> node having 0 entries).
>
>  I remember last time having to delete these files individually on
> another split-brain case, but I was hoping there was a better solution than
> going through 200+ entries.
>
>   While I haven't tried it out myself, Jeff Darcy has written a script (
> https://github.com/jdarcy/glusterfs/tree/heal-script/extras/heal_script)
> which helps in automating the process. He has detailed its usage in his
> blog post http://hekafs.org/index.php/2012/06/healing-split-brain/
>
> Hope this helps.
> -Ravi
>

That didn't end up working; it failed with ImportError: No module named volfilter

But I didn't end up spending much time with it, as the number of entries
magically reduced to 10. I removed the files and the split-brain info
now reports 0 entries. Still wondering why there are different file sizes on the
two bricks.

>
>   Cheers.
>
>
>  On Wed, Sep 25, 2013 at 10:39 AM, Mohit Anchlia wrote:
>
>> What's the output of
>>
>>  gluster volume heal $VOLUME info split-brain
>>
>>
>>   On Tue, Sep 24, 2013 at 5:33 PM, Andrew Lau wrote:
>>
>>>   Found the BZ https://bugzilla.redhat.com/show_bug.cgi?id=960190 - so
>>> I restarted one of the volumes and it seems to have restarted all the
>>> daemons again.
>>>
>>>  Self heal started again, but I seem to have split-brain issues
>>> everywhere. There's over 100 different entries on each node, what's the
>>> best way to restore this now? Short of having to manually go through and
>>> delete 200+ files. It looks like a full split brain as the file sizes on
>>> the different nodes are out of balance by about 100GB or so.
>>>
>>>  Any suggestions would be much appreciated!
>>>
>>>  Cheers.
>>>
>>> On Tue, Sep 24, 2013 at 10:32 PM, Andrew Lau wrote:
>>>
  Hi,

  Right now, I have a 2x1 replica. Ever since I had to reinstall one of
 the gluster servers, there's been issues with split-brain. The self-heal
 daemon doesn't seem to be running on either of the nodes.

  To reinstall the gluster server (the original brick data was intact
 but the OS had to be reinstalled)
  - Reinstalled gluster
 - Copied over the old uuid from backup
 - gluster peer probe
 - gluster volume sync $othernode all
 - mount -t glusterfs localhost:STORAGE /mnt
 - find /mnt -noleaf -print0 | xargs --null stat >/dev/null
 2>/var/log/glusterfs/mnt-selfheal.log

  I let it resync and it was working fine, or at least so I thought. I just
 came back a few days later to see there's a mismatch in the brick
 volumes. One is 50GB ahead of the other.

  # gluster volume heal STORAGE info
 Status: self-heal-daemon is not running on
 966456a1-b8a6-4ca8-9da7-d0eb96997cbe

  /var/log/gluster/glustershd.log doesn't seem to have any recent logs,
 only those from when the two original gluster servers were running.

  # gluster volume status

  Self-heal Daemon on localhost N/A N N/A

  Any suggestions would be much appreciated!

  Cheers
  Andrew.

>>>

Re: [Gluster-users] Self-Heal Daemon not Running

2013-09-24 Thread Ravishankar N

On 09/25/2013 06:16 AM, Andrew Lau wrote:

That's where I found the 200+ entries

[ root@hv01 ]gluster volume heal STORAGE info split-brain
Gathering Heal info on volume STORAGE has been successful

Brick hv01:/data1
Number of entries: 271
at                              path on brick

2013-09-25 00:04:29 /6682d31f-39ce-4896-99ef-14e1c9682585/dom_md/ids
2013-09-25 00:04:29 
/6682d31f-39ce-4896-99ef-14e1c9682585/images/5599c7c7-0c25-459a-9d7d-80190a7c739b/0593d351-2ab1-49cd-a9b6-c94c897ebcc7

2013-09-24 23:54:29 
2013-09-24 23:54:29 


Brick hv02:/data1
Number of entries: 0

When I run the same command on hv02, it will show the reverse (the 
other node having 0 entries).


I remember last time having to delete these files individually on 
another split-brain case, but I was hoping there was a better solution 
than going through 200+ entries.


While I haven't tried it out myself, Jeff Darcy has written a script 
(https://github.com/jdarcy/glusterfs/tree/heal-script/extras/heal_script) which 
helps in automating the process. He has detailed its usage in his blog 
post http://hekafs.org/index.php/2012/06/healing-split-brain/


Hope this helps.
-Ravi

Cheers.


On Wed, Sep 25, 2013 at 10:39 AM, Mohit Anchlia <mohitanch...@gmail.com> wrote:


What's the output of
gluster volume heal $VOLUME info split-brain


On Tue, Sep 24, 2013 at 5:33 PM, Andrew Lau <and...@andrewklau.com> wrote:

Found the BZ
https://bugzilla.redhat.com/show_bug.cgi?id=960190 - so I
restarted one of the volumes and it seems to have restarted
all the daemons again.

Self heal started again, but I seem to have split-brain issues
everywhere. There's over 100 different entries on each node,
what's the best way to restore this now? Short of having to
manually go through and delete 200+ files. It looks like a
full split brain as the file sizes on the different nodes are
out of balance by about 100GB or so.

Any suggestions would be much appreciated!

Cheers.

On Tue, Sep 24, 2013 at 10:32 PM, Andrew Lau <and...@andrewklau.com> wrote:

Hi,

Right now, I have a 2x1 replica. Ever since I had to
reinstall one of the gluster servers, there's been issues
with split-brain. The self-heal daemon doesn't seem to be
running on either of the nodes.

To reinstall the gluster server (the original brick data
was intact but the OS had to be reinstalled)
- Reinstalled gluster
- Copied over the old uuid from backup
- gluster peer probe
- gluster volume sync $othernode all
- mount -t glusterfs localhost:STORAGE /mnt
- find /mnt -noleaf -print0 | xargs --null stat >/dev/null
2>/var/log/glusterfs/mnt-selfheal.log

I let it resync and it was working fine, or at least so I
thought. I just came back a few days later to see there's
a mismatch in the brick volumes. One is 50GB ahead of
the other.

# gluster volume heal STORAGE info
Status: self-heal-daemon is not running on
966456a1-b8a6-4ca8-9da7-d0eb96997cbe

/var/log/gluster/glustershd.log doesn't seem to have any
recent logs, only those from when the two original gluster
servers were running.

# gluster volume status

Self-heal Daemon on localhost    N/A    N    N/A

Any suggestions would be much appreciated!

Cheers
Andrew.











Re: [Gluster-users] Self-Heal Daemon not Running

2013-09-24 Thread Andrew Lau
That's where I found the 200+ entries

[ root@hv01 ]gluster volume heal STORAGE info split-brain
Gathering Heal info on volume STORAGE has been successful

Brick hv01:/data1
Number of entries: 271
at                              path on brick

2013-09-25 00:04:29 /6682d31f-39ce-4896-99ef-14e1c9682585/dom_md/ids
2013-09-25 00:04:29
/6682d31f-39ce-4896-99ef-14e1c9682585/images/5599c7c7-0c25-459a-9d7d-80190a7c739b/0593d351-2ab1-49cd-a9b6-c94c897ebcc7
2013-09-24 23:54:29 
2013-09-24 23:54:29 


Brick hv02:/data1
Number of entries: 0

When I run the same command on hv02, it will show the reverse (the other
node having 0 entries).

I remember last time having to delete these files individually on another
split-brain case, but I was hoping there was a better solution than going
through 200+ entries.
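
For reference, the per-file procedure I used on that earlier case was roughly
the following (a sketch only, from memory; it removes the copy on the brick you
decide to discard, together with what I understand is its matching gfid hard
link under .glusterfs, then lets self-heal copy the good replica back; please
double-check before running it on anything important):

# run on the node whose copy of the file is to be discarded
BRICK=/data1
FILE=6682d31f-39ce-4896-99ef-14e1c9682585/dom_md/ids   # example entry from the list above

# read the file's gfid from the brick
GFID=$(getfattr -n trusted.gfid -e hex "$BRICK/$FILE" | awk -F'0x' '/trusted.gfid/{print $2}')

# remove the bad copy and its gfid hard link under .glusterfs
rm -f "$BRICK/$FILE"
rm -f "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/${GFID:0:8}-${GFID:8:4}-${GFID:12:4}-${GFID:16:4}-${GFID:20:12}"

# stat the file through a client mount so the good copy gets healed back
stat "/mnt/$FILE"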

Cheers.


On Wed, Sep 25, 2013 at 10:39 AM, Mohit Anchlia wrote:

> What's the output of
>
> gluster volume heal $VOLUME info split-brain
>
>
> On Tue, Sep 24, 2013 at 5:33 PM, Andrew Lau  wrote:
>
>> Found the BZ https://bugzilla.redhat.com/show_bug.cgi?id=960190 - so I
>> restarted one of the volumes and it seems to have restarted all the daemons
>> again.
>>
>> Self heal started again, but I seem to have split-brain issues
>> everywhere. There's over 100 different entries on each node, what's the
>> best way to restore this now? Short of having to manually go through and
>> delete 200+ files. It looks like a full split brain as the file sizes on
>> the different nodes are out of balance by about 100GB or so.
>>
>> Any suggestions would be much appreciated!
>>
>> Cheers.
>>
>> On Tue, Sep 24, 2013 at 10:32 PM, Andrew Lau wrote:
>>
>>> Hi,
>>>
>>> Right now, I have a 2x1 replica. Ever since I had to reinstall one of
>>> the gluster servers, there's been issues with split-brain. The self-heal
>>> daemon doesn't seem to be running on either of the nodes.
>>>
>>> To reinstall the gluster server (the original brick data was intact but
>>> the OS had to be reinstalled)
>>> - Reinstalled gluster
>>> - Copied over the old uuid from backup
>>> - gluster peer probe
>>> - gluster volume sync $othernode all
>>> - mount -t glusterfs localhost:STORAGE /mnt
>>> - find /mnt -noleaf -print0 | xargs --null stat >/dev/null
>>> 2>/var/log/glusterfs/mnt-selfheal.log
>>>
>>> I let it resync and it was working fine, or at least so I thought. I just
>>> came back a few days later to see there's a mismatch in the brick
>>> volumes. One is 50GB ahead of the other.
>>>
>>> # gluster volume heal STORAGE info
>>> Status: self-heal-daemon is not running on
>>> 966456a1-b8a6-4ca8-9da7-d0eb96997cbe
>>>
>>> /var/log/gluster/glustershd.log doesn't seem to have any recent logs,
>>> only those from when the two original gluster servers were running.
>>>
>>> # gluster volume status
>>>
>>> Self-heal Daemon on localhost N/A N N/A
>>>
>>> Any suggestions would be much appreciated!
>>>
>>> Cheers
>>> Andrew.
>>>
>>
>>
>>
>
>

Re: [Gluster-users] Self-Heal Daemon not Running

2013-09-24 Thread Mohit Anchlia
What's the output of

gluster volume heal $VOLUME info split-brain


On Tue, Sep 24, 2013 at 5:33 PM, Andrew Lau  wrote:

> Found the BZ https://bugzilla.redhat.com/show_bug.cgi?id=960190 - so I
> restarted one of the volumes and it seems to have restarted all the daemons
> again.
>
> Self heal started again, but I seem to have split-brain issues everywhere.
> There's over 100 different entries on each node, what's the best way to
> restore this now? Short of having to manually go through and delete 200+
> files. It looks like a full split brain as the file sizes on the different
> nodes are out of balance by about 100GB or so.
>
> Any suggestions would be much appreciated!
>
> Cheers.
>
> On Tue, Sep 24, 2013 at 10:32 PM, Andrew Lau wrote:
>
>> Hi,
>>
>> Right now, I have a 2x1 replica. Ever since I had to reinstall one of the
>> gluster servers, there's been issues with split-brain. The self-heal daemon
>> doesn't seem to be running on either of the nodes.
>>
>> To reinstall the gluster server (the original brick data was intact but
>> the OS had to be reinstalled)
>> - Reinstalled gluster
>> - Copied over the old uuid from backup
>> - gluster peer probe
>> - gluster volume sync $othernode all
>> - mount -t glusterfs localhost:STORAGE /mnt
>> - find /mnt -noleaf -print0 | xargs --null stat >/dev/null
>> 2>/var/log/glusterfs/mnt-selfheal.log
>>
>> I let it resync and it was working fine, or at least so I thought. I just
>> came back a few days later to see there's a mismatch in the brick
>> volumes. One is 50GB ahead of the other.
>>
>> # gluster volume heal STORAGE info
>> Status: self-heal-daemon is not running on
>> 966456a1-b8a6-4ca8-9da7-d0eb96997cbe
>>
>> /var/log/gluster/glustershd.log doesn't seem to have any recent logs,
>> only those from when the two original gluster servers were running.
>>
>> # gluster volume status
>>
>> Self-heal Daemon on localhost N/A N N/A
>>
>> Any suggestions would be much appreciated!
>>
>> Cheers
>> Andrew.
>>
>
>
>

Re: [Gluster-users] Self-Heal Daemon not Running

2013-09-24 Thread Andrew Lau
Found the BZ https://bugzilla.redhat.com/show_bug.cgi?id=960190 - so I
restarted one of the volumes and it seems to have restarted all the daemons
again.
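
In case it helps someone searching later, these are the kinds of checks I'd use
to confirm the daemon really came back (a sketch; the shd selector for volume
status may depend on the gluster version):

# the Self-heal Daemon should show a PID again
gluster volume status STORAGE shd

# and there should be a glusterfs process carrying the glustershd volfile
ps aux | grep glustershd | grep -v grep

# with the daemon up, kick off a full heal
gluster volume heal STORAGE full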

Self heal started again, but I seem to have split-brain issues everywhere.
There's over 100 different entries on each node, what's the best way to
restore this now? Short of having to manually go through and delete 200+
files. It looks like a full split brain as the file sizes on the different
nodes are out of balance by about 100GB or so.

Any suggestions would be much appreciated!

Cheers.

On Tue, Sep 24, 2013 at 10:32 PM, Andrew Lau  wrote:

> Hi,
>
> Right now, I have a 2x1 replica. Ever since I had to reinstall one of the
> gluster servers, there's been issues with split-brain. The self-heal daemon
> doesn't seem to be running on either of the nodes.
>
> To reinstall the gluster server (the original brick data was intact but
> the OS had to be reinstalled)
> - Reinstalled gluster
> - Copied over the old uuid from backup
> - gluster peer probe
> - gluster volume sync $othernode all
> - mount -t glusterfs localhost:STORAGE /mnt
> - find /mnt -noleaf -print0 | xargs --null stat >/dev/null
> 2>/var/log/glusterfs/mnt-selfheal.log
>
> I let it resync and it was working fine, or at least so I thought. I just came
> back a few days later to see there's a mismatch in the brick volumes. One
> is 50GB ahead of the other.
>
> # gluster volume heal STORAGE info
> Status: self-heal-daemon is not running on
> 966456a1-b8a6-4ca8-9da7-d0eb96997cbe
>
> /var/log/gluster/glustershd.log doesn't seem to have any recent logs, only
> those from when the two original gluster servers were running.
>
> # gluster volume status
>
> Self-heal Daemon on localhost N/A N N/A
>
> Any suggestions would be much appreciated!
>
> Cheers
> Andrew.
>

[Gluster-users] Self-Heal Daemon not Running

2013-09-24 Thread Andrew Lau
Hi,

Right now, I have a 2x1 replica. Ever since I had to reinstall one of the
gluster servers, there's been issues with split-brain. The self-heal daemon
doesn't seem to be running on either of the nodes.

To reinstall the gluster server (the original brick data was intact but the
OS had to be reinstalled)
- Reinstalled gluster
- Copied over the old uuid from backup (see the sketch after this list)
- gluster peer probe
- gluster volume sync $othernode all
- mount -t glusterfs localhost:STORAGE /mnt
- find /mnt -noleaf -print0 | xargs --null stat >/dev/null
2>/var/log/glusterfs/mnt-selfheal.log
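
The uuid step, in a bit more detail, amounts to restoring glusterd.info before
starting glusterd, since that is where glusterd keeps the node's identity (a
sketch with a placeholder value; the real UUID came from the config backup):

service glusterd stop
# /var/lib/glusterd/glusterd.info holds this node's UUID
echo "UUID=<old-uuid-from-backup>" > /var/lib/glusterd/glusterd.info
service glusterd start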

I let it resync and it was working fine, or at least so I thought. I just came
back a few days later to see there's a mismatch in the brick volumes. One
is 50GB ahead of the other.

# gluster volume heal STORAGE info
Status: self-heal-daemon is not running on
966456a1-b8a6-4ca8-9da7-d0eb96997cbe
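
(For what it's worth, that UUID can be matched to a node by comparing it
against the peer list and the local glusterd.info; sketch below.)

# UUIDs of the other peers
gluster peer status

# this node's own UUID
cat /var/lib/glusterd/glusterd.info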

/var/log/gluster/glustershd.log doesn't seem to have any recent logs, only
those from when the two original gluster servers were running.

# gluster volume status

Self-heal Daemon on localhost N/A N N/A

Any suggestions would be much appreciated!

Cheers
Andrew.
_______________________________________________
Gluster-users mailing list
Gluster-users@gluster.org
http://supercolony.gluster.org/mailman/listinfo/gluster-users