Could somebody kindly help me with this issue?
Thanks & Regards,
Sangeeta Ramapure
*From:* Sangeeta Ramapure [mailto:sangeeta.ramap...@globallogic.com]
*Sent:* June 09, 2017 4:41 PM
*To:* 'gluster-users@gluster.org'
*Cc:* 'devara...@ericsson.com'
*Subject:* After gluster clean up sub directories b
On 9/06/2017 5:54 PM, Lindsay Mathieson wrote:
I've started the process as above and it seems to be going OK, though the
cluster is going to be unusable for the next couple of days.
Just as an update: I was mistaken about this. The cluster was actually quite
usable while this was going on, except for on the new ser
On 11/06/2017 9:23 PM, Atin Mukherjee wrote:
That's not correct unless server-side quorum is enabled. The I/O path
should stay active even though the management plane is down. We can
still get this done one node after another without bringing down
all glusterd instances at one go, but ju
On Sun, 11 Jun 2017 at 16:35, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> On 11 Jun 2017 at 1:00 PM, "Atin Mukherjee" wrote:
>
> Yes. And please ensure you do this after bringing down all the glusterd
> instances and then once the peer file is removed from all the node
On 11 Jun 2017 at 1:00 PM, "Atin Mukherjee" wrote:
Yes. And please ensure you do this after bringing down all the glusterd
instances; then, once the peer file is removed from all the nodes, restart
glusterd on all the nodes one after another.
If you have to bring down all gluster instances b
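The sequence described above (stop glusterd everywhere, delete the dead peer's file on each node, then restart glusterd node by node) can be sketched as a small shell helper. This is an illustrative sketch, not a GlusterFS tool: the `remove_dead_peer` function is hypothetical, and the dead node's UUID must be taken from `gluster peer status` on a live node. `/var/lib/glusterd/peers` is the stock glusterd peer store.

```shell
# Hypothetical helper sketching the workaround: delete the stale peer file,
# which is named after the dead node's UUID. Run on every surviving node
# while glusterd is stopped on ALL of them (management plane only; bricks
# keep serving I/O unless server-side quorum is enabled).
remove_dead_peer() {
    peers_dir="$1"   # normally /var/lib/glusterd/peers
    dead_uuid="$2"   # take this from `gluster peer status` on a live node
    rm -f "$peers_dir/$dead_uuid"
}

# Outline of the full procedure (adapt before running against production):
#   systemctl stop glusterd                              # on every node first
#   remove_dead_peer /var/lib/glusterd/peers <dead-uuid> # on every node
#   systemctl start glusterd                             # one node after another
```

Note that `rm -f` is idempotent, so re-running the helper on a node where the file is already gone is harmless.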
On Sun, 11 Jun 2017 at 16:26, Lindsay Mathieson
wrote:
> On 11/06/2017 6:42 PM, Atin Mukherjee wrote:
>
> If the dead server doesn't host any volumes (bricks of volumes to be
> specific) then you can actually remove the uuid entry from
> /var/lib/glusterd from other nodes
>
> Is that just the fil
On 11/06/2017 6:42 PM, Atin Mukherjee wrote:
If the dead server doesn't host any volumes (bricks of volumes to be
specific) then you can actually remove the uuid entry from
/var/lib/glusterd from other nodes
Is that just the file entry in "/var/lib/glusterd/peers" ?
e.g. I have:
gluster
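For context on the question above: glusterd keeps one flat file per peer under `/var/lib/glusterd/peers`, named after that peer's UUID and holding `uuid=`, `state=`, and `hostname1=` lines. A small sketch for dumping them follows; the `print_peers` helper and the sample hostname are illustrative, not gluster commands.

```shell
# Hypothetical helper: list every peer file in the given directory
# (normally /var/lib/glusterd/peers) and print its contents.
print_peers() {
    for f in "$1"/*; do
        [ -f "$f" ] || continue
        echo "== ${f##*/}"      # the filename is the peer's UUID
        cat "$f"                # uuid= / state= / hostname1= lines
    done
}

# Usage on a node: print_peers /var/lib/glusterd/peers
```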
On Sun, 11 Jun 2017 at 16:03, Lindsay Mathieson
wrote:
> On 11/06/2017 6:42 PM, Atin Mukherjee wrote:
> > If the dead server doesn't host any volumes (bricks of volumes to be
> > specific) then you can actually remove the uuid entry from
> > /var/lib/glusterd from other nodes and restart glusterd
On 11/06/2017 6:42 PM, Atin Mukherjee wrote:
If the dead server doesn't host any volumes (bricks of volumes to be
specific) then you can actually remove the uuid entry from
/var/lib/glusterd from other nodes and restart glusterd instances one
after another as a workaround.
The server hosted a
On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson
wrote:
> On 11/06/2017 10:46 AM, WK wrote:
> > I thought you had removed vna as defective and then ADDED in vnh as
> > the replacement?
> >
> > Why is vna still there?
>
> Because I *can't* remove it. It died and could not be brought back up. The
> glus