On 13 June 2017 at 11:15, Atin Mukherjee wrote:
> This looks like a bug in the error code as the error message is wrong.
> I'll take a look at it and get back.
>
I had a thought (they do happen) and tried some further testing.
root@gh1:~# gluster peer status
Number of Peers: 2
Hostname: gh2.b
On Tue, 13 Jun 2017 at 06:39, Lindsay Mathieson
wrote:
>
> On 13 June 2017 at 02:56, Pranith Kumar Karampuri
> wrote:
>
>> We can also do "gluster peer detach force" right?
>
>
>
> Just to be sure I set up a test 3-node VM gluster cluster :) then shut down
> one of the nodes and tried to remove it.
On 13 June 2017 at 02:56, Pranith Kumar Karampuri
wrote:
> We can also do "gluster peer detach force" right?
Just to be sure I set up a test 3-node VM gluster cluster :) then shut down
one of the nodes and tried to remove it.
root@gh1:~# gluster peer status
Number of Peers: 2
Hostname: gh2.b
On 13/06/2017 2:56 AM, Pranith Kumar Karampuri wrote:
We can also do "gluster peer detach force" right?
Tried that, didn't work - threw an error.
--
Lindsay Mathieson
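A minimal sketch of the detach commands under discussion; the peer hostname is a placeholder rather than one taken from this thread:

# plain detach - only succeeds while the peer is reachable
gluster peer detach <dead-peer-hostname>
# forced detach - the variant tried above, which still threw an error against a down peer
gluster peer detach <dead-peer-hostname> force
# check the remaining pool afterwards
gluster peer status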
On Sun, Jun 11, 2017 at 2:12 PM, Atin Mukherjee wrote:
>
> On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson <
> lindsay.mathie...@gmail.com> wrote:
>
>> On 11/06/2017 10:46 AM, WK wrote:
>> > I thought you had removed vna as defective and then ADDED in vnh as
>> > the replacement?
>> >
>> > Why is vna still there?
On 11/06/2017 9:23 PM, Atin Mukherjee wrote:
Unless server-side quorum is enabled, that's not correct. The I/O path
should stay active even though the management plane is down. We can
still get this done one node after another without bringing down
all glusterd instances at one go but ju
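A quick way to check whether server-side quorum is actually enabled on a volume; the volume name is a placeholder and the grep is just a convenience, not a command quoted from the thread:

# list all options for the volume and pick out the quorum-related ones
gluster volume get <volname> all | grep quorum
# server quorum is only enforced when cluster.server-quorum-type is set to "server"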
On Sun, 11 Jun 2017 at 16:35, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
>
>
> On 11 Jun 2017 at 1:00 PM, "Atin Mukherjee" wrote:
>
> Yes. And please ensure you do this after bringing down all the glusterd
> instances and then once the peer file is removed from all the nodes, restart
> glusterd on all the nodes one after another.
On 11 Jun 2017 at 1:00 PM, "Atin Mukherjee" wrote:
Yes. And please ensure you do this after bringing down all the glusterd
instances and then once the peer file is removed from all the nodes, restart
glusterd on all the nodes one after another.
If you have to bring down all gluster instances b
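Pulled together, the procedure described above would look roughly like this sketch. The UUID is the one reported for the dead peer vna in the peer status output further down; substitute whatever gluster peer status shows on your own pool, and note the service name can differ between distributions:

# 1. stop the management daemon on every surviving node
#    (bricks keep serving I/O as long as server-side quorum is not enabled)
systemctl stop glusterd
# 2. on every surviving node, remove the dead peer's entry
rm /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7
# 3. restart glusterd one node after another
systemctl start glusterd
# 4. confirm the dead peer no longer appears
gluster peer status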
On Sun, 11 Jun 2017 at 16:26, Lindsay Mathieson
wrote:
> On 11/06/2017 6:42 PM, Atin Mukherjee wrote:
>
> If the dead server doesn't host any volumes (bricks of volumes to be
> specific) then you can actually remove the uuid entry from
> /var/lib/glusterd from other nodes
>
> Is that just the file entry in "/var/lib/glusterd/peers" ?
On 11/06/2017 6:42 PM, Atin Mukherjee wrote:
If the dead server doesn't host any volumes (bricks of volumes to be
specific) then you can actually remove the uuid entry from
/var/lib/glusterd from other nodes
Is that just the file entry in "/var/lib/glusterd/peers" ?
e.g. I have:
gluster
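A sketch of one way to identify the entry in question; the grep pattern is only an example, and it assumes the peer's hostname appears verbatim inside the file:

# each file under this directory is named after a peer's UUID
ls /var/lib/glusterd/peers/
# find the entry that references the dead peer's hostname
grep -l vna /var/lib/glusterd/peers/*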
On Sun, 11 Jun 2017 at 16:03, Lindsay Mathieson
wrote:
> On 11/06/2017 6:42 PM, Atin Mukherjee wrote:
> > If the dead server doesn't host any volumes (bricks of volumes to be
> > specific) then you can actually remove the uuid entry from
> > /var/lib/glusterd from other nodes and restart glusterd instances one
> > after another as a workaround.
On 11/06/2017 6:42 PM, Atin Mukherjee wrote:
If the dead server doesn't host any volumes (bricks of volumes to be
specific) then you can actually remove the uuid entry from
/var/lib/glusterd from other nodes and restart glusterd instances one
after another as a workaround.
The server hosted a
On Sun, 11 Jun 2017 at 06:25, Lindsay Mathieson
wrote:
> On 11/06/2017 10:46 AM, WK wrote:
> > I thought you had removed vna as defective and then ADDED in vnh as
> > the replacement?
> >
> > Why is vna still there?
>
> Because I *can't* remove it. It died, was unable to be brought up. The
> gluster peer detach command only works with live servers.
On 6/10/2017 6:01 PM, Lindsay Mathieson wrote:
On 11/06/2017 10:57 AM, Lindsay Mathieson wrote:
I did a
gluster volume set all cluster.server-quorum-ratio 51%
And that has resolved my issue for now as it allows two servers to
form a quorum.
Edit :)
Actually
gluster volume set all cluster.server-quorum-ratio 50%
On 6/10/2017 5:54 PM, Lindsay Mathieson wrote:
On 11/06/2017 10:46 AM, WK wrote:
I thought you had removed vna as defective and then ADDED in vnh as
the replacement?
Why is vna still there?
Because I *can't* remove it. It died, was unable to be brought up. The
gluster peer detach command only works with live servers.
On 11/06/2017 10:57 AM, Lindsay Mathieson wrote:
I did a
gluster volume set all cluster.server-quorum-ratio 51%
And that has resolved my issue for now as it allows two servers to
form a quorum.
Edit :)
Actually
gluster volume set all cluster.server-quorum-ratio 50%
--
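For completeness, the same change with a follow-up check; the verification command and the volume-name placeholder are an assumption, not quoted from the thread:

# allow two of the four peers to satisfy server quorum
gluster volume set all cluster.server-quorum-ratio 50%
# confirm the value took effect
gluster volume get <volname> all | grep server-quorum-ratio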
On 11/06/2017 9:38 AM, Lindsay Mathieson wrote:
Since my node died on Friday I have a dead peer (vna) that needs to be
removed.
I had major issues this morning that I haven't resolved yet, with all
VMs going offline when I rebooted a node, which I *hope* was due to
quorum issues as I now have four peers in the cluster, one dead, three live.
On 11/06/2017 10:46 AM, WK wrote:
I thought you had removed vna as defective and then ADDED in vnh as
the replacement?
Why is vna still there?
Because I *can't* remove it. It died, was unable to be brought up. The
gluster peer detach command only works with live servers - A severe
problem
On 6/10/2017 5:12 PM, Lindsay Mathieson wrote:
Three good nodes - vnb, vng, vnh and one dead - vna
from node vng:
root@vng:~# gluster peer status
Number of Peers: 3
Hostname: vna.proxmox.softlog
Uuid: de673495-8cb2-4328-ba00-0419357c03d7
State: Peer in Cluster (Disconnected)
Hostname: vn
On 11/06/2017 10:01 AM, WK wrote:
You replaced vna with vnd but it is probably not fully healed yet cuz
you had 3.8T worth of chunks to copy.
No, the heal had completed. Finished about 9 hours before I rebooted.
So you had two good nodes (vnb and vng) working and you rebooted one
of them?
On 6/10/2017 4:38 PM, Lindsay Mathieson wrote:
Since my node died on Friday I have a dead peer (vna) that needs to be
removed.
I had major issues this morning that I haven't resolved yet, with all
VMs going offline when I rebooted a node, which I *hope* was due to
quorum issues as I now have four peers in the cluster, one dead, three live.
Since my node died on Friday I have a dead peer (vna) that needs to be
removed.
I had major issues this morning that I haven't resolved yet, with all VMs
going offline when I rebooted a node, which I *hope* was due to quorum
issues as I now have four peers in the cluster, one dead, three live.
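A rough reading of the quorum arithmetic, assuming server-side quorum enforcement was indeed in play and that the default ratio behaves as "more than 50%":

4 peers, ratio > 50%        -> at least 3 peers must be up
vna dead + 1 node rebooting -> only 2 of 4 up -> quorum lost, bricks stopped
ratio lowered to 50%        -> 2 of 4 is enough, so two live servers keep quorum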