- On 16-Sep-19, at 14:49, Ronny Aasen ronny+pve-u...@aasen.cx wrote:
> With 2 rooms there is no way to avoid a split-brain situation unless you
> have a tie breaker outside one of those 2 rooms.
>
> Running a Mon in a neutral third location is the quick, correct, and simple
> solution.
With 2 rooms there is no way to avoid a split-brain situation unless you
have a tie breaker outside one of those 2 rooms.
Running a Mon in a neutral third location is the quick, correct, and simple
solution.
Or
you need to have a master-slave situation where one room is the master
(3 mons) and
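As a rough sketch of that tie-breaker idea, this is what adding a fifth MON on a
host at the neutral third site could look like (the placement follows the advice
above; the exact command and the 2+2+1 layout are assumptions about the setup,
not something stated in this thread):

    # on the Proxmox node located at the third site:
    pveceph mon create        # "pveceph createmon" on older 5.x installs
    # check that all monitors are back in quorum:
    ceph quorum_status

With a 2+2+1 monitor layout, either room together with the third-site MON still
holds 3 of 5 monitors, so losing a single room no longer costs MON quorum.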
Thank you Humberto, but my problem is not related to Proxmox quorum, but to Ceph
MON quorum.
Regards, Fabrizio
- On 16-Sep-19, at 12:58, Humberto Jose De Sousa
wrote:
> Hi.
> You could try the qdevice:
> https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support
Another 5.3 fix that might be interesting for some is
https://github.com/lxc/lxd/issues/5193#issuecomment-502857830 which allows (or
takes us one step closer to) running a kubelet in LXC containers.
On 16.09.19, 12:55, "pve-user on behalf of Gilberto Nunes"
wrote:
Oh! Sorry! I didn't
--- Begin Message ---
Hi.
You could try the qdevice:
https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support
Humberto
De: "Fabrizio Cuseo"
Para: "pve-user"
Enviadas: Sexta-feira, 13 de setembro de 2019 16:42:06
Assunto: [PVE-User] Ceph MON quorum problem
H
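As a hedged sketch of the QDevice setup that link describes (the IP is a
placeholder, and note this only adds a vote for the Proxmox/corosync quorum, not
for the Ceph MONs):

    # on the external vote host, outside both rooms:
    apt install corosync-qnetd
    # on every cluster node:
    apt install corosync-qdevice
    # then, from any one cluster node:
    pvecm qdevice setup 192.0.2.10
    pvecm status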
Oh! Sorry! I didn't send the link I was referring to:
https://www.phoronix.com/scan.php?page=news_item&px=Ceph-Linux-5.3-Changes
---
Gilberto Nunes Ferreira
(47) 3025-5907
(47) 99676-7530 - Whatsapp / Telegram
Skype: gilberto.nunes36
On Mon, Sep 16, 2019 at 05:50, Ronny Aasen
wrote:
On 15.09.2019 22:55, Joe Garvey wrote:
Hello all,
I had to reboot a QEMU-based VM yesterday and after rebooting it reported there
was no boot disk. The disk has lost all of its content. There aren't
even any partitions. I booted the VM with Acronis disk recovery and it showed
the di
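A hedged first step in a case like this, assuming file-based storage (the path
and VM ID below are placeholders, not from this report), is to inspect the
backing image from the host before writing anything to it:

    # on the PVE host; adjust storage path and VM ID to the real setup:
    qemu-img info  /var/lib/vz/images/100/vm-100-disk-0.qcow2
    qemu-img check /var/lib/vz/images/100/vm-100-disk-0.qcow2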
On 16.09.2019 03:17, Gilberto Nunes wrote:
Hi there
I read this about kernel 5.3 and Ceph, and I am curious...
I have a 6-node Proxmox Ceph cluster with Luminous...
Would it be a good idea to use kernel 5.3 from here:
https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.3/
---
Gilberto Nunes Ferreira
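Whether the 5.3 Ceph changes matter here depends mainly on whether the kernel
Ceph clients (krbd, kernel CephFS) are used at all; plain QEMU guests on RBD go
through librbd instead. A quick hedged check (standard commands, but whether
they apply to this cluster is an assumption):

    uname -r          # kernel currently running
    rbd showmapped    # empty output means no krbd-mapped images
    findmnt -t ceph   # empty output means no kernel CephFS mounts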
Hi,
After upgrading our 4-node cluster from PVE 5 to 6, we experience
constant crashes (once every 2 days).
Those crashes seem related to corosync.
Since numerous users are reporting such issues (broken cluster after
upgrade, instabilities, ...), I wonder if it is possible to downgrade
corosync
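On the downgrade idea: PVE 6 itself depends on the corosync 3 / kronosnet stack,
so going back to the 2.x series shipped with PVE 5 is generally not an option;
what can be done is to check which 3.x builds apt offers and pin one of them. A
hedged sketch (the version string is a placeholder):

    apt-cache policy corosync              # list the versions apt can see
    apt install corosync=<older-3.x-pve>   # placeholder version string
    apt-mark hold corosync                 # keep apt from upgrading it again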