On 1/11/23 14:46, Eneko Lacunza via pve-user wrote:
Hi,

On 11/1/23 at 12:19, Piviul wrote:
On 1/11/23 10:39, Eneko Lacunza via pve-user wrote:
You should change your public_network to 192.168.255.0/24.

So the public_network is the PVE communication network? Can I edit /etc/pve/ceph.conf directly, and will corosync then propagate the change to ceph.conf on the other nodes?

Sorry, I misread your info:

$ ip route
default via 192.168.64.1 dev vmbr0 proto kernel onlink
192.168.64.0/20 dev vmbr0 proto kernel scope link src 192.168.70.30
192.168.254.0/24 dev vmbr2 proto kernel scope link src 192.168.254.1
192.168.255.0/24 dev vmbr1 proto kernel scope link src 192.168.255.1

vmbr2 is the CEPH network, vmbr1 is the PVE network and vmbr0 is the LAN network. So you suggest that I first add the 3 ceph monitors using the CEPH network IPs and then destroy the 3 monitors that have LAN IPs?
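I guess with the pveceph tools that would be something along these lines on each node, one at a time and waiting for quorum in between (pve01 here is just a placeholder for the node/mon name)?

    pveceph mon destroy pve01   # drop the monitor still listening on the LAN IP
    pveceph mon create          # recreate it so it binds to the new public_network
    ceph -s                     # check that all monitors are back in quorum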

You should set it to 192.168.254.0/24, as that's your ceph net.
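I.e. in the [global] section of /etc/pve/ceph.conf something like this (just a sketch of the relevant line, keep your other settings as they are):

    [global]
        ...
        public_network = 192.168.254.0/24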

Many thanks Eneko, so in ceph.conf I have to set both cluster_network and public_network to the same subnet? Furthermore, one last question... to change the content of ceph.conf, can I just edit it on one of the pve nodes?
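I.e., if both Ceph public and cluster traffic are to stay on vmbr2, something like this (again just a sketch of the relevant lines)?

    [global]
        cluster_network = 192.168.254.0/24
        public_network = 192.168.254.0/24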

Piviul
_______________________________________________
pve-user mailing list
[email protected]
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
