Hi Marco,
On 12/1/22 at 14:57, Marco Witte wrote:
One WAL drive was failing, so I replaced it with:
pveceph osd destroy 17 --cleanup 1
pveceph osd destroy 18 --cleanup 1
pveceph osd destroy 19 --cleanup 1
This removed the three OSDs and their WAL.
The drive sdf is the replacement for the failing WAL device that the
three OSDs above (sdb, sdc, sdd) used:
pveceph osd create /dev/sdb -wal_dev /dev/sdf
pveceph osd create /dev/sdc -wal_dev /dev/sdf
pveceph osd create /dev/sdd -wal_dev /dev/sdf
This approach worked fine, but took a lot of time.
So I figured it would be better to change the WAL for the existing OSDs.
At this point /dev/sdf is completely empty (wiped, no LVM on it) and all
three OSDs still use the failing WAL device /dev/sdh.
ceph-volume lvm new-wal --osd-id 17 --osd-fsid 01234567-1234-1234-123456789012 --target /dev/sdf
This obviously fails, because the target should be of the form
--target vgname/new_wal.
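I guess the missing step is to create an LV on /dev/sdf first and point
--target at it. A rough sketch of what I have in mind (untested; the
VG/LV names are placeholders I made up, and <osd-fsid> is the OSD's fsid):

# create a PV/VG on the new device, then one WAL LV per OSD
pvcreate /dev/sdf
vgcreate ceph-wal /dev/sdf
lvcreate -l 33%FREE -n wal-17 ceph-wal
# the target must be given as vgname/lvname
ceph-volume lvm new-wal --osd-id 17 --osd-fsid <osd-fsid> --target ceph-wal/wal-17
# (as far as I can tell, new-wal only works while the OSD is stopped,
# and refuses to run if the OSD already has a WAL attached)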
Question part:
What would be the fastest way to make the new device /dev/sdf the WAL
device, without destroying OSDs 17, 18 and 19?
I have moved WAL/DB volumes between physical partitions to increase
their size on some old upgraded clusters. Your case should be similar;
try searching for "Ceph bluestore db resize".
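If your Ceph is recent enough (Pacific or later, if I remember
correctly), ceph-volume can move the BlueFS WAL for you. A rough,
untested sketch, reusing a placeholder LV like the ceph-wal/wal-17 from
your sketch above ("ceph-volume lvm list" shows each OSD's fsid):

systemctl stop ceph-osd@17
# move the BlueFS WAL from the failing device to the new LV;
# the old WAL volume is released on success
ceph-volume lvm migrate --osd-id 17 --osd-fsid <osd-fsid> --from wal --target ceph-wal/wal-17
systemctl start ceph-osd@17
# repeat for OSDs 18 and 19, each with its own LV

On older releases I believe ceph-bluestore-tool bluefs-bdev-migrate can
do the same move, but then you have to fix the ceph.wal_device LVM tags
yourself afterwards.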
Otherwise I can send you my procedure with Spanish comments... :)
Cheers
Eneko Lacunza
Technical Director
Binovo IT Human Project
Tel. +34 943 569 206 | https://www.binovo.es
Astigarragako Bidea, 2 - 2º izda. Oficina 10-11, 20180 Oiartzun
https://www.youtube.com/user/CANALBINOVO
https://www.linkedin.com/company/37269706/