Hi.

I'm currently trying out cephadm, and I got into a state that was a bit
unexpected for me.

I created three host machines in VirtualBox to try out cephadm. All the drives
I made for OSDs are 20 GB in size, for simplicity.

I bootstrapped one host with one drive and then added the other two hosts.
Cephadm immediately consumed all available drives, so at that point I had
three 20 GB OSDs.

The GUI said I could add WAL and DB devices, but I never figured out how to
do that from the GUI, so I tried to do it manually:

ceph osd destroy osd.2 --force
ceph-volume lvm zap --destroy /dev/sdb
ceph auth get client.bootstrap-osd > /var/lib/ceph/bootstrap-osd/ceph.keyring
ceph-volume lvm prepare --data /dev/sdb --block.db /dev/sdc --block.wal /dev/sdd
ceph-volume lvm activate 2 d4a590eb-c0f6-47bc-a5fa-221bf8541e09

It worked, and I got the new OSD registered, but the strange thing was that
it was 40 GB and half full. Is this expected?
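For what it's worth, my guess (purely an assumption about how the size is
being reported, not something I've confirmed) is that the reported capacity
sums the data and DB devices and counts the DB device as already used, in
which case the numbers would come out exactly as I'm seeing:

```python
# Hypothetical arithmetic, assuming the reported OSD size is
# data device + DB device, with the DB device counted as used space.
data_gb = 20  # /dev/sdb (--data)
db_gb = 20    # /dev/sdc (--block.db)

total_gb = data_gb + db_gb            # reported size
used_gb = db_gb                       # DB device shown as allocated
used_pct = 100 * used_gb / total_gb

print(total_gb, used_pct)  # → 40 50.0, i.e. 40 GB and half full
```

If that's what is happening, the OSD would still only have 20 GB of usable
data capacity, but I'd like someone to confirm.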

Best regards
Daniel
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
