[ceph-users] Re: How to replace an HDD in a OSD with shared SSD for DB/WAL

2023-04-25 Thread enochlew
Thank you for your suggestion!
I have already deleted the LVs for both the block and the DB devices.
I monitored the creation of osd.23 with the command "podman ps -a".
The osd.23 container appeared for a short time, then was removed.
The command "ceph orch daemon add osd compute11:data_devices=/dev/sda,db_devices=/dev/sdc,osds_per_device=1" reported "Created osd(s) 23 on host 'compute11'".
The OSD is shown in the dashboard, but it is always down and cannot be started.
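For reference, a minimal diagnostic sketch (to be run on compute11, assuming osd.23 as above; the names are taken from this thread, not verified against the cluster):

#cephadm logs --name osd.23
#cephadm ceph-volume lvm list
#ceph log last cephadm

The first command shows the journal of the failing osd.23 container, the second should show whether the new block LV on /dev/sda was actually paired with a DB LV on /dev/sdc, and the third shows what the orchestrator logged while creating osd.23.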
Thanks again.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] How to replace an HDD in a OSD with shared SSD for DB/WAL

2023-04-25 Thread enochlew
Hi,

I built a Ceph cluster with cephadm.
Every Ceph node has 4 OSDs. These 4 OSDs were built from 4 HDDs (block) and 1 SSD (DB).
At present, one HDD is broken, and I am trying to replace it and rebuild the OSD with the new HDD and the free space on the SSD. I did the following:

#ceph osd stop osd.23
#ceph osd out osd.23
#ceph osd crush remove osd.23
#ceph osd rm osd.23
#ceph orch daemon rm osd.23 --force
#lvremove /dev/ceph-ae21e618-601e-4273-9185-99180edb8453/osd-block-96eda371-1a3f-4139-9123-24ec1ba362c4
#wipefs -af /dev/sda
#lvremove /dev/ceph-e50203a6-8b8e-480f-965c-790e21515395/osd-db-70f7a032-cf2c-4964-b979-2b90f43f2216
#ceph orch daemon add osd compute11:data_devices=/dev/sda,db_devices=/dev/sdc,osds_per_device=1

The OSD can be built, but is always down.
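For reference, a sketch of the orchestrator-driven replacement flow (a hedged outline only, assuming OSD id 23 and host compute11 as above; the --replace and --zap options of "ceph orch osd rm" should be checked against the installed Ceph release):

#ceph orch osd rm 23 --replace --zap

This marks osd.23 as destroyed so its id can be reused, and lets cephadm clean up the old block and DB LVs itself. After the broken HDD has been swapped, the OSD can be re-added against the freed DB space:

#ceph orch daemon add osd compute11:data_devices=/dev/sda,db_devices=/dev/sdc,osds_per_device=1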

Is there anything that I missed during the rebuild?

Thank you very much!

Regards,

LIUTao
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io