Thanks - 
I am on 17.2.5. I was able to get it working via the cephadm shell, a few
zaps and deletes of the /dev/sdX devices, and ceph-volume lvm prepare.
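
For the record, the steps were roughly the following, run from inside the
cephadm shell (the device names here are just examples):

        ceph-volume lvm zap /dev/sdb --destroy
        ceph-volume lvm prepare --data /dev/sdb --block.db /dev/nvme0n1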

I did miss seeing the db_devices part of ceph orch apply - that would have
saved a lot of effort. Does osds_per_device create the partitions on the
db device?
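
For anyone hitting this later, here is the kind of spec I understand the
docs to be describing, applied with "ceph orch apply -i osd_spec.yaml" - a
sketch only, with placeholder host and device names:

        service_type: osd
        service_id: osd_nvme_db
        placement:
          hosts:
            - host1
        spec:
          data_devices:
            paths:
              - /dev/sda
              - /dev/sdb
          db_devices:
            paths:
              - /dev/nvme0n1
          osds_per_device: 1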

Also, is there any way to disable --all-available-devices once it has been turned on?

The 
        ceph orch apply osd --all-available-devices --unmanaged=true

command doesn't seem to stop new drives from being added automatically.
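
In case it helps, the way I have been inspecting the current OSD spec state
is:

        ceph orch ls osd --export

which prints the applied specs, including the unmanaged flag. I believe the
spec could also be removed outright with "ceph orch rm <service_name>", but
I have not verified that on 17.2.5.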


-----Original Message-----
From: Robert Sander <r.san...@heinlein-support.de> 
Sent: Tuesday, March 28, 2023 3:50 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Adding new server to existing ceph cluster - with 
separate block.db on NVME

Hi,

On 28.03.23 05:42, Robert W. Eckert wrote:
> 
> I am trying to add a new server to an existing cluster, but cannot get
> the OSDs to create correctly. When I try cephadm ceph-volume lvm
> create, it returns nothing but the container info.
>

You are running a containerized cluster with the cephadm orchestrator?
Which version?

Have you tried

ceph orch daemon add osd host1:data_devices=/dev/sda,/dev/sdb,db_devices=/dev/nvme0

as shown on https://docs.ceph.com/en/quincy/cephadm/services/osd/ ?

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin 
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io