[ceph-users] Re: Adding new server to existing ceph cluster - with separate block.db on NVME

2023-03-29 Thread Robert Sander

On 29.03.23 01:09, Robert W. Eckert wrote:


> I did miss seeing the db_devices part. For ceph orch apply - that would
> have saved a lot of effort. Does osds_per_device create the partitions
> on the db device?


No, osds_per_device creates multiple OSDs on one data device. This can be 
useful for NVMe drives; do not use it on HDDs.


The command automatically creates the number of db slots on the 
db_device based on how many data_devices you pass it.


If you want more slots for the RocksDB volumes, pass the db_slots parameter.
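For reference, the same layout can also be declared as an OSD service 
specification and applied with "ceph orch apply -i". This is only a sketch; 
the service_id, placement pattern, and slot count are made-up examples to 
adjust for your cluster:

```yaml
# Hypothetical OSD spec: rotational disks as data devices, with
# block.db placed on the non-rotational (NVMe) device, split into
# four DB slots per db device.
service_type: osd
service_id: osd_with_nvme_db
placement:
  host_pattern: '*'        # example placement, narrow to your hosts
spec:
  data_devices:
    rotational: 1          # use the HDDs as data devices
  db_devices:
    rotational: 0          # put RocksDB/block.db on the NVMe
  db_slots: 4              # carve four DB slots out of each db device
```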


> Also is there any way to disable --all-available-devices if it was
> turned on?
>
> The
> ceph orch apply osd --all-available-devices --unmanaged=true
>
> command doesn't seem to disable the behavior of adding new drives.


You can set the service to unmanaged when exporting the specification.

ceph orch ls osd --export > osd.yml

Edit osd.yml and add "unmanaged: true" to the specification. After that

ceph orch apply -i osd.yml

Or you could just remove the specification with "ceph orch rm NAME".
The OSD service will be removed but the OSD will remain.
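The exported specification with the added flag would look roughly like this 
(the service_id and device filter are examples from a hypothetical cluster, 
not your actual export):

```yaml
# osd.yml after "ceph orch ls osd --export", with unmanaged added
service_type: osd
service_id: all-available-devices
unmanaged: true          # stop the orchestrator from auto-creating OSDs
placement:
  host_pattern: '*'
spec:
  data_devices:
    all: true
```

Re-apply it with "ceph orch apply -i osd.yml".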

Regards
--
Robert Sander
Heinlein Consulting GmbH
Schwedter Str. 8/9b, 10119 Berlin

https://www.heinlein-support.de

Tel: 030 / 405051-43
Fax: 030 / 405051-19

Amtsgericht Berlin-Charlottenburg - HRB 220009 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Adding new server to existing ceph cluster - with separate block.db on NVME

2023-03-28 Thread Robert W. Eckert
Thanks - 
I am on 17.2.5. I was able to get it working via cephadm shell, a few zaps 
and deletes of the /dev/sdX devices, and ceph-volume lvm prepare.

I did miss seeing the db_devices part. For ceph orch apply - that would have 
saved a lot of effort. Does osds_per_device create the partitions on the 
db device?

Also, is there any way to disable --all-available-devices if it was turned on?

The 
ceph orch apply osd --all-available-devices --unmanaged=true

command doesn't seem to disable the behavior of adding new drives.


-Original Message-
From: Robert Sander  
Sent: Tuesday, March 28, 2023 3:50 AM
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Adding new server to existing ceph cluster - with 
separate block.db on NVME

Hi,

On 28.03.23 05:42, Robert W. Eckert wrote:
> 
> I am trying to add a new server to an existing cluster, but cannot get
> the OSDs to create correctly. When I try cephadm ceph-volume lvm
> create, it returns nothing but the container info.
>

You are running a containerized cluster with the cephadm orchestrator?
Which version?

Have you tried

ceph orch daemon add osd 
host1:data_devices=/dev/sda,/dev/sdb,db_devices=/dev/nvme0

as shown on https://docs.ceph.com/en/quincy/cephadm/services/osd/ ?

Regards
--
Robert Sander


[ceph-users] Re: Adding new server to existing ceph cluster - with separate block.db on NVME

2023-03-28 Thread Robert Sander

Hi,

On 28.03.23 05:42, Robert W. Eckert wrote:


I am trying to add a new server to an existing cluster, but cannot get the 
OSDs to create correctly. When I try cephadm ceph-volume lvm create, it 
returns nothing but the container info.



You are running a containerized cluster with the cephadm orchestrator?
Which version?

Have you tried

ceph orch daemon add osd 
host1:data_devices=/dev/sda,/dev/sdb,db_devices=/dev/nvme0

as shown on https://docs.ceph.com/en/quincy/cephadm/services/osd/ ?

Regards
--
Robert Sander