You might want to remove

encrypted: true

from both OSD specs below.

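One way to do that, assuming the specs live in a file (osd_specs.yaml is only an
example name for this sketch):

# remove the encrypted flag from both specs and re-apply them
sed -i '/encrypted: true/d' osd_specs.yaml
ceph orch apply -i osd_specs.yaml

As far as I know, OSDs that already exist keep their current settings; the change
only affects OSDs created after the spec is re-applied.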
BR Stephan

On Tue, 30 Sept 2025 at 14:57, Stephan Hohn <[email protected]> wrote:

> Hi Vivien,
>
> You need to create the OSDs first, then the corresponding crush rules, and
> after that the pools that use those rules; a sketch of how to apply the specs
> follows the YAML below.
>
> This might work if each host has 12 HDDs and 8 SSDs:
>
> # OSD spec yaml
>
> ---
> service_type: osd
> service_id: osd_1_ssd
> placement:
>   host_pattern: '*'
> spec:
>   data_devices:
>     rotational: 0
>     limit: 6
>   encrypted: true
> ---
> service_type: osd
> service_id: osd_2_hdd_blockdb_on_ssd
> placement:
>   host_pattern: '*'
> spec:
>   data_devices:
>     rotational: 1
>     limit: 6
>   encrypted: true
>   block_db_size: 549755813888
>   db_slots: 3
>   db_devices:
>     rotational: 0
> ---
> service_type: osd
> service_id: osd_3_hddonly
> placement:
>   host_pattern: '*'
> spec:
>   data_devices:
>     rotational: 1
>     limit: 6
>   crush_device_class: hddonly
>
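> A possible way to preview and apply the specs above (osd_specs.yaml is only an
> example file name):
>
> ceph orch device ls --refresh                 # check which disks cephadm sees and their rotational flag
> ceph orch apply -i osd_specs.yaml --dry-run   # preview the OSDs cephadm would create
> ceph orch apply -i osd_specs.yaml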
>
> # Crush rules
>
> ceph osd crush rule create-replicated replicated_ssd default host ssd
> ceph osd crush rule create-replicated replicated_hdd default host hdd
> ceph osd crush rule create-replicated replicated_hddonly default host hddonly
>
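> To double-check the rules and device classes (the hddonly class only shows up
> once OSDs with that class have been created):
>
> ceph osd crush class ls        # should list hdd, ssd and hddonly
> ceph osd crush rule ls
> ceph osd tree                  # the device class appears in the CLASS column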
>
> # Pools
>
> ~# ceph osd pool create {pool-name} [{pg-num} [{pgp-num}]] [replicated] [crush-rule-name]
>
> ceph osd pool create ssd-pool 32 32 replicated_ssd
> ceph osd pool create hdd-pool 32 32 replicated_hdd
> ceph osd pool create hddonly-pool 32 32 replicated_hddonly
>
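> To verify which rule each pool ended up with, and to tag a pool with an
> application (rbd here is just an example), something like:
>
> ceph osd pool get ssd-pool crush_rule
> ceph osd pool application enable ssd-pool rbd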
>
> Best Stephan
>
>
>
> On Tue, 30 Sept 2025 at 14:06, GLE, Vivien <[email protected]> wrote:
>
>> Hi,
>>
>>
>> For testing purposes we need to deploy:
>>
>> - 1 pool of 6 SSD OSDs
>>
>> - 1 pool of 6 HDD OSDs
>>
>> - 1 pool of 6 HDD OSDs with 2 SSDs for DB+WAL
>>
>>
>> I tried to apply this YAML with ceph orch apply, but it doesn't work as expected.
>>
>>
>> OSD part of the YAML:
>>
>>
>> service_type: osd
>> service_id: osd_spec
>> placement:
>>   hosts:
>>     -  host1
>>     -  host2
>>     -  host3
>>     -  host4
>>     -  host5
>>     -  host6
>> data_devices:
>>   paths:
>>     - /dev/sda
>>     - /dev/sdc
>> spec:
>>   data_devices:
>>     all: true
>>   filter_logic: AND
>>   objectstore: bluestore
>> ---
>> service_type: osd
>> service_id: osd_spec_wall
>> placement:
>>   hosts:
>>     -  host1
>>     -  host2
>>     -  host3
>>     -  host4
>>     -  host5
>>     -  host6
>>
>> spec:
>>   data_devices:
>>     paths:
>>       - /dev/sdf
>>   db_devices:
>>     paths:
>>       - /dev/sde
>>     limit: 2
>>   db_slots: 3
>>
>>
>>
>> Only one DB on /dev/sde from host1 was created, and that OSD showed up as
>> half full at creation:
>>
>> ceph osd df | grep 25
>>
>> 25    hdd  0.63669   1.00000  652 GiB  373 GiB  1.5 MiB   1 KiB   38 MiB
>> 279 GiB  57.15  35.93    0      up
>>
>> ceph-volume lvm list
>>
>> ====== osd.25 ======
>>
>>   [block]       /dev/ceph-23d9297a-d0e1-47be-8650-5c8ccae4fe0e/osd-block-2f009760-fc2b-46d5-984d-e8200dfd9d9d
>>
>>       block device              /dev/ceph-23d9297a-d0e1-47be-8650-5c8ccae4fe0e/osd-block-2f009760-fc2b-46d5-984d-e8200dfd9d9d
>>       block uuid                vqJGf8-V5g0-S1cA-BAcN-Qm9D-9VTx-xsc8wk
>>       cephx lockbox secret
>>       cluster fsid              id
>>       cluster name              ceph
>>       crush device class
>>       db device                 /dev/ceph-9c39b87c-a39c-413f-b1ef-07881195fcb8/osd-db-feee5095-b5e7-47a0-ae87-8e5039512661
>>       db uuid                   Rtj5KA-Qxjk-IFmY-3ffQ-gSAu-Snte-uku8HC
>>       encrypted                 0
>>       osd fsid                  2f009760-fc2b-46d5-984d-e8200dfd9d9d
>>       osd id                    25
>>       osdspec affinity          osd_spec_wall
>>       type                      block
>>       vdo                       0
>>       with tpm                  0
>>       devices                   /dev/sdf
>>
>>   [db]          /dev/ceph-9c39b87c-a39c-413f-b1ef-07881195fcb8/osd-db-feee5095-b5e7-47a0-ae87-8e5039512661
>>
>>       block device              /dev/ceph-23d9297a-d0e1-47be-8650-5c8ccae4fe0e/osd-block-2f009760-fc2b-46d5-984d-e8200dfd9d9d
>>       block uuid                vqJGf8-V5g0-S1cA-BAcN-Qm9D-9VTx-xsc8wk
>>       cephx lockbox secret
>>       cluster fsid              id
>>       cluster name              ceph
>>       crush device class
>>       db device                 /dev/ceph-9c39b87c-a39c-413f-b1ef-07881195fcb8/osd-db-feee5095-b5e7-47a0-ae87-8e5039512661
>>       db uuid                   Rtj5KA-Qxjk-IFmY-3ffQ-gSAu-Snte-uku8HC
>>       encrypted                 0
>>       osd fsid                  2f009760-fc2b-46d5-984d-e8200dfd9d9d
>>       osd id                    25
>>       osdspec affinity          osd_spec_wall
>>       type                      db
>>       vdo                       0
>>       with tpm                  0
>>       devices                   /dev/sde
>>
>>
>>
>> The /dev/sde devices on the other hosts showed up as data_devices instead of
>> db_devices (example here from host2):
>>
>> ceph-volume lvm list
>>
>> ====== osd.17 ======
>>
>>   [block]       /dev/ceph-629f98b0-5ed4-4e75-81b9-e85ca76afb15/osd-block-5d43d683-1f7f-4dc1-935e-6a79745252f9
>>
>>       block device              /dev/ceph-629f98b0-5ed4-4e75-81b9-e85ca76afb15/osd-block-5d43d683-1f7f-4dc1-935e-6a79745252f9
>>       block uuid                HQpp1l-x7IB-kA2W-6gWO-BGlM-VN2k-vYf43R
>>       cephx lockbox secret
>>       cluster fsid              id
>>       cluster name              ceph
>>       crush device class
>>       encrypted                 0
>>       osd fsid                  5d43d683-1f7f-4dc1-935e-6a79745252f9
>>       osd id                    17
>>       osdspec affinity          osd_spec
>>       type                      block
>>       vdo                       0
>>       with tpm                  0
>>       devices                   /dev/sde
>>
>> Thx for your help
>> Vivien
>>
>
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
