Hi,

For testing purposes we need to deploy:


- 1 pool of 6 SSD OSDs

- 1 pool of 6 HDD OSDs

- 1 pool of 6 HDD OSDs with 2 SSDs for DB+WAL
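
(For the pool part, the mapping would go through CRUSH rules per device class, roughly like the sketch below; the rule and pool names are placeholders, and the two HDD-backed pools would still have to be told apart, e.g. with a custom device class, which is not shown here.)

# sketch only: one rule per device class, then one pool per rule
ceph osd crush rule create-replicated rule_ssd default host ssd
ceph osd crush rule create-replicated rule_hdd default host hdd
ceph osd pool create pool_ssd 128 128 replicated rule_ssd
ceph osd pool create pool_hdd 128 128 replicated rule_hdd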


I tried to apply this YAML with ceph orch apply, but it doesn't work as expected.


OSD part of the YAML:


service_type: osd
service_id: osd_spec
placement:
  hosts:
    -  host1
    -  host2
    -  host3
    -  host4
    -  host5
    -  host6
data_devices:
  paths:
    - /dev/sda
    - /dev/sdc
spec:
  data_devices:
    all: true
  filter_logic: AND
  objectstore: bluestore
---
service_type: osd
service_id: osd_spec_wall
placement:
  hosts:
    -  host1
    -  host2
    -  host3
    -  host4
    -  host5
    -  host6

spec:
  data_devices:
    paths:
      - /dev/sdf
  db_devices:
    paths:
      - /dev/sde
    limit: 2
  db_slots: 3
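
(For what it's worth, "ceph orch apply -i <spec file> --dry-run" prints a preview of which devices would be picked on each host; I can post that output too if useful.)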



Only one DB on /dev/sde, on host1, has been created, and this OSD showed up as half full at its creation:

ceph osd df | grep 25

25    hdd  0.63669   1.00000  652 GiB  373 GiB  1.5 MiB   1 KiB   38 MiB  279 GiB  57.15  35.93    0      up

ceph-volume lvm list

====== osd.25 ======

  [block]       /dev/ceph-23d9297a-d0e1-47be-8650-5c8ccae4fe0e/osd-block-2f009760-fc2b-46d5-984d-e8200dfd9d9d

      block device              /dev/ceph-23d9297a-d0e1-47be-8650-5c8ccae4fe0e/osd-block-2f009760-fc2b-46d5-984d-e8200dfd9d9d
      block uuid                vqJGf8-V5g0-S1cA-BAcN-Qm9D-9VTx-xsc8wk
      cephx lockbox secret
      cluster fsid              id
      cluster name              ceph
      crush device class
      db device                 /dev/ceph-9c39b87c-a39c-413f-b1ef-07881195fcb8/osd-db-feee5095-b5e7-47a0-ae87-8e5039512661
      db uuid                   Rtj5KA-Qxjk-IFmY-3ffQ-gSAu-Snte-uku8HC
      encrypted                 0
      osd fsid                  2f009760-fc2b-46d5-984d-e8200dfd9d9d
      osd id                    25
      osdspec affinity          osd_spec_wall
      type                      block
      vdo                       0
      with tpm                  0
      devices                   /dev/sdf

  [db]          /dev/ceph-9c39b87c-a39c-413f-b1ef-07881195fcb8/osd-db-feee5095-b5e7-47a0-ae87-8e5039512661

      block device              /dev/ceph-23d9297a-d0e1-47be-8650-5c8ccae4fe0e/osd-block-2f009760-fc2b-46d5-984d-e8200dfd9d9d
      block uuid                vqJGf8-V5g0-S1cA-BAcN-Qm9D-9VTx-xsc8wk
      cephx lockbox secret
      cluster fsid              id
      cluster name              ceph
      crush device class
      db device                 /dev/ceph-9c39b87c-a39c-413f-b1ef-07881195fcb8/osd-db-feee5095-b5e7-47a0-ae87-8e5039512661
      db uuid                   Rtj5KA-Qxjk-IFmY-3ffQ-gSAu-Snte-uku8HC
      encrypted                 0
      osd fsid                  2f009760-fc2b-46d5-984d-e8200dfd9d9d
      osd id                    25
      osdspec affinity          osd_spec_wall
      type                      db
      vdo                       0
      with tpm                  0
      devices                   /dev/sde



On the other hosts, /dev/sde showed up as a data_device instead of a db_device (example here from host2):

ceph-volume lvm list

====== osd.17 ======

  [block]       /dev/ceph-629f98b0-5ed4-4e75-81b9-e85ca76afb15/osd-block-5d43d683-1f7f-4dc1-935e-6a79745252f9

      block device              /dev/ceph-629f98b0-5ed4-4e75-81b9-e85ca76afb15/osd-block-5d43d683-1f7f-4dc1-935e-6a79745252f9
      block uuid                HQpp1l-x7IB-kA2W-6gWO-BGlM-VN2k-vYf43R
      cephx lockbox secret
      cluster fsid              id
      cluster name              ceph
      crush device class
      encrypted                 0
      osd fsid                  5d43d683-1f7f-4dc1-935e-6a79745252f9
      osd id                    17
      osdspec affinity          osd_spec
      type                      block
      vdo                       0
      with tpm                  0
      devices                   /dev/sde
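
My guess is that the "all: true" under the first spec is what claimed /dev/sde on the other hosts, since osd.17 carries "osdspec affinity osd_spec". A variant of that first spec, with the explicit paths moved under "spec:" and without "all: true", would look roughly like this (only a sketch, not applied; the paths are the same ones as above):

service_type: osd
service_id: osd_spec
placement:
  hosts:
    - host1
    - host2
    - host3
    - host4
    - host5
    - host6
spec:
  objectstore: bluestore
  filter_logic: AND
  data_devices:
    paths:
      - /dev/sda
      - /dev/sdc

I can also send "ceph orch ls osd --export" and "ceph orch device ls" output if that helps.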

Thanks for your help
Vivien