Hi,

Thanks for your help!

On 20/05/2024 18:13, Anthony D'Atri wrote:

> You do that with the CRUSH rule, not with osd_crush_chooseleaf_type.  Set that
> back to the default value of `1`.  This option is marked `dev` for a reason ;)

OK [though that isn't obvious from https://docs.ceph.com/en/reef/rados/configuration/pool-pg-config-ref/#confval-osd_crush_chooseleaf_type ]
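
If I've followed that correctly, the idea is to leave the option at its default and put the failure domain in the CRUSH rule instead -- something along these lines (untested; rule and pool names are just placeholders):

  ceph config set global osd_crush_chooseleaf_type 1
  ceph osd crush rule create-replicated my_rule default <failure-domain-type>
  ceph osd pool set <pool> crush_rule my_rule

(and/or just put the default back in the bootstrap config before redeploying, since I gather the option only really matters when the initial CRUSH map is created). The bucket type given to the rule -- osd, host, rack, ... -- is then what actually controls placement.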

> but I think you’d also need to revert `osd_crush_chooseleaf_type` too.  Might
> be better to wipe and redeploy so you know that down the road when you add /
> replace hardware this behavior doesn’t resurface.

Yep, I'm still at the destroy-and-recreate point here, trying to make sure I can do this repeatably.

Once the cluster was up I used an osd spec file that looked like:
service_type: osd
service_id: rrd_single_NVMe
placement:
  label: "NVMe"
spec:
  data_devices:
    rotational: 1
  db_devices:
    model: "NVMe"
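
(For completeness, that spec gets fed to the orchestrator with something along the lines of `ceph orch apply -i rrd_single_NVMe.yaml`, where the filename is just whatever the spec was saved as.)
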
> Is it your intent to use spinners for payload data and SSD for metadata?

Yes.

> You might want to set `db_slots` accordingly, by default I think it’ll be 1:1
> which probably isn’t what you intend.

Is there an easy way to check this? The docs suggested it would work, and running vgdisplay on the VG that pvs says the NVMe device belongs to shows 24 LVs...
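
In case it helps to be concrete, the checks I can think of beyond vgdisplay/pvs are roughly:

  cephadm shell -- ceph-volume lvm list    # which block.db LV each OSD ended up with
  vgs -o vg_name,lv_count,vg_size,vg_free  # how the NVMe VG has been carved up
  lvs -o lv_name,vg_name,lv_size           # sizes of the individual db LVs

plus `ceph osd metadata <id>` and grepping for bluefs_db, with <id> being any of the OSDs -- though I'm not sure any of that is the authoritative way to confirm the slot count, and presumably `db_slots: <n>` under spec: in the file above is the knob if the split turns out not to be what I want.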

Thanks,

Matthew
