> How did they do it?

You can create partitions / LVs by hand and build OSDs on them, or you can use

ceph-volume lvm batch --osds-per-device
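
A rough sketch of both routes, assuming a single device /dev/nvme0n1 split 
into two OSDs (device, VG and LV names are placeholders):

# manual: carve the device into LVs yourself, then build an OSD on each
vgcreate ceph-nvme0 /dev/nvme0n1
lvcreate -l 50%VG -n osd-a ceph-nvme0
lvcreate -l 100%FREE -n osd-b ceph-nvme0
ceph-volume lvm create --data ceph-nvme0/osd-a
ceph-volume lvm create --data ceph-nvme0/osd-b

# or let ceph-volume do the splitting for you
ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1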

> I have an idea to create a new bucket type under host, and put the two LVs 
> from each ceph OSD VG into that new bucket. The rules stay the same 
> (different host), so redundancy won't be affected

CRUSH lets you do that, but to what end? It would show a bit more clearly 
which OSDs share a device when you run `ceph osd tree`, and maybe give you 
some operational convenience with `ceph osd ls-tree`, but for placement 
anti-affinity it wouldn't get you anything you don't already have: as long as 
the rule's failure domain is the host, both OSDs on a device already sit under 
the same host bucket, so replicas never land on both of them anyway.
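
If you do want to try it, as far as I know there's no monitor command to add a 
new bucket type, so you edit the CRUSH map by hand and then create/move the 
buckets. Roughly like this, with made-up type and bucket names (and on some 
releases you may need `ceph osd crush set osd.N <weight> ...` instead of 
`crush move` for the OSD items):

# grab and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# in crushmap.txt, add a new type under "# types", e.g.
#   type 12 lvgroup
# (pick an id that isn't already in use in your map)

# recompile and inject it
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new

# one bucket per physical device, hung under its host,
# then move that device's two OSDs into it
ceph osd crush add-bucket node1-nvme0 lvgroup
ceph osd crush move node1-nvme0 host=node1
ceph osd crush move osd.0 lvgroup=node1-nvme0
ceph osd crush move osd.1 lvgroup=node1-nvme0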
