On 9/18/20 8:53 AM, Liam MacKenzie wrote:
I have a scenario where I'm upgrading to Ceph Octopus on hardware that groups 
its drives in trays containing two devices each.  Previously these drives were 
joined in a software RAID1, and the md devices were used as the OSDs.  The logic 
behind this is that should one of those drives fail, both will need to be 
removed at the same time due to the design of the machine.

For example:
https://www.servethehome.com/supermicro-ssg-6047r-e1r72l-72x-35-drive-4u-storage-server-released/

Since I understand that using RAID isn't recommended, how would I best deploy my 
cluster so it's smart enough to group drives according to the trays that 
they're in?

Just as usual: one drive is one OSD. When your failure domain is "host" or "rack", simply stop both OSD daemons in the tray and do the maintenance.
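A rough sketch of that maintenance procedure, assuming the two OSDs in the affected tray are osd.12 and osd.13 (hypothetical IDs) and the OSDs run under systemd:

```shell
# Prevent CRUSH from rebalancing data while the tray is out
ceph osd set noout

# Stop both OSD daemons that live in the same tray
systemctl stop ceph-osd@12.service
systemctl stop ceph-osd@13.service

# ... swap the tray / replace the failed drive ...

# Bring the OSDs back and let recovery catch up
systemctl start ceph-osd@12.service
systemctl start ceph-osd@13.service
ceph osd unset noout
```

With a "host" (or "rack") failure domain, no placement group has both of its replicas on the two OSDs of one tray, so taking both down briefly only degrades redundancy, it doesn't lose data.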



k
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
