If your CRUSH map is set to replicate by host, you will only ever have one
copy on any single host, no matter how many OSDs you place on a single
NVMe/disk.
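
For reference, a host-level replicated rule in a decompiled CRUSH map
typically looks like this (a sketch; the rule name, id, and root will
vary per cluster):

    rule replicated_rule {
        id 0
        type replicated
        min_size 1
        max_size 10
        step take default
        # pick N distinct hosts, then one OSD under each, so no
        # two replicas of a PG ever share a host (or a disk)
        step chooseleaf firstn 0 type host
        step emit
    }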

But yes, you would not want to combine OSD-based rules with multiple OSDs
per physical disk.
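
For context, provisioning two OSDs on one NVMe is usually done with
ceph-volume's batch mode; a sketch, assuming the device is /dev/nvme0n1:

    # create two LVM-backed OSDs sharing one NVMe device; under an
    # OSD-based rule, two replicas of the same PG could land on this
    # one disk, which is exactly the failure mode to avoid
    ceph-volume lvm batch --osds-per-device 2 /dev/nvme0n1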

On Tue, 5 Mar 2019 at 7:54 PM, Marc Roos <m.r...@f1-outsourcing.eu> wrote:

>
> Indeed, I have seen people writing lately about putting two OSDs on an
> NVMe, but doesn't this undermine the idea of having three copies on
> different OSDs/drives? In theory you could lose two copies when one
> disk fails?
>
>
>
>
> -----Original Message-----
> From: Darius Kasparaviius [mailto:daz...@gmail.com]
> Sent: 05 March 2019 10:50
> To: ceph-users
> Subject: [ceph-users] Ceph cluster on AMD based system.
>
> Hello,
>
>
> I was thinking of using an AMD-based system for my new NVMe-based
> cluster. In particular, I'm looking at
> https://www.supermicro.com/Aplus/system/1U/1113/AS-1113S-WN10RT.cfm
> and https://www.amd.com/en/products/cpu/amd-epyc-7451 CPUs. Has anyone
> tried running it on this particular hardware?
>
> The general idea is 6 nodes with 10 NVMe drives and 2 OSDs per NVMe drive.
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
