I may be wrong, but you're correct with your m=6 statement.

You need at least k shards available to serve the data. If you had k=8 and m=2
spread equally across the 2 rooms (5 shards each), a failure of either room
would cause an outage, since the surviving room's 5 shards are fewer than the
8 required.

With m=6 you're at least getting better disk space efficiency than 3x
replication. But I'm not sure whether you might end up with some form of
split brain if it was just a network issue between the two rooms and each
side was still online and working independently, as both sides would
technically have enough shards to continue to operate.
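
For what it's worth, here's a minimal Python sketch of that arithmetic,
assuming the k+m shards land evenly across the two rooms (which is what a
room-balanced crush rule would give you):

    # A surviving room must still hold at least k shards for the
    # pool to stay readable after the other room goes dark.
    def survives_room_loss(k, m, rooms=2):
        shards_per_room = (k + m) // rooms
        return shards_per_room >= k

    print(survives_room_loss(8, 2))  # False: 5 shards left, 8 needed -> outage
    print(survives_room_loss(6, 6))  # True:  6 shards left, 6 needed -> survives

So with 2 rooms and an even split you effectively need m >= k, which is why
the profiles keep coming out with m=6.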

On Fri, 3 May 2019, 11:46 PM Robert Sander, <r.san...@heinlein-support.de>
wrote:

> Hi,
>
> I would be glad if anybody could give me a tip for an erasure code
> profile and an associated crush ruleset.
>
> The cluster spans 2 rooms with each room containing 6 hosts and each
> host has 12 to 16 OSDs.
>
> The failure domain would be the room level, i.e. data should survive if
> one of the rooms has a power loss.
>
> Is that even possible with erasure coding?
> I am only coming up with profiles where m=6, but that seems to be a
> little overkill.
>
> Regards
> --
> Robert Sander
> Heinlein Support GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> https://www.heinlein-support.de
>
> Tel: 030 / 405051-43
> Fax: 030 / 405051-19
>
> Amtsgericht Berlin-Charlottenburg - HRB 93818 B
> Geschäftsführer: Peer Heinlein - Sitz: Berlin
