I see, thank you for your replies. I will try to implement a
CRUSH map that distributes the chunks in that way. Do you think
there will be any performance issues with such a map, i.e. placing
multiple smaller chunks on each OSD instead of one large chunk on each?
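
Concretely, something like the following ruleset is what I have in
mind (a rough sketch only, assuming a standard hierarchy with hosts
under a default root; the rule name, ruleset id, and the 14/2 split
are placeholders for our 10+4 layout doubled):

```
rule ec_two_per_host {
    ruleset 1
    type erasure
    min_size 3
    max_size 28
    step set_chooseleaf_tries 5
    step take default
    # pick 14 hosts, then 2 distinct OSDs within each host,
    # so pairs of chunks end up co-located per host
    step choose indep 14 type host
    step choose indep 2 type osd
    step emit
}
```

As far as I understand, a single choose step will not return the same
OSD twice, so this co-locates chunk pairs per host rather than on the
very same OSD; I am not yet sure whether same-OSD placement needs a
different trick.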

The reason we would like to do this is that we have developed some
new erasure codes that we would like to test on a large-scale
storage system like Ceph. These erasure codes rely on
sub-packetization to reduce the amount of data read from each disk
when reconstructing an erasure. Since Ceph does not seem to support
reading part of a chunk, we thought we could bypass this limitation
by making each chunk contain one sub-packet, and placing a set of
these chunks (sub-packets) on each OSD.
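
To make the intended mapping concrete, here is a small Python sketch
of the bookkeeping I mean (the names k and alpha are ours, not part of
any Ceph API, and the actual erasure encoding is left out):

```python
def split_into_subpackets(data: bytes, k: int, alpha: int) -> list[list[bytes]]:
    """Split `data` into k*alpha equal sub-packets and group them so
    that group i holds the alpha sub-packets destined for OSD i.

    Each sub-packet would be handed to Ceph as its own "chunk"; the
    CRUSH map is then responsible for steering all alpha chunks of a
    group to the same device.
    """
    total = k * alpha
    assert len(data) % total == 0, "pad data to a multiple of k*alpha first"
    size = len(data) // total
    subpackets = [data[i * size:(i + 1) * size] for i in range(total)]
    # OSD i receives sub-packets [i*alpha, (i+1)*alpha)
    return [subpackets[i * alpha:(i + 1) * alpha] for i in range(k)]
```

For example, with k=2 and alpha=2, an 8-byte object yields two groups
of two 2-byte sub-packets each, one group per OSD.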

2015-10-22 18:59 GMT+02:00 Loic Dachary <[email protected]>:
> Hi,
>
> On 22/10/2015 18:44, Kjetil Babington wrote:
>> Hi,
>>
>> I have a question about the capabilities of the erasure coding API in
>> Ceph. Let's say that I have 10 data disks and 4 parity disks, is it
>> possible to create an erasure coding plugin which creates 20 data
>> chunks and 8 parity chunks, and then places two chunks on each osd?
>>
>> Or, put a bit more simply: is it possible for two or more chunks from
>> the same encode operation to be placed on the same OSD?
>
> This is more a question of creating a crush ruleset that does it. The erasure 
> code plugin encodes chunks but the crush ruleset decides where they are 
> placed.
>
> Cheers
>
>>
>> - Kjetil Babington
>> --
>> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
>> the body of a message to [email protected]
>> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>>
>
> --
> Loïc Dachary, Artisan Logiciel Libre
>