To: Kyle Bader
Cc: ceph-devel@vger.kernel.org
Subject: Re: Pyramid erasure codes and replica hinted recovery

On 13/01/2014 03:35, Kyle Bader wrote:
>> How is it different from what is described above? There must be something I
>> fail to understand.
>
> No misunderstanding on your part, on second look that does achieve the
> desired placement. Could you please help walk me through the following
> scenarios:
>
> Can data or local parity chunks that have b[...]

On 12/01/2014 15:31, Kyle Bader wrote:
> If we had RS(6:3:3), 6 data chunks, 3 coding chunks, 3 local chunks, the
> following rule could be used to spread it over 3 datacenters:
>
> rule erasure_ruleset {
> ruleset 1
> type erasure
> min_size 3
> max_size 20
> step set_chooseleaf_tries 5
> [...]
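
The quoted rule is cut off above; the steps that actually distribute the
chunks are missing. As a minimal sketch only, assuming a crush map rooted in
a bucket named "default" that contains datacenter buckets, which in turn
contain host buckets (those names are placeholders, not taken from the
thread), the rule could be completed along these lines. RS(6:3:3) yields 12
chunks in total (6 data + 3 coding + 3 local), i.e. 4 chunks per datacenter
when spread over 3 datacenters:

rule erasure_ruleset {
    ruleset 1
    type erasure
    min_size 3
    max_size 20
    step set_chooseleaf_tries 5
    # assumed hierarchy: root "default" -> datacenter -> host
    step take default
    # pick 3 datacenters, then 4 distinct hosts in each one,
    # so the 12 chunks land 4 per datacenter
    step choose indep 3 type datacenter
    step chooseleaf indep 4 type host
    step emit
}

With "indep", a failed OSD is replaced in place and the surviving chunks keep
their rank, which is what an erasure coded pool expects.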

On 11/01/2014 00:40, Kyle Bader wrote:
> I've been researching what features might be necessary in Ceph to
> build multi-site RADOS clusters, whether for purposes of scale or to
> meet SLA requirements more stringent than is achievable with a single
> datacenter. According to [1], "typical [datacenter] availability
> estimates used in the industry [...]
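
For a rough sense of the numbers (illustrative figures only, not taken from
[1]): if each datacenter is independently available 99.9% of the time, a
single-site deployment can expect roughly 0.001 * 8760 = 8.8 hours of
downtime per year, whereas data placed across 3 such datacenters so that it
survives any single-site outage is unavailable only when two sites are down
at once, roughly 3 * 0.001^2 = 3e-6, i.e. about 99.9997% availability. That
is the kind of gap a multi-site layout is trying to close.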