Hi,

I believe Andreas has a more elaborate answer on this topic. You are correct 
that the current implementation is not as good as what is described in the 
paper you mention. My main incentive for choosing this path is that I was 
able to understand it. It is not much more than stacking layers of erasure 
coded chunks on top of each other ;-)
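To make the "stacked layers" idea concrete, here is a minimal sketch. It is not the actual Ceph LRC plugin code; it assumes plain XOR parity as a stand-in for a real erasure code, and the chunk names (data, global_p, local1, local2) are made up for illustration. A global layer encodes all data chunks, then a local layer adds one parity per group, with the second group including the global parity, as discussed in the quoted question below.

```python
# Sketch of layered erasure coding (assumption: XOR parity stands in
# for a real erasure code such as Reed-Solomon).

def xor_parity(chunks):
    """XOR all chunks together (simple single-erasure parity)."""
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

data = [b'AAAA', b'BBBB', b'CCCC', b'DDDD']  # k = 4 data chunks

# Layer 1: one global parity over all data chunks.
global_p = xor_parity(data)

# Layer 2: local parities, one per group. In the layered design the
# second group includes the global parity chunk, unlike MS LRC where
# local groups cover data chunks only.
group1 = data[:2]                 # D0, D1
group2 = data[2:] + [global_p]    # D2, D3, P
local1 = xor_parity(group1)
local2 = xor_parity(group2)

# Repairing a single lost chunk only reads its local group, not all
# surviving chunks: here D0 is recovered from D1 and local1 alone.
lost = data[0]
recovered = xor_parity([data[1], local1])
assert recovered == lost
```

The point of the layering is visible in the repair step: reconstructing one chunk touches only the two other members of its group, which is what reduces network traffic compared to reading k chunks.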

Now that we have this plugin, it would be nice to have another implementation 
that uses less space and possibly even less network bandwidth when 
reconstructing. During the OpenStack summit we discussed this with Kevin 
Greenan and there are promising directions. It would help a lot to have 
sample code to study so that it can be adapted to what we currently have in 
Ceph. Do you know of such an implementation of LRC, or of another similar 
code designed to reduce network bandwidth during reconstruction?

Cheers

On 17/11/2014 01:52, Zhou, Yuan wrote:
> Hi Loic/Andreas,
> 
>  
> 
> I was trying to understand the LRC design in Ceph EC. Per my understanding, 
> it seems Ceph uses a slightly different design than the Microsoft LRC: 
> the local parities are calculated with the global parities included. Was 
> there any special consideration behind this change?
> 
> I ask because in a typical MS LRC design the global and local parities 
> can actually be calculated at the same time (I mean inside the Erasure 
> Code library). But with this new design, we lose this potential optimization.
> 
>  
> 
> Thanks, -Yuan
> 

-- 
Loïc Dachary, Artisan Logiciel Libre
