[ https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14681307#comment-14681307 ]

Zhe Zhang commented on HDFS-7285:
---------------------------------

[~wheat9] [~jingzhao] [~walter.k.su] Thanks for the discussion. 

bq. at the very least you will need consensuses with other contributors anyway.
Absolutely. I created the {{HDFS-7285-merge}} branch exactly because we haven't 
reached a consensus on whether/how to squash the commits in the current 
HDFS-7285 branch when rebasing against trunk. Meanwhile, we do need to commit 
patches for the pending subtasks, and committing to the current HDFS-7285 
branch would cause additional rebasing effort. 

bq. The new feature branch Zhe proposed here is for merging the changes from 
trunk to EC dev. Still doing git rebase may be too hard now.
Thanks Jing for helping to explain it. Yes, the new branch is just *HDFS-7285 
rebased against trunk* (or, from another angle, trunk changes merged into EC). We 
can continue discussing how to split the git history, but the end result (the 
{{HEAD}}) won't be affected.

bq. Can we just sync HDFS-7285 with trunk?
{{HDFS-7285-merge}} is already synced with trunk. If we all agree, it would be 
clearer to push it as {{HDFS-7285}} and keep the current {{HDFS-7285}} as a 
backup branch. 
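As a minimal sketch, the proposed swap amounts to two branch renames. The branch names come from the discussion above; the throwaway local repo and the exact commands here are my illustration, not a recorded ASF procedure (publishing the result would need additional push/force-push steps):

```shell
# Illustrative only: simulate the swap in a scratch repo rather than the real one.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git checkout -q -b HDFS-7285
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "current EC branch history"
git branch HDFS-7285-merge                  # stands in for the rebased branch
git branch -m HDFS-7285 HDFS-7285-backup    # keep the old history as a backup
git branch -m HDFS-7285-merge HDFS-7285     # promote the rebased branch
git branch --list 'HDFS-7285*'
```

The {{HEAD}} commit of the promoted branch is unchanged by the renames, which is why the split of the git history can keep being discussed independently.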



> Erasure Coding Support inside HDFS
> ----------------------------------
>
>                 Key: HDFS-7285
>                 URL: https://issues.apache.org/jira/browse/HDFS-7285
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Weihua Jiang
>            Assignee: Zhe Zhang
>         Attachments: Consolidated-20150707.patch, 
> Consolidated-20150806.patch, Consolidated-20150810.patch, ECAnalyzer.py, 
> ECParser.py, HDFS-7285-initial-PoC.patch, 
> HDFS-7285-merge-consolidated-01.patch, 
> HDFS-7285-merge-consolidated-trunk-01.patch, 
> HDFS-7285-merge-consolidated.trunk.03.patch, 
> HDFS-7285-merge-consolidated.trunk.04.patch, 
> HDFS-EC-Merge-PoC-20150624.patch, HDFS-EC-merge-consolidated-01.patch, 
> HDFS-bistriped.patch, HDFSErasureCodingDesign-20141028.pdf, 
> HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, 
> HDFSErasureCodingDesign-20150206.pdf, HDFSErasureCodingPhaseITestPlan.pdf, 
> fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing 
> data reliability, compared to the existing HDFS 3-replica approach. For 
> example, with a 10+4 Reed-Solomon coding we can tolerate the loss of any 4 
> blocks with a storage overhead of only 40%. This makes EC quite an attractive 
> alternative for big data storage, particularly for cold data. 
> Facebook had a related open-source project called HDFS-RAID. It used to be 
> one of the contrib packages in HDFS but was removed in Hadoop 2.0 for 
> maintenance reasons. Its drawbacks are: 1) it sits on top of HDFS and depends 
> on MapReduce for encoding and decoding tasks; 2) it can only be used for 
> cold files that will not be appended to anymore; 3) its pure-Java EC coding 
> implementation is extremely slow in practical use. For these reasons, simply 
> bringing HDFS-RAID back might not be a good idea.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that 
> gets rid of external dependencies, making it self-contained and independently 
> maintainable. This design layers the EC feature on top of the storage-type 
> support and aims to be compatible with existing HDFS features such as 
> caching, snapshots, encryption, and high availability. It will also support 
> different EC coding schemes, implementations, and policies for different 
> deployment scenarios. By utilizing advanced libraries (e.g. the Intel ISA-L 
> library), an implementation can greatly improve the performance of EC 
> encoding/decoding and make the EC solution even more attractive. We will 
> post the design document soon. 
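The 10+4 Reed-Solomon overhead figure in the description above can be checked with a short sketch. The helper below is purely illustrative (not part of the HDFS-7285 patches): for a (k data, m parity) stripe, the extra storage is m/k of the raw data size.

```python
# Storage overhead comparison: (k, m) Reed-Solomon striping vs. 3-way replication.
def storage_overhead(data_blocks: int, parity_blocks: int) -> float:
    """Extra storage as a fraction of the raw data size."""
    return parity_blocks / data_blocks

rs_10_4 = storage_overhead(10, 4)          # 4 parity blocks per 10 data blocks
replication_3x = storage_overhead(10, 20)  # 2 extra full copies per block

print(f"RS(10,4) overhead: {rs_10_4:.0%}")              # 40%
print(f"3x replication overhead: {replication_3x:.0%}")  # 200%
```

Both schemes tolerate the loss of any 4 of the 14 stored blocks in the RS case (vs. 2 of 3 replicas), which is why EC is attractive for cold data.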



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
