[ https://issues.apache.org/jira/browse/HDFS-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391274#comment-14391274 ]
Tsz Wo Nicholas Sze commented on HDFS-7285:
-------------------------------------------

Here is the list we discussed.

h2. Phase 1 – Basic EC features
- Support (6,3)-Reed-Solomon
- Read
-* from closed EC files
-* from files with some missing blocks
- Write
-* Write to 9 datanodes in parallel
-* Failure handling: continue writing with the remaining datanodes as long as #existing datanodes >= 6.
- EC block reconstruction
-* Scheduled by NN like replication
-* Datanode executes block group reconstruction
- Block group lease recovery
-* Datanode executes lease recovery
-* Truncate at stripe group boundary
- NN changes
-* EC block group placement
-* EC zone
-* Safemode calculation
-* Quota
-* Block report processing
-* Snapshot
-* Fsck
-* Editlog/image
-* Block group support
-* EC file deletion
-* Decommission
-* Corrupted EC blocks
-* ID collision
- Balancer/Mover
-* Do not move EC blocks
- Documentation
- Testing

> Erasure Coding Support inside HDFS
> ----------------------------------
>
>                 Key: HDFS-7285
>                 URL: https://issues.apache.org/jira/browse/HDFS-7285
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>            Reporter: Weihua Jiang
>            Assignee: Zhe Zhang
>         Attachments: ECAnalyzer.py, ECParser.py, HDFS-7285-initial-PoC.patch, HDFSErasureCodingDesign-20141028.pdf, HDFSErasureCodingDesign-20141217.pdf, HDFSErasureCodingDesign-20150204.pdf, HDFSErasureCodingDesign-20150206.pdf, fsimage-analysis-20150105.pdf
>
>
> Erasure Coding (EC) can greatly reduce storage overhead without sacrificing data reliability, compared to the existing HDFS 3-replica approach. For example, with a 10+4 Reed-Solomon coding we can tolerate the loss of 4 blocks with a storage overhead of only 40%. This makes EC a quite attractive alternative for big data storage, particularly for cold data.
> Facebook had a related open source project called HDFS-RAID. It used to be one of the contributed packages in HDFS but was removed in Hadoop 2.0 for maintenance reasons.
> The drawbacks are: 1) it sits on top of HDFS and depends on MapReduce to do encoding and decoding tasks; 2) it can only be used for cold files that are not intended to be appended anymore; 3) the pure-Java EC coding implementation is extremely slow in practical use. For these reasons, it may not be a good idea to simply bring HDFS-RAID back.
> We (Intel and Cloudera) are working on a design to build EC into HDFS that gets rid of any external dependencies, making it self-contained and independently maintained. This design lays the EC feature on top of the storage type support and is intended to be compatible with existing HDFS features such as caching, snapshots, encryption, and high availability. The design will also support different EC coding schemes, implementations, and policies for different deployment scenarios. By utilizing advanced libraries (e.g. the Intel ISA-L library), an implementation can greatly improve the performance of EC encoding/decoding and make the EC solution even more attractive. We will post the design document soon.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
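The erasure-coding arithmetic in the discussion above (storage overhead of an RS scheme, tolerated losses, and the Phase 1 rule of continuing a write while at least 6 of the 9 datanodes survive) can be sketched in a few lines. This is an illustrative sketch only; the function names are hypothetical and are not HDFS or ISA-L APIs.

```python
# Hypothetical helpers illustrating the Reed-Solomon numbers discussed
# in this issue. Not HDFS code: names and the "minimum writers" rule
# are simplifications of the design.

def rs_overhead(data: int, parity: int) -> float:
    """Storage overhead of an RS(data, parity) scheme, as a fraction
    of the logical data size (parity blocks / data blocks)."""
    return parity / data

def tolerates(data: int, parity: int) -> int:
    """Number of lost blocks per block group the scheme can tolerate."""
    return parity

def can_continue_write(alive_datanodes: int, data: int) -> bool:
    """Phase 1 failure handling: keep writing with the remaining
    datanodes as long as at least `data` of them are still alive."""
    return alive_datanodes >= data

# (6,3)-RS from Phase 1: 9 datanodes per block group, 50% overhead,
# up to 3 lost blocks tolerated; writes survive down to 6 datanodes.
assert rs_overhead(6, 3) == 0.5
assert tolerates(6, 3) == 3
assert can_continue_write(6, 6) and not can_continue_write(5, 6)

# 10+4 RS from the description: 40% overhead, vs. 200% for 3-replica.
assert rs_overhead(10, 4) == 0.4
```

The comparison in the description follows directly: 3-replica storage costs 2 extra copies (200% overhead), while RS(10,4) pays only 4 parity blocks per 10 data blocks (40%) and still survives any 4 losses in a group.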