[ https://issues.apache.org/jira/browse/HADOOP-12744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15127178#comment-15127178 ]

Kai Zheng commented on HADOOP-12744:
------------------------------------

[~zhz], would you help convert this to an HDFS one and give it a review? Thanks!

> Refactoring of checksum failure report related codes
> ----------------------------------------------------
>
>                 Key: HADOOP-12744
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12744
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Kai Zheng
>            Assignee: Kai Zheng
>         Attachments: HADOOP-12744-v1.patch, HADOOP-12744-v2.patch
>
>
> This came out of a discussion with [~jingzhao] in HDFS-9646. There is some 
> duplicate code between the client and datanode sides:
> {code}
>     private void addCorruptedBlock(ExtendedBlock blk, DatanodeInfo node,
>         Map<ExtendedBlock, Set<DatanodeInfo>> corruptionMap) {
>       Set<DatanodeInfo> dnSet = corruptionMap.get(blk);
>       if (dnSet == null) {
>         dnSet = new HashSet<>();
>         corruptionMap.put(blk, dnSet);
>       }
>       if (!dnSet.contains(node)) {
>         dnSet.add(node);
>       }
>     }
> {code}
> This would resolve the duplication and also simplify the code a bit; a rough sketch of one possible shared helper is below.
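> One way the consolidated helper could look (just a sketch, not the attached patch; it keeps the original signature) is to lean on Map#computeIfAbsent, which also makes the explicit contains() check unnecessary since Set#add already ignores duplicates:
> {code}
>     private void addCorruptedBlock(ExtendedBlock blk, DatanodeInfo node,
>         Map<ExtendedBlock, Set<DatanodeInfo>> corruptionMap) {
>       // Create the datanode set the first time this block is reported,
>       // then record the reporting node; Set#add is a no-op for duplicates.
>       corruptionMap.computeIfAbsent(blk, k -> new HashSet<>()).add(node);
>     }
> {code}
> With a single helper like this shared between the client and datanode report paths, both sides stay in sync on how corruption reports are accumulated.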



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)