[ https://issues.apache.org/jira/browse/HDFS-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14253798#comment-14253798 ]

Colin Patrick McCabe edited comment on HDFS-7443 at 12/19/14 6:59 PM:
----------------------------------------------------------------------

bq. HDFS-6931 introduced a resolveDuplicateReplicas to handle the duplicated 
blk from diff volume, this jira is to handle the dup in the same volume, am i 
right?

Right.  {{resolveDuplicateReplicas}} deals with multiple replicas on the same 
DataNode in different volumes.  The HDFS-7443 code is for the intra-volume 
case.  Additionally, {{resolveDuplicateReplicas}} requires a {{VolumeMap}}, 
{{ReplicaMap}}, and other internal data structures.  We don't have any of those 
here, just a list of files to be hardlinked.  The rules of resolution are the 
same, though.
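
For reference, a minimal sketch of those resolution rules as I understand them 
(the class and method names below are illustrative, not the actual HDFS-6931 
code): keep the replica whose meta file carries the higher generation stamp, 
and break ties by the larger block file.

{code:java}
import java.io.File;

// Illustrative names only; a sketch of the duplicate-resolution rules,
// not the real HDFS-6931 implementation.
class BlockFilePair {
  final File blockFile;  // e.g. blk_1073741825
  final File metaFile;   // e.g. blk_1073741825_1001.meta
  final long genStamp;   // generation stamp parsed from the meta file name

  BlockFilePair(File blockFile, File metaFile, long genStamp) {
    this.blockFile = blockFile;
    this.metaFile = metaFile;
    this.genStamp = genStamp;
  }

  /** Keep the replica with the higher genstamp; on a tie, the longer block file. */
  static BlockFilePair chooseReplicaToKeep(BlockFilePair a, BlockFilePair b) {
    if (a.genStamp != b.genStamp) {
      return a.genStamp > b.genStamp ? a : b;
    }
    return a.blockFile.length() >= b.blockFile.length() ? a : b;
  }
}
{code}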

bq. Findbugs said: Exceptional return value of java.io.File.delete() ignored in 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage.deleteTmpFiles(List)

This findbugs warning is unrelated, as are the unit test failures.
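
(For context, that warning class just means the boolean returned by 
{{java.io.File.delete()}} is discarded.  The conventional fix, sketched below 
with illustrative names rather than the actual {{TransferFsImage}} code, is to 
check the result and log on failure.)

{code:java}
import java.io.File;
import java.util.List;

class TmpFileCleanup {
  // Illustrative fix for the findbugs pattern "exceptional return value
  // ignored": check the boolean that File.delete() returns.
  static void deleteTmpFiles(List<File> files) {
    for (File f : files) {
      if (!f.delete()) {
        System.err.println("Warning: failed to delete tmp file " + f);
      }
    }
  }
}
{code}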


was (Author: cmccabe):
bq. HDFS-6931 introduced a resolveDuplicateReplicas to handle the duplicated 
blk from diff volume, this jira is to handle the dup in the same volume, am i 
right?

My understanding is that {{resolveDuplicateReplicas}} deals with multiple 
replicas on the same DataNode in different volumes.  This is for the 
intra-volume case.  Additionally, {{resolveDuplicateReplicas}} requires a 
{{VolumeMap}}, {{ReplicaMap}}, and other internal data structures.  We don't 
have any of those here, just a list of files to be hardlinked.  The rules of 
resolution are the same, though.

bq. Findbugs said: Exceptional return value of java.io.File.delete() ignored in 
org.apache.hadoop.hdfs.server.namenode.TransferFsImage.deleteTmpFiles(List)

This findbugs warning is unrelated, as are the unit test failures.

> Datanode upgrade to BLOCKID_BASED_LAYOUT fails if duplicate block files are 
> present in the same volume
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-7443
>                 URL: https://issues.apache.org/jira/browse/HDFS-7443
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.6.0
>            Reporter: Kihwal Lee
>            Assignee: Colin Patrick McCabe
>            Priority: Blocker
>         Attachments: HDFS-7443.001.patch, HDFS-7443.002.patch
>
>
> When we did an upgrade from 2.5 to 2.6 in a medium-size cluster, about 4% of 
> datanodes were not coming up.  They tried the data file layout upgrade to 
> BLOCKID_BASED_LAYOUT introduced in HDFS-6482, but failed.
> All failures were caused by {{NativeIO.link()}} throwing IOException with 
> {{EEXIST}}.  The datanodes didn't die right away, but the upgrade was soon 
> retried, since block pool initialization was reattempted whenever 
> {{BPServiceActor}} registered with the namenode.  After many retries, the 
> datanodes terminated.  This would leave {{previous.tmp}} and {{current}} with 
> no {{VERSION}} file in the block pool slice storage directory.  
> Although {{previous.tmp}} contained the old {{VERSION}} file, the content was 
> in the new layout and the subdirs were all newly created ones.  This 
> shouldn't have happened because the upgrade-recovery logic in {{Storage}} 
> removes {{current}} and renames {{previous.tmp}} to {{current}} before 
> retrying.  All successfully upgraded volumes had old state preserved in their 
> {{previous}} directory.
> In summary, there were two observed issues:
> - Upgrade failure, with {{link()}} failing with {{EEXIST}}.
> - {{previous.tmp}} contained not the content of the original {{current}}, but 
> a half-upgraded one.
> We did not see this in smaller scale test clusters.
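
To see why duplicates in one volume break the upgrade: under the HDFS-6482 
layout, the hardlink destination is a pure function of the block ID, so two 
copies of the same block in a volume resolve to the same target path, and the 
second {{link()}} call fails with {{EEXIST}}.  A minimal sketch of that path 
computation (the shift/mask constants here are illustrative; the real logic is 
in {{DatanodeUtil#idToBlockDir}}):

{code:java}
import java.io.File;

class BlockIdLayout {
  // Sketch of the block-id-based layout: the destination subdir is a
  // deterministic function of the block ID alone (shift/mask constants
  // here are illustrative), so two copies of one block in the same
  // volume map to the same target file and the second link() gets EEXIST.
  static File idToBlockDir(File root, long blockId) {
    int d1 = (int) ((blockId >> 16) & 0x1F);  // first-level subdir index
    int d2 = (int) ((blockId >> 8) & 0x1F);   // second-level subdir index
    return new File(root, "subdir" + d1 + File.separator + "subdir" + d2);
  }
}
{code}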


