[ https://issues.apache.org/jira/browse/HDFS-7443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14252640#comment-14252640 ]

Colin Patrick McCabe commented on HDFS-7443:
--------------------------------------------

bq. We wouldn't need all that. A length check on src and dst when we hit an 
exception should suffice right, depending on the result either discard src or 
overwrite dst? Anyway I think your patch is fine to go as it is.
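
Roughly, I read that suggestion as something like the following (a minimal sketch; the names are invented here, and {{Files.createLink()}} just stands in for {{NativeIO.link()}}):

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

// Sketch of "on collision, keep whichever copy is longer" (names invented).
class KeepLongestOnCollision {
  static void hardLink(File src, File dst) throws IOException {
    Files.createLink(dst.toPath(), src.toPath());   // stand-in for NativeIO.link()
  }

  static void linkKeepLongest(File src, File dst) throws IOException {
    try {
      hardLink(src, dst);
    } catch (IOException e) {                  // assume the failure is EEXIST
      if (src.length() > dst.length()) {       // existing copy is shorter...
        Files.delete(dst.toPath());            // ...so replace it with src
        hardLink(src, dst);
      }
      // otherwise keep dst and discard src
    }
  }
}
{code}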

The problem is what happens if another thread comes along and starts modifying 
the replica while we're measuring the length.  I can come up with an 
interleaving like this:

thread #1 receives EEXIST from link()
thread #2 receives EEXIST from link()
thread #2 does stat() on block file
thread #1 does stat() on block file
thread #1 replaces block file because old copy was too short
thread #2 replaces block file because old copy was too short

Now, if thread #1's copy was actually longer than thread #2's, we aren't 
getting the longest replica after all.

Hence my suggestion to move the questionable replicas to a special folder and 
process them after joining all threads.  That still doesn't solve the issue of 
replicas with different genstamps, though...
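
For concreteness, a very rough sketch of what I mean (all names here are invented for illustration; this isn't patch code):

{code:java}
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Park colliding copies in a quarantine dir during the parallel upgrade,
// then pick the longest copy single-threaded after all workers are joined.
class DuplicateReplicaResolver {
  // blockId -> quarantined copies, appended to by the upgrade worker threads
  private final Map<Long, List<Path>> quarantined = new ConcurrentHashMap<>();

  // Called by a worker when link() hits EEXIST: don't fight over dst, just park src.
  void quarantine(long blockId, File src, File quarantineDir) throws IOException {
    Path parked = new File(quarantineDir,
        src.getName() + "." + Thread.currentThread().getId()).toPath();
    Files.createLink(parked, src.toPath());
    quarantined.computeIfAbsent(blockId, k -> new CopyOnWriteArrayList<>()).add(parked);
  }

  // Called once per block after all upgrade threads are joined, so the length
  // comparison is no longer racy.  (Differing genstamps still not handled.)
  void resolve(long blockId, File dst) throws IOException {
    Path winner = dst.toPath();
    List<Path> copies = quarantined.getOrDefault(blockId, List.of());
    for (Path p : copies) {
      if (Files.size(p) > Files.size(winner)) {
        winner = p;
      }
    }
    if (!winner.equals(dst.toPath())) {
      Files.delete(dst.toPath());
      Files.createLink(dst.toPath(), winner);
    }
    for (Path p : copies) {
      Files.deleteIfExists(p);      // drop the quarantined links, winner included
    }
  }
}
{code}

The only point of the quarantine step is that no length check happens while other threads can still touch the files; everything length-related runs after the join.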

> Datanode upgrade to BLOCKID_BASED_LAYOUT fails if duplicate block files are 
> present in the same volume
> ------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-7443
>                 URL: https://issues.apache.org/jira/browse/HDFS-7443
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.6.0
>            Reporter: Kihwal Lee
>            Assignee: Colin Patrick McCabe
>            Priority: Blocker
>         Attachments: HDFS-7443.001.patch
>
>
> When we did an upgrade from 2.5 to 2.6 in a medium-size cluster, about 4% of 
> the datanodes did not come up.  They tried the data file layout upgrade for 
> BLOCKID_BASED_LAYOUT introduced in HDFS-6482, but failed.
> All failures were caused by {{NativeIO.link()}} throwing an IOException with 
> {{EEXIST}}.  The datanodes didn't die right away, but the upgrade was soon 
> retried, since block pool initialization is re-attempted whenever 
> {{BPServiceActor}} registers with the namenode.  After many retries, the 
> datanodes terminated.  This would leave {{previous.tmp}} and {{current}} with 
> no {{VERSION}} file in the block pool slice storage directory.  
> Although {{previous.tmp}} contained the old {{VERSION}} file, the content was 
> in the new layout and the subdirs were all newly created ones.  This 
> shouldn't have happened because the upgrade-recovery logic in {{Storage}} 
> removes {{current}} and renames {{previous.tmp}} to {{current}} before 
> retrying.  All successfully upgraded volumes had old state preserved in their 
> {{previous}} directory.
> In summary, there were two observed issues:
> - Upgrade failure with {{link()}} failing with {{EEXIST}}
> - {{previous.tmp}} contained not the content of the original {{current}}, but a 
> half-upgraded one.
> We did not see this in smaller scale test clusters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
