[ https://issues.apache.org/jira/browse/HDFS-1526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12971113#action_12971113 ]

Hairong Kuang commented on HDFS-1526:
-------------------------------------

     [exec] -1 overall.  
     [exec] 
     [exec]     +1 @author.  The patch does not contain any @author tags.
     [exec] 
     [exec]     -1 tests included.  The patch doesn't appear to include any new or modified tests.
     [exec]                         Please justify why no new tests are needed 
for this patch.
     [exec]                         Also please list what manual steps were 
performed to verify this patch.
     [exec] 
     [exec]     +1 javadoc.  The javadoc tool did not generate any warning 
messages.
     [exec] 
     [exec]     +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
     [exec] 
     [exec]     +1 findbugs.  The patch does not introduce any new Findbugs 
(version 1.3.9) warnings.
     [exec] 
     [exec]     +1 release audit.  The applied patch does not increase the 
total number of release audit warnings.
     [exec] 
     [exec]     +1 system test framework.  The patch passed system test framework compile.

> Dfs client name for a map/reduce task should have some randomness
> -----------------------------------------------------------------
>
>                 Key: HDFS-1526
>                 URL: https://issues.apache.org/jira/browse/HDFS-1526
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.23.0
>
>         Attachments: clientName.patch, randClientId1.patch, 
> randClientId2.patch, randClientId3.patch
>
>
> Fsck shows one of the files in our dfs cluster is corrupt.
> /bin/hadoop fsck aFile -files -blocks -locations
> aFile: 4633 bytes, 2 block(s): 
> aFile: CORRUPT block blk_-4597378336099313975
> OK
> 0. blk_-4597378336099313975_2284630101 len=0 repl=3 [...]
> 1. blk_5024052590403223424_2284630107 len=4633 repl=3 [...]Status: CORRUPT
> On disk, the two blocks have the same size and the same content. It turns 
> out the writer of the file is a multi-threaded map task, and each thread 
> may write to the same file. One possible interleaving of two threads that 
> could make this happen is:
> [T1: create aFile] [T2: delete aFile] [T2: create aFile] [T1: addBlock 0 to 
> aFile] [T2: addBlock 1 to aFile] ...
> Because T1 and T2 share the same client name, which is the map task id, the 
> interactions above can proceed without any lease exception, eventually 
> leading to a corrupt file. To solve the problem, a map/reduce task's client 
> name could be formed from its task id followed by a random number.
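
A minimal sketch of the naming scheme described above, assuming a hypothetical
helper (the class and method names here are illustrative, not the code in the
attached patches):

    import java.util.Random;

    /**
     * Illustrative sketch only: append a random suffix to the map/reduce
     * task id so that two threads (or task attempts) writing through
     * separate DFSClient instances never present the exact same client
     * name, and therefore never look like the same lease holder.
     */
    public class ClientNameSketch {

      private static final Random RAND = new Random();

      /**
       * Build a DFS client name from the task id plus a random component,
       * e.g. "DFSClient_attempt_201012060000_0001_m_000003_0_1759284731".
       */
      static String newClientName(String taskId) {
        return "DFSClient_" + taskId + "_" + RAND.nextInt(Integer.MAX_VALUE);
      }

      public static void main(String[] args) {
        // Two clients created for the same task now get distinct names, so
        // the NameNode can tell their create/addBlock calls apart.
        System.out.println(newClientName("attempt_201012060000_0001_m_000003_0"));
        System.out.println(newClientName("attempt_201012060000_0001_m_000003_0"));
      }
    }

With distinct client names, T2's create of the already-open file would be
rejected with the usual lease error instead of silently continuing T1's write,
which is the behavior the description above aims for.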

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
