[
https://issues.apache.org/jira/browse/HDFS-1526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hairong Kuang updated HDFS-1526:
--------------------------------
Description:
Fsck shows one of the files in our dfs cluster is corrupt.
/bin/hadoop fsck aFile -files -blocks -locations
aFile: 4633 bytes, 2 block(s):
aFile: CORRUPT block blk_-4597378336099313975
OK
0. blk_-4597378336099313975_2284630101 len=0 repl=3 [...]
1. blk_5024052590403223424_2284630107 len=4633 repl=3 [...]Status: CORRUPT
On disk, these two blocks have the same size and the same content. It turns
out the writer of the file is a multithreaded map task, in which each thread
may write to the same file. One possible interleaving of two threads could
make this happen:
[T1: create aFile] [T2: delete aFile] [T2: create aFile] [T1: addBlock 0 to
aFile] [T2: addBlock 1 to aFile] ...
Because T1 and T2 share the same client name, which is the map task id, the
above interleaving completes without any lease exception, eventually leaving
a corrupt file. To solve the problem, a MapReduce task's client name could be
formed from its task id followed by a random number.
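The race described above can be sketched with a toy model of lease tracking. This is a hypothetical illustration, not the actual NameNode code: `leaseHolder`, `create`, `delete`, `addBlock`, and `randomizedClientName` are all invented names for this sketch, which assumes lease ownership is checked only by comparing client names.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;

// Toy model of per-file lease ownership, keyed by file path and checked
// against the caller's client name. Two writers that share a client name
// can interleave create/addBlock calls without a lease violation.
public class LeaseRaceSketch {
    static final Map<String, String> leaseHolder = new HashMap<>();

    static void create(String path, String clientName) {
        // New lease, silently replacing any previous holder.
        leaseHolder.put(path, clientName);
    }

    static void delete(String path) {
        leaseHolder.remove(path);
    }

    static boolean addBlock(String path, String clientName) {
        // The lease check: only the recorded holder may add a block.
        return clientName.equals(leaseHolder.get(path));
    }

    // The proposed fix: derive the client name from the task id plus a
    // random number, so two writers get distinct names.
    static String randomizedClientName(String taskId) {
        return taskId + "_" + ThreadLocalRandom.current().nextInt(Integer.MAX_VALUE);
    }

    public static void main(String[] args) {
        String taskId = "attempt_201012_0001_m_000000_0"; // example task id

        // Both threads use the bare task id as their client name.
        create("aFile", taskId);   // T1: create aFile
        delete("aFile");           // T2: delete aFile
        create("aFile", taskId);   // T2: re-create aFile
        // T1's stale addBlock still passes the lease check:
        System.out.println(addBlock("aFile", taskId)); // true -> corruption possible

        // With randomized names, T1's stale call is rejected (almost
        // certainly, since the two random suffixes will differ).
        String t1 = randomizedClientName(taskId);
        String t2 = randomizedClientName(taskId);
        create("aFile", t1);
        delete("aFile");
        create("aFile", t2);
        System.out.println(addBlock("aFile", t1)); // false -> lease exception instead
    }
}
```

The key point the sketch shows: randomizing the name does not change the lease-check logic itself; it only guarantees that a second writer under the same task id no longer masquerades as the lease holder.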
was:
Fsck shows one of the files in our dfs cluster is corrupt.
# /bin/hadoop fsck aFile -files -blocks -locations
aFile: 4633 bytes, 2 block(s):
aFile: CORRUPT block blk_-4597378336099313975
OK
0. blk_-4597378336099313975_2284630101 len=0 repl=3 [...]
1. blk_5024052590403223424_2284630107 len=4633 repl=3 [...]Status: CORRUPT
On disk, these two blocks have the same size and the same content. It turns
out the writer of the file is a multithreaded map task, in which each thread
may write to the same file. One possible interleaving of two threads could
make this happen:
[T1: create aFile] [T2: delete aFile] [T2: create aFile] [T1: addBlock 0 to
aFile] [T2: addBlock 1 to aFile] ...
Because T1 and T2 share the same client name, which is the map task id, the
above interleaving completes without any lease exception, eventually leaving
a corrupt file. To solve the problem, a MapReduce task's client name could be
formed from its task id followed by a random number.
> Dfs client name for a map/reduce task should have some randomness
> -----------------------------------------------------------------
>
> Key: HDFS-1526
> URL: https://issues.apache.org/jira/browse/HDFS-1526
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs client
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Fix For: 0.23.0
>
> Attachments: clientName.patch, randClientId1.patch,
> randClientId2.patch
>
>
> Fsck shows one of the files in our dfs cluster is corrupt.
> /bin/hadoop fsck aFile -files -blocks -locations
> aFile: 4633 bytes, 2 block(s):
> aFile: CORRUPT block blk_-4597378336099313975
> OK
> 0. blk_-4597378336099313975_2284630101 len=0 repl=3 [...]
> 1. blk_5024052590403223424_2284630107 len=4633 repl=3 [...]Status: CORRUPT
> On disk, these two blocks have the same size and the same content. It turns
> out the writer of the file is a multithreaded map task, in which each thread
> may write to the same file. One possible interleaving of two threads could
> make this happen:
> [T1: create aFile] [T2: delete aFile] [T2: create aFile] [T1: addBlock 0 to
> aFile] [T2: addBlock 1 to aFile] ...
> Because T1 and T2 share the same client name, which is the map task id, the
> above interleaving completes without any lease exception, eventually leaving
> a corrupt file. To solve the problem, a MapReduce task's client name could
> be formed from its task id followed by a random number.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.