NotReplicatedYetException is handled incorrectly in DFSClient
-------------------------------------------------------------

                 Key: HADOOP-1142
                 URL: https://issues.apache.org/jira/browse/HADOOP-1142
             Project: Hadoop
          Issue Type: Bug
          Components: dfs
    Affects Versions: 0.12.1, 0.10.0, 0.9.0
            Reporter: Konstantin Shvachko
            Priority: Minor


1) DFSOutputStream.locateFollowingBlock() catches NotReplicatedYetException and waits,
hoping that somebody else will replicate the unreplicated block(s). That is not going
to happen: the client itself is responsible for confirming written blocks to the
name-node (reportWrittenBlock()), and nobody else will do it. During pipelining the
client sends the block body to the data-nodes with shouldReportBlock set to false, so
the data-nodes will not report anything to the name-node either. In
locateFollowingBlock() the exception therefore means that something bad has already
happened (as in HADOOP-1093), and the client should fail immediately rather than retry.
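
To make the first point concrete, here is a minimal sketch of the retry pattern and of
the proposed fail-fast alternative. This is not the actual DFSClient code: the NameNode
interface slice, the retry count, the sleep interval and the messages are simplified,
assumed stand-ins.

{code:java}
import java.io.IOException;

public class LocateFollowingBlockSketch {

    /** Stand-in for the real NotReplicatedYetException. */
    static class NotReplicatedYetException extends IOException {
        NotReplicatedYetException(String msg) { super(msg); }
    }

    /** Hypothetical slice of the name-node RPC interface used by the client. */
    interface NameNode {
        String addBlock(String src) throws IOException;
    }

    /** Current behaviour (simplified): sleep and retry on the exception. */
    static String locateFollowingBlockWithRetry(NameNode namenode, String src)
            throws IOException, InterruptedException {
        int retries = 5;
        while (true) {
            try {
                return namenode.addBlock(src);
            } catch (NotReplicatedYetException e) {
                if (--retries == 0) {
                    throw e;            // gives up only after several waits
                }
                // Nobody else reports the written blocks (the client does, and
                // the data-nodes were told shouldReportBlock == false), so this
                // wait cannot make the exception go away.
                Thread.sleep(400);
            }
        }
    }

    /** Proposed behaviour: treat the exception as a real failure and propagate it. */
    static String locateFollowingBlockFailFast(NameNode namenode, String src)
            throws IOException {
        return namenode.addBlock(src);  // no retry loop: fail immediately
    }

    public static void main(String[] args) throws Exception {
        NameNode namenode = src -> {
            throw new NotReplicatedYetException("previous blocks not confirmed");
        };
        try {
            locateFollowingBlockFailFast(namenode, "/user/example/file");
        } catch (NotReplicatedYetException e) {
            System.out.println("fail fast: " + e.getMessage());
        }
    }
}
{code}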

2) Both DFSOutputStream.locateFollowingBlock() and locateNewBlock() contain rather
senseless and somewhat misleading double infinite loops. Owen confirms they were
introduced by HADOOP-157. We should remove them.
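
For illustration, a rough sketch of the nested while (true) shape and of the same retry
logic written as a single bounded loop. The BlockRequest interface, retry count and
messages are hypothetical stand-ins, not the actual locateFollowingBlock() /
locateNewBlock() code.

{code:java}
import java.io.IOException;

public class DoubleLoopSketch {

    /** Hypothetical request to the name-node; null means "not ready yet". */
    interface BlockRequest {
        String tryAllocate() throws IOException;
    }

    /**
     * Shape of the current code: an outer infinite loop wrapped around an inner
     * infinite retry loop. The inner loop always returns or throws, so the outer
     * loop never runs a second iteration and only obscures the control flow.
     */
    static String doubleLoop(BlockRequest request)
            throws IOException, InterruptedException {
        while (true) {
            int retries = 5;
            while (true) {
                String block = request.tryAllocate();
                if (block != null) {
                    return block;
                }
                if (--retries == 0) {
                    throw new IOException("unable to allocate a new block");
                }
                Thread.sleep(400);
            }
        }
    }

    /** The same retry logic written as a single bounded loop. */
    static String singleLoop(BlockRequest request)
            throws IOException, InterruptedException {
        for (int attempt = 1; attempt <= 5; attempt++) {
            String block = request.tryAllocate();
            if (block != null) {
                return block;
            }
            if (attempt < 5) {
                Thread.sleep(400);  // back off before the next attempt
            }
        }
        throw new IOException("unable to allocate a new block");
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        BlockRequest request = () -> (++calls[0] < 3) ? null : "blk_0001";
        System.out.println(singleLoop(request));  // succeeds on the third attempt
    }
}
{code}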
