[
https://issues.apache.org/jira/browse/HADOOP-1845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12528609
]
Konstantin Shvachko commented on HADOOP-1845:
---------------------------------------------
I am running a 2-node cluster and see this exception all the time.
Here is the log from the name-node that explains the behavior:
{code}
07/09/18 12:08:05 INFO dfs.StateChange: BLOCK* NameSystem.allocateBlock:
/Dir0/file20. blk_-4934058791921230875 is created and added to pendingCreates
and pendingCreateBlocks
07/09/18 12:08:06 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: node.x.x.111:50077 is added to blk_-4934058791921230875
07/09/18 12:08:08 INFO dfs.StateChange: BLOCK* NameSystem.pendingTransfer: ask
node.x.x.111:50077 to replicate blk_-4934058791921230875 to datanode(s)
node.x.x.222:50017
07/09/18 12:08:09 INFO dfs.StateChange: BLOCK* NameSystem.addStoredBlock:
blockMap updated: node.x.x.222:50017 is added to blk_-4934058791921230875
{code}
What we see here is a race condition between blockReceived() and
sendHeartbeat().
The client writes 2 block replicas to 2 data-nodes. When they finish writing
the replicas to their disks, the data-nodes send blockReceived() to the
name-node. The following sequence of events causes the exception:
# node1.blockReceived(): the block is added to the list of under-replicated
blocks, since it is supposed to have replication 2;
# node1.sendHeartbeat(): node1 starts replicating the block to node2.
In response, node2 throws the "block is valid, and cannot be written to" exception.
# node2.blockReceived(): everything goes back to normal.
On a 2-node cluster there is always exactly one choice of where the other
replica can be placed. That is why the exception is inevitable whenever
replication is requested. On bigger clusters the exception is rather rare,
because the replication will most likely be scheduled to a data-node that does
not yet contain the block. That actually makes things worse: the transfer
really happens, the block becomes over-replicated, and one of the replicas
then needs to be removed. I think this decreases the overall performance of
the cluster - unnecessary transfers of large blocks can be costly.
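To make the ordering concrete, below is a small self-contained simulation of
the sequence above. The class and variable names are purely illustrative: this
is not NameNode code, just the bookkeeping that produces the error when only
*confirmed* replica locations are excluded from target selection.
{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Toy simulation of the race on a 2-node cluster with replication factor 2.
// All names are illustrative; this is not Hadoop source code.
public class ReplicationRaceDemo {
  static final int REPLICATION = 2;

  public static void main(String[] args) {
    // Replicas that physically exist on the data-nodes' disks once the client
    // finishes writing the pipeline.
    Set<String> onDisk = new HashSet<String>(Arrays.asList("node1", "node2"));
    // Replicas the name-node has confirmed via blockReceived().
    Set<String> confirmed = new HashSet<String>();

    // 1. node1.blockReceived(): only one confirmed replica, so the block is
    //    put on the under-replicated list.
    confirmed.add("node1");
    boolean underReplicated = confirmed.size() < REPLICATION;

    // 2. node1.sendHeartbeat(): the name-node picks a target that is not a
    //    *confirmed* replica holder. On a 2-node cluster node2 is the only
    //    candidate, even though it already holds the block on disk.
    if (underReplicated) {
      String target = "node2"; // the only node not in 'confirmed'
      if (onDisk.contains(target)) {
        System.out.println("Block blk_... is valid, and cannot be written to."
            + " (rejected by " + target + ")");
      }
    }

    // 3. node2.blockReceived(): the second replica is confirmed and the block
    //    leaves the under-replicated list.
    confirmed.add("node2");
    System.out.println("under-replicated now: "
        + (confirmed.size() < REPLICATION));
  }
}
{code}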
> Datanodes get error message "is valid, and cannot be written to"
> -----------------------------------------------------------------
>
> Key: HADOOP-1845
> URL: https://issues.apache.org/jira/browse/HADOOP-1845
> Project: Hadoop
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.14.1
> Reporter: Hairong Kuang
> Fix For: 0.15.0
>
>
> >> Copy from dev list:
> Our cluster has 4 nodes and I set the mapred.submit.replication parameter to
> 2 on all nodes and the master. Everything has been restarted.
> Unfortunately, we still have the same exception:
> 2007-09-05 17:01:59,623 ERROR org.apache.hadoop.dfs.DataNode:
> DataXceiver: java.io.IOException: Block blk_-5969983648201186681 is valid,
> and cannot be written to.
> at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:515)
> at
> org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:822)
> at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:727)
> at java.lang.Thread.run(Thread.java:595)
> >> end of copy
> The message shows that the namenode schedules a block to be replicated to a
> datanode that already holds the block. The namenode block placement algorithm
> makes sure that it does not schedule a block to a datanode that is confirmed
> to hold a replica of the block. But it is not aware of any in-transit block
> placements (i.e. the scheduled but not yet confirmed block placements), so
> occasionally we may still see "is valid, and cannot be written to" errors.
> A fix to the problem is to keep track of all in-transit block placements so
> that the block placement algorithm considers these to-be-confirmed replicas
> as well.
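A minimal sketch of what such in-transit tracking could look like on the
name-node side (assumed class and method names, plain String identifiers for
brevity; not the actual patch):
{code}
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hedged sketch of the proposed bookkeeping: remember which data-nodes a
// block has been scheduled to but not yet confirmed on, and exclude them,
// together with the confirmed locations, when choosing replication targets.
// Block and data-node identifiers are plain Strings for brevity.
public class InTransitPlacements {
  private final Map<String, Set<String>> inTransit =
      new HashMap<String, Set<String>>();

  // Called when the name-node asks a data-node to receive a replica.
  public synchronized void scheduled(String blockId, String targetNode) {
    Set<String> targets = inTransit.get(blockId);
    if (targets == null) {
      targets = new HashSet<String>();
      inTransit.put(blockId, targets);
    }
    targets.add(targetNode);
  }

  // Called when the target confirms the replica via blockReceived().
  public synchronized void confirmed(String blockId, String targetNode) {
    Set<String> targets = inTransit.get(blockId);
    if (targets != null) {
      targets.remove(targetNode);
      if (targets.isEmpty()) {
        inTransit.remove(blockId);
      }
    }
  }

  // Nodes the placement algorithm should skip for this block: confirmed
  // replica holders plus the to-be-confirmed (in-transit) targets.
  public synchronized Set<String> excludedTargets(String blockId,
                                                  Set<String> confirmedLocations) {
    Set<String> excluded = new HashSet<String>(confirmedLocations);
    Set<String> pending = inTransit.get(blockId);
    if (pending != null) {
      excluded.addAll(pending);
    }
    return excluded;
  }
}
{code}
With this kind of bookkeeping, node2 in the 2-node scenario above would be
excluded as a replication target even before its blockReceived() arrives, so
no redundant transfer would be scheduled.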