[ 
https://issues.apache.org/jira/browse/HDFS-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14387466#comment-14387466
 ] 

Colin Patrick McCabe edited comment on HDFS-7996 at 4/3/15 9:13 PM:
--------------------------------------------------------------------

The {{BlockReceiver}} should either be closed, or not closed.  Having lots of 
subtly different close functions just makes it hard to think about.

It would be much better to add a function like 
{{BlockReceiver#claimReplicaHandler}} and have that function return 
{{BlockReceiver#replicaHandler}} (and set {{BlockReceiver#replicaHandler}} to 
null).  Then {{finalizeBlock}} can hold on to the {{replicaHandler}} and its 
associated volume for as long as it wants.
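A minimal sketch of the ownership-transfer idea (the {{ReplicaHandler}} stand-in and method bodies here are assumptions for illustration, not the actual Hadoop source):

```java
import java.io.Closeable;

// Hypothetical stand-in for Hadoop's ReplicaHandler: it owns a volume
// reference that must be released exactly once.
class ReplicaHandler implements Closeable {
    private boolean closed = false;
    public boolean isClosed() { return closed; }
    @Override public void close() { closed = true; }
}

// Sketch of BlockReceiver#claimReplicaHandler (names assumed): the caller
// takes ownership of the handler, and close() then leaves it alone, so the
// volume reference stays valid until the caller releases it.
class BlockReceiverSketch {
    private ReplicaHandler replicaHandler = new ReplicaHandler();

    // Transfers ownership: returns the handler and nulls the field.
    ReplicaHandler claimReplicaHandler() {
        ReplicaHandler h = replicaHandler;
        replicaHandler = null;
        return h;
    }

    // Releases the handler only if nobody has claimed it.
    void close() {
        if (replicaHandler != null) {
            replicaHandler.close();
            replicaHandler = null;
        }
    }
}
```

With this shape, {{finalizeBlock}} would claim the handler before calling {{BlockReceiver.this.close()}} and release it only after {{datanode.data.finalizeBlock(block)}} returns.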

I also notice an existing bug in {{BlockReceiver#close}}... if there is an 
{{IOException}} closing the data or metadata streams, the volume reference will 
never be removed.  We should fix this with a finally block.  Similarly, if 
there is an IOE closing one stream, we should try to close the other stream 
rather than giving up.  Otherwise we have a file descriptor leak on IO error.  
[Edit: actually, the streams will be closed on an IOE, but not on a 
RuntimeException.]
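A sketch of the close-ordering fix described above (the stream and field names are placeholders, not the real {{BlockReceiver}} fields): close both streams even if the first one throws, and release the volume reference in a finally block so it cannot leak even on a RuntimeException.

```java
import java.io.Closeable;
import java.io.IOException;

class CloseSketch {
    // Closes the data stream, the checksum stream, and the volume reference.
    // An IOException from one stream does not prevent closing the other, and
    // the finally block releases the volume reference unconditionally.
    static void closeAll(Closeable dataOut, Closeable checksumOut,
                         Closeable volumeRef) throws IOException {
        IOException firstFailure = null;
        try {
            try {
                if (dataOut != null) dataOut.close();
            } catch (IOException e) {
                firstFailure = e;          // remember, but keep closing
            }
            try {
                if (checksumOut != null) checksumOut.close();
            } catch (IOException e) {
                if (firstFailure == null) firstFailure = e;
            }
        } finally {
            // Runs even if a stream close threw a RuntimeException.
            volumeRef.close();
        }
        if (firstFailure != null) throw firstFailure;
    }
}
```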



> After swapping a volume, BlockReceiver reports ReplicaNotFoundException
> -----------------------------------------------------------------------
>
>                 Key: HDFS-7996
>                 URL: https://issues.apache.org/jira/browse/HDFS-7996
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.6.0
>            Reporter: Lei (Eddy) Xu
>            Assignee: Lei (Eddy) Xu
>            Priority: Critical
>         Attachments: HDFS-7996.000.patch, HDFS-7996.001.patch, 
> HDFS-7996.002.patch
>
>
> When removing a disk from an actively writing DataNode, the BlockReceiver 
> working on the disk throws {{ReplicaNotFoundException}} because the replicas 
> are removed from the memory:
> {code}
> 2015-03-26 08:02:43,154 INFO 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Removed 
> volume: /data/2/dfs/dn/current
> 2015-03-26 08:02:43,163 INFO org.apache.hadoop.hdfs.server.common.Storage: 
> Removing block level storage: 
> /data/2/dfs/dn/current/BP-51301509-10.20.202.114-1427296597742
> 2015-03-26 08:02:43,163 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> IOException in BlockReceiver.run():
> org.apache.hadoop.hdfs.server.datanode.ReplicaNotFoundException: Cannot 
> append to a non-existent replica 
> BP-51301509-10.20.202.114-1427296597742:blk_1073742979_2160
>         at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.getReplicaInfo(FsDatasetImpl.java:615)
>         at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1362)
>         at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.finalizeBlock(BlockReceiver.java:1281)
>         at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1241)
>         at java.lang.Thread.run(Thread.java:745)
> {code}
> {{FsVolumeList#removeVolume}} waits for all threads to release the 
> {{FsVolumeReference}} on the volume being removed; however, 
> {{PacketResponder#finalizeBlock()}} calls
> {code}
> private void finalizeBlock(long startTime) throws IOException {
>       BlockReceiver.this.close();
>       final long endTime = ClientTraceLog.isInfoEnabled() ? System.nanoTime()
>           : 0;
>       block.setNumBytes(replicaInfo.getNumBytes());
>       datanode.data.finalizeBlock(block);
> {code}
> The {{FsVolumeReference}} was released in {{BlockReceiver.this.close()}} 
> before calling {{datanode.data.finalizeBlock(block)}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
