[jira] [Updated] (HDFS-7236) Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots

2014-10-13 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7236:

Summary: Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots  
(was: TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots failed in 
trunk)

 Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots
 

 Key: HDFS-7236
 URL: https://issues.apache.org/jira/browse/HDFS-7236
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-7236.001.patch


 Per the following report:
 {code}
 Recently FAILED builds in url: 
 https://builds.apache.org/job/Hadoop-Hdfs-trunk
 THERE ARE 4 builds (out of 5) that have failed tests in the past 7 days, 
 as listed below:
 ===https://builds.apache.org/job/Hadoop-Hdfs-trunk/1898/testReport 
 (2014-10-11 04:30:40)
 Failed test: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend
 Failed test: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress
 Failed test: 
 org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots
 ===https://builds.apache.org/job/Hadoop-Hdfs-trunk/1897/testReport 
 (2014-10-10 04:30:40)
 Failed test: 
 org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode.testDeadDatanode
 Failed test: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend
 Failed test: org.apache.hadoop.tracing.TestTracing.testReadTraceHooks
 Failed test: 
 org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots
 Failed test: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress
 Failed test: org.apache.hadoop.tracing.TestTracing.testWriteTraceHooks
 ...
 Among 5 runs examined, all failed tests #failedRuns: testName:
 4: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress
 2: 
 org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend
 2: 
 org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots
 1: 
 org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode.testDeadDatanode
 ...
 {code}
 TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots failed in the two 
 most recent trunk runs. Creating this jira for it. (The two tests that failed 
 more often are tracked in separate jiras HDFS-7221 and HDFS-7226.)
 Symptom:
 {code}
 Error Message
 Timed out waiting for Mini HDFS Cluster to start
 Stacktrace
 java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1194)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1819)
   at 
 org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1789)
   at 
 org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.doTestMultipleSnapshots(TestOpenFilesWithSnapshot.java:184)
   at 
 org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots(TestOpenFilesWithSnapshot.java:162)
 {code}
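 The timeout above is the generic failure mode of a bounded polling wait: 
 MiniDFSCluster.waitClusterUp polls the cluster's readiness until a deadline 
 expires. A minimal sketch of that pattern follows; the class and method names 
 (ClusterWait, waitUntilUp) are illustrative, not Hadoop source:

```java
import java.io.IOException;
import java.util.function.BooleanSupplier;

// Illustrative sketch, not Hadoop code: a bounded polling wait of the
// shape behind "Timed out waiting for Mini HDFS Cluster to start".
// The readiness check is polled until it passes or the deadline expires,
// at which point the wait fails with an IOException.
public class ClusterWait {
    public static void waitUntilUp(BooleanSupplier isClusterUp,
                                   long timeoutMillis) throws IOException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (!isClusterUp.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new IOException(
                    "Timed out waiting for Mini HDFS Cluster to start");
            }
            try {
                Thread.sleep(100);  // back off between readiness checks
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException("Interrupted while waiting", e);
            }
        }
    }
}
```

 Note that a wait like this reports only the timeout, not why startup stalled, 
 which is why the stack trace alone does not identify the root cause.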
 AND
 {code}
 2014-10-11 12:38:24,385 ERROR datanode.DataNode (DataXceiver.java:run(243)) - 
 127.0.0.1:55303:DataXceiver error processing WRITE_BLOCK operation  src: 
 /127.0.0.1:32949 dst: /127.0.0.1:55303
 java.io.IOException: Premature EOF from inputStream
   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:196)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:468)
   at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:772)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:720)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
   at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
   at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:225)
   at java.lang.Thread.run(Thread.java:662)
 {code}
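 "Premature EOF from inputStream" is what a read-fully loop reports when the 
 remote side closes the socket before the requested number of bytes arrives: 
 here the writer went away while the DataNode still expected packet data. A 
 minimal sketch of the pattern (illustrative, not Hadoop's actual IOUtils):

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Illustrative sketch, not Hadoop code: a read-fully loop keeps reading
// until the requested byte count is satisfied. If the stream ends first
// (read() returns -1), the partial read is surfaced as a premature EOF.
public class ReadFully {
    public static void readFully(InputStream in, byte[] buf,
                                 int off, int len) throws IOException {
        while (len > 0) {
            int n = in.read(buf, off, len);
            if (n == -1) {
                throw new EOFException("Premature EOF from inputStream");
            }
            off += n;
            len -= n;
        }
    }
}
```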
 AND
 {code}
 2014-10-11 12:38:28,552 WARN  datanode.DataNode 
 (BPServiceActor.java:offerService(751)) - RemoteException in offerService
 org.apache.hadoop.ipc.RemoteException(java.io.IOException): Got incremental 
 block report from unregistered
 ...
 {code}

[jira] [Updated] (HDFS-7236) Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots

2014-10-13 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7236:

   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk, branch-2 and branch-2.6.0. Thanks for the 
contribution, [~yzhangal]!

 Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots
 

 Key: HDFS-7236
 URL: https://issues.apache.org/jira/browse/HDFS-7236
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Fix For: 2.6.0

 Attachments: HDFS-7236.001.patch



[jira] [Updated] (HDFS-7236) Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots

2014-10-13 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7236:

Affects Version/s: 2.6.0


[jira] [Updated] (HDFS-7236) Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots

2014-10-13 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7236:

Target Version/s: 2.6.0
