[jira] [Commented] (HDFS-7236) Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots

2014-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14171041#comment-14171041
 ] 

Hudson commented on HDFS-7236:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1926 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1926/])
HDFS-7236. Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots. 
Contributed by Yongjun Zhang. (jing9: rev 
98ac9f26c5b3bceb073ce444e42dc89d19132a1f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOpenFilesWithSnapshot.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


> Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots
> 
>
> Key: HDFS-7236
> URL: https://issues.apache.org/jira/browse/HDFS-7236
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Fix For: 2.6.0
>
> Attachments: HDFS-7236.001.patch
>
>
> Per the following report
> {code}
> Recently FAILED builds in url: 
> https://builds.apache.org/job/Hadoop-Hdfs-trunk
> THERE ARE 4 builds (out of 5) that have failed tests in the past 7 days, 
> as listed below:
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1898/testReport 
> (2014-10-11 04:30:40)
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots
> ===>https://builds.apache.org/job/Hadoop-Hdfs-trunk/1897/testReport 
> (2014-10-10 04:30:40)
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode.testDeadDatanode
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend
> Failed test: org.apache.hadoop.tracing.TestTracing.testReadTraceHooks
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots
> Failed test: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress
> Failed test: org.apache.hadoop.tracing.TestTracing.testWriteTraceHooks
> ...
> Among 5 runs examined, all failed tests <#failedRuns: testName>:
> 4: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication.testFencingStress
> 2: 
> org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencing.testQueueingWithAppend
> 2: 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots
> 1: 
> org.apache.hadoop.hdfs.server.namenode.TestDeadDatanode.testDeadDatanode
> ...
> {code}
> TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots failed in the most 
> recent two runs in trunk. Creating this jira for it. (The other two tests that 
> failed more often were reported in the separate jiras HDFS-7221 and HDFS-7226.)
> Symptom:
> {code}
> Error Message
> Timed out waiting for Mini HDFS Cluster to start
> Stacktrace
> java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1194)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1819)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1789)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.doTestMultipleSnapshots(TestOpenFilesWithSnapshot.java:184)
>   at 
> org.apache.hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot.testOpenFilesWithMultipleSnapshots(TestOpenFilesWithSnapshot.java:162)
> {code}
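> The timeout above comes out of MiniDFSCluster.waitClusterUp, which polls the
> cluster for readiness and gives up after a fixed deadline. As a rough,
> hypothetical sketch of that polling pattern (the names waitUntilUp and
> timeoutMillis are illustrative, not the actual Hadoop API):
> {code}
> import java.io.IOException;
> import java.util.function.BooleanSupplier;
>
> public class ClusterWait {
>     // Poll a readiness condition until a deadline, then fail with the
>     // IOException seen in the test report.
>     static void waitUntilUp(BooleanSupplier isUp, long timeoutMillis)
>             throws IOException, InterruptedException {
>         long deadline = System.currentTimeMillis() + timeoutMillis;
>         while (!isUp.getAsBoolean()) {
>             if (System.currentTimeMillis() >= deadline) {
>                 throw new IOException("Timed out waiting for Mini HDFS Cluster to start");
>             }
>             Thread.sleep(100);  // back off briefly between readiness checks
>         }
>     }
>
>     public static void main(String[] args) throws Exception {
>         long start = System.currentTimeMillis();
>         // A condition that becomes true after ~300 ms, standing in for NameNode startup.
>         waitUntilUp(() -> System.currentTimeMillis() - start > 300, 5000);
>         System.out.println("cluster up");
>
>         boolean timedOut = false;
>         try {
>             waitUntilUp(() -> false, 300);  // never becomes ready
>         } catch (IOException e) {
>             timedOut = true;
>         }
>         System.out.println("timedOut=" + timedOut);
>     }
> }
> {code}
> With a loop like this, a restartNameNode that stalls (e.g. the NameNode never
> leaves safe mode) surfaces as exactly this timeout rather than hanging the build.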
> AND
> {code}
> 2014-10-11 12:38:24,385 ERROR datanode.DataNode (DataXceiver.java:run(243)) - 
> 127.0.0.1:55303:DataXceiver error processing WRITE_BLOCK operation  src: 
> /127.0.0.1:32949 dst: /127.0.0.1:55303
> java.io.IOException: Premature EOF from inputStream
>   at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:196)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
>   at 
> org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:468)
>   at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:772)
>   at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:720)
> {code}
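The "Premature EOF from inputStream" in the IOUtils.readFully frame above means the DataXceiver insisted on a full packet of bytes but the peer closed the connection mid-packet. A minimal, self-contained sketch of that readFully contract (this is an illustration of the pattern, not the actual Hadoop implementation):

{code}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadFullyDemo {
    // Read exactly len bytes, or fail: a short read followed by EOF is an
    // error, which is how a writer disconnecting mid-packet shows up.
    static void readFully(InputStream in, byte[] buf, int off, int len)
            throws IOException {
        while (len > 0) {
            int n = in.read(buf, off, len);
            if (n == -1) {
                throw new IOException("Premature EOF from inputStream");
            }
            off += n;
            len -= n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] buf = new byte[8];
        // Only 5 bytes available, but 8 requested: simulates the client
        // side of the pipeline going away before the packet is complete.
        InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3, 4, 5});
        boolean prematureEof = false;
        try {
            readFully(in, buf, 0, 8);
        } catch (IOException e) {
            prematureEof = true;
        }
        System.out.println("prematureEof=" + prematureEof);
    }
}
{code}

This is why a NameNode restart mid-write shows up as DataNode-side WRITE_BLOCK errors: the writer's stream dies and the receiving DataNode sees EOF partway through a packet.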

[jira] [Commented] (HDFS-7236) Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots

2014-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170961#comment-14170961
 ] 

Hudson commented on HDFS-7236:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1901 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1901/])
HDFS-7236. Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots. 
Contributed by Yongjun Zhang. (jing9: rev 
98ac9f26c5b3bceb073ce444e42dc89d19132a1f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOpenFilesWithSnapshot.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt



[jira] [Commented] (HDFS-7236) Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots

2014-10-14 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14170815#comment-14170815
 ] 

Hudson commented on HDFS-7236:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #711 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/711/])
HDFS-7236. Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots. 
Contributed by Yongjun Zhang. (jing9: rev 
98ac9f26c5b3bceb073ce444e42dc89d19132a1f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOpenFilesWithSnapshot.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt



[jira] [Commented] (HDFS-7236) Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots

2014-10-13 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169647#comment-14169647
 ] 

Yongjun Zhang commented on HDFS-7236:
-

Many thanks [~jingzhao]!

FYI, I just took a brief look at HDFS-7226 (TestDNFencing.testQueueingWithAppend 
failed often in latest test) and found that it also seems to be related to the 
HDFS-7217 change. However, it's more subtle there, and it appears to have 
something to do with hflush. I will look more at that jira a bit later. 




[jira] [Commented] (HDFS-7236) Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots

2014-10-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169600#comment-14169600
 ] 

Hudson commented on HDFS-7236:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6249 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6249/])
HDFS-7236. Fix TestOpenFilesWithSnapshot#testOpenFilesWithMultipleSnapshots. 
Contributed by Yongjun Zhang. (jing9: rev 
98ac9f26c5b3bceb073ce444e42dc89d19132a1f)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOpenFilesWithSnapshot.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt

