Hadoop-Hdfs-trunk - Build # 2497 - Still Failing

2015-10-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2497/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 9547 lines...]
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [07:42 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  04:02 h]
[INFO] Apache Hadoop HDFS Native Client ................. SKIPPED
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.123 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:09 h
[INFO] Finished at: 2015-10-31T13:33:33+00:00
[INFO] Final Memory: 56M/792M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-4937
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###################################################################################
############################## FAILED TESTS (if any) ##############################
###################################################################################
2 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt

Error Message:
test timed out after 5 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 5 milliseconds
at java.lang.Object.wait(Native Method)
at 
org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:763)
at 
org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:700)
at 
org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:774)
at 
org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:753)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at 
org.apache.hadoop.hdfs.TestCrcCorruption.testCorruptionDuringWrt(TestCrcCorruption.java:134)
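
Both failures in this report are JUnit timeouts surfacing in the DataStreamer
write path. For context, this is the kind of per-test limit that produces the
"test timed out after N milliseconds" message; a minimal JUnit 4 sketch follows,
with an illustrative test name and limit, not values taken from these tests:

import org.junit.Test;

public class TimeoutSketch {
  // JUnit 4 runs the body on a watched thread and fails the test with
  // "java.lang.Exception: test timed out after 50000 milliseconds"
  // once the declared limit is exceeded.
  @Test(timeout = 50000)
  public void testSlowWrite() throws Exception {
    Thread.sleep(60_000); // deliberately exceeds the 50-second budget
  }
}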


FAILED:  org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling

Error Message:
test timed out after 30 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 30 milliseconds
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at 
org.apache.hadoop.hdfs.DataStreamer.waitAndQueuePacket(DataStreamer.java:804)
at 
org.apache.hadoop.hdfs.DFSOutputStream.enqueueCurrentPacket(DFSOutputStream.java:423)
at 
org.apache.hadoop.hdfs.DFSOutputStream.enqueueCurrentPacketFull(DFSOu

Build failed in Jenkins: Hadoop-Hdfs-trunk #2497

2015-10-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2497/

Changes:

[yliu] Revert "fix CHANGES.txt"

[yliu] Revert "HDFS-4937. ReplicationMonitor can infinite-loop in

--
[...truncated 9354 lines...]
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 93.46 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.37 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.2 sec - in 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.547 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.555 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestSmallBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.153 sec - in 
org.apache.hadoop.hdfs.TestSmallBlock
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 114.96 sec - 
in org.apache.hadoop.hdfs.web.TestWebHDFS
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.309 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.996 sec - in 
org.apache.hadoop.hdfs.web.TestAuthFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.311 sec - 
in org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Running org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.782 sec - 
in org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.816 sec - in 
org.apache.hadoop.hdfs.web.TestJsonUtil
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.585 sec - 
in org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Running org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Tests run: 64, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 33.74 sec - in 
org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.848 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Running org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.967 sec - in 
org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Running org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.348 sec - in 
org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.169 sec - in 
org.apache.hadoop.hdfs.web.resources.TestParam
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.119 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Running org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.648 sec - 
in org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.293 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Running org.apache.hadoop.hdfs.TestClientBlockVerification
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.709 sec - in 
org.apache.hadoop.hdfs.TestClientBlockVerification
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.414 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.003 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.87 sec - in 
org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.225 sec - in 
org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.Tes

Jenkins build is back to normal : Hadoop-Hdfs-trunk-Java8 #560

2015-10-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/560/



[jira] [Created] (HDFS-9355) Support colocation in HDFS.

2015-10-31 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-9355:


 Summary: Support colocation in HDFS.
 Key: HDFS-9355
 URL: https://issues.apache.org/jira/browse/HDFS-9355
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore


Through this feature, a client can suggest to HDFS that all of its blocks be 
written to the same set of datanodes. Currently this can be achieved through 
HDFS-2576, which gives the client an option to hint the namenode about favored 
nodes, but in a heterogeneous cluster this does not work out. Suppose a client 
wants to write data into a directory with the COLD policy: it does not know 
which datanodes have ARCHIVE storage, so it cannot supply a favoredNodes list.


*Implementation*

Colocation can be enabled by setting "dfs.colocation.enable" to true in the 
client configuration. If colocation is enabled and the favoredNodes list is 
empty, {{DataStreamer}} will take the set of datanodes chosen for the first 
block as the favoredNodes, and subsequent blocks will be written to the same 
datanodes. Before closing the file, the client can retrieve this favoredNodes 
list and reuse it when writing a new file.
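
As a rough illustration of this flow, here is a minimal sketch in plain Java; 
the class, fields, and method names are stand-ins for description purposes, 
not actual {{DataStreamer}} internals:

{code:java}
// Illustrative sketch only: real support would live inside DataStreamer's
// block-allocation path rather than a standalone class.
class ColocationSketch {
  private final boolean colocationEnabled; // from "dfs.colocation.enable"
  private String[] favoredNodes;           // empty until the first block is placed

  ColocationSketch(boolean colocationEnabled, String[] favoredNodes) {
    this.colocationEnabled = colocationEnabled;
    this.favoredNodes = favoredNodes;
  }

  /** Called once the namenode has chosen datanodes for a block. */
  void onBlockAllocated(String[] chosenNodes) {
    if (colocationEnabled && (favoredNodes == null || favoredNodes.length == 0)) {
      // Pin the set picked for the first block; subsequent blocks pass
      // these nodes back to the namenode as favored nodes.
      favoredNodes = chosenNodes.clone();
    }
  }

  /** Before close, the client can read this back and reuse it for a new file. */
  String[] getFavoredNodes() {
    return favoredNodes == null ? null : favoredNodes.clone();
  }
}
{code}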
 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9354) Fix TestBalancer#testBalancerWithZeroThreadsForMove on Windows

2015-10-31 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-9354:


 Summary: Fix TestBalancer#testBalancerWithZeroThreadsForMove on 
Windows
 Key: HDFS-9354
 URL: https://issues.apache.org/jira/browse/HDFS-9354
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


This negative test expects a HadoopIllegalArgumentException on an illegal 
configuration. It uses JUnit's (expected=HadoopIllegalArgumentException.class) 
and passes fine on Linux.

On Windows, the test passes as well, but it leaves open handles on the NN 
metadata directories used by MiniDFSCluster. As a result, quite a few 
subsequent TestBalancer unit tests can't start MiniDFSCluster, because the 
open handles prevent them from cleaning up the NN metadata directories on 
Windows.

This JIRA is opened to explicitly catch the exception and ensure the test 
cluster is properly shut down.
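
A rough sketch of the intended shape of the fix, assuming the usual 
MiniDFSCluster test pattern (the class name and the elided test body are 
illustrative):

{code:java}
import static org.junit.Assert.fail;

import org.apache.hadoop.HadoopIllegalArgumentException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestBalancerShutdownSketch {

  @Test
  public void testBalancerWithZeroThreadsForMove() throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(1).build();
    try {
      // ... real test body elided: run the balancer with the illegal
      // zero-threads-for-move configuration ...
      fail("Expected HadoopIllegalArgumentException");
    } catch (HadoopIllegalArgumentException expected) {
      // Expected: the illegal configuration is rejected.
    } finally {
      // Always shut the cluster down so no open handles are left on the
      // NN metadata directories (the Windows-specific leak described above).
      cluster.shutdown();
    }
  }
}
{code}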



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HDFS-4937) ReplicationMonitor can infinite-loop in BlockPlacementPolicyDefault#chooseRandom()

2015-10-31 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu reopened HDFS-4937:
--

> ReplicationMonitor can infinite-loop in 
> BlockPlacementPolicyDefault#chooseRandom()
> --
>
> Key: HDFS-4937
> URL: https://issues.apache.org/jira/browse/HDFS-4937
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.4-alpha, 0.23.8
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
>  Labels: BB2015-05-TBR
> Fix For: 3.0.0, 2.7.2
>
> Attachments: HDFS-4937.patch, HDFS-4937.v1.patch, HDFS-4937.v2.patch
>
>
> When a large number of nodes are removed by refreshing the node lists, the 
> network topology is updated. If the refresh happens at just the right moment, 
> the replication monitor thread may get stuck in the while loop of 
> {{chooseRandom()}}, because the cached cluster size is used in the loop's 
> terminal condition check. This usually happens when a block with a high 
> replication factor is being processed. Since replicas-per-rack is also 
> calculated beforehand, no node choice may satisfy the goodness criteria if 
> the refresh removed racks. All nodes end up in the excluded list, but its 
> size will still be less than the cached cluster size, so the loop runs 
> forever. This was observed in a production environment.
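
A simplified reconstruction of that failure mode, with illustrative names 
(the real logic in BlockPlacementPolicyDefault#chooseRandom() is considerably 
more involved):

{code:java}
import java.util.HashSet;
import java.util.Set;

public class ChooseRandomSketch {
  static String chooseRandom(Set<String> liveNodes, int cachedClusterSize) {
    Set<String> excluded = new HashSet<>();
    // Bug: the terminal condition compares against a cluster size cached
    // *before* the refresh removed nodes. Once every remaining live node
    // is excluded, excluded.size() == liveNodes.size() < cachedClusterSize,
    // so the condition stays true and the loop never exits.
    while (excluded.size() < cachedClusterSize) {
      String candidate = liveNodes.stream()
          .filter(n -> !excluded.contains(n))
          .findAny().orElse(null);
      if (candidate == null) {
        continue; // nothing left to try: this is the infinite loop
      }
      if (isGoodTarget(candidate)) {
        return candidate;
      }
      excluded.add(candidate);
    }
    return null;
  }

  // Stand-in for the real goodness checks (storage type, rack limits, load);
  // if no node can satisfy them, the bug above manifests.
  static boolean isGoodTarget(String node) {
    return false;
  }
}
{code}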



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #559

2015-10-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/559/

Changes:

[sjlee] Updated the 2.6.2 final release date.

--
[...truncated 12874 lines...]
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

  
TestDFSStripedOutputStreamWithFailure200>TestDFSStripedOutputStreamWithFailure$TestBase.test8:497->TestDFSStripedOutputStreamWithFailure$TestBase.run:486
 failed, dn=0, length=4521983
java.lang.AssertionError: expected:<1003> but was:<1004>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:362)
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:281)
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:486)
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test8(TestDFSStripedOutputStreamWithFailure.java:497)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

  TestSafeModeWithStripedFile.testStripedFile0:72->doTest:151 null
  TestSafeModeWithStripedFile.testStripedFile1:77->doTest:151 null
  
TestDFSStripedOutputStreamWithFailure110>TestDFSStripedOutputStreamWithFailure$TestBase.test3:492->TestDFSStripedOutputStreamWithFailure$TestBase.run:486
 failed, dn=0, length=2424832
java.io.IOException: Failed at i=4607
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.write(TestDFSStripedOutputStreamWithFailure.java:402)
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:378)
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:281)
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:486)
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test3(TestDFSStripedOutputStreamWithFailure.java:492)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.io.IOException: Failed to get 6 nodes from namenode: 
blockGroupSize= 9, blocks.length= 5
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.allocateNewBlock(DFSStripedOutputStream.java:443)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:482)
at 
org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
at 
org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
at 
org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:79)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:48)
at java.io.DataOutputStream.write(DataOutputStream.java:88)
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.write(TestDFSStripedOutputStreamWithFailure.java:400)
... 13 more

  
TestDFSStripedOutputStreamWit