Jenkins build is back to normal : Hadoop-Hdfs-trunk #3137

2016-05-12 Thread Apache Jenkins Server
See 





Hadoop-Hdfs-trunk-Java8 - Build # 1203 - Still Failing

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1203/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7281 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [04:06 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  01:01 h]
[INFO] Apache Hadoop HDFS Native Client ................. SKIPPED
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.102 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:05 h
[INFO] Finished at: 2016-05-13T03:01:56+00:00
[INFO] Final Memory: 59M/1055M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestAsyncDFSRename.testAggressiveConcurrentAsyncAPI

Error Message:
test timed out after 6 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 6 milliseconds
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1477)
at org.apache.hadoop.ipc.Client.call(Client.java:1436)
at org.apache.hadoop.ipc.Client.call(Client.java:1358)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:241)
at com.sun.proxy.$Proxy19.getFileInfo(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:797)
at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:257)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
at com.sun.proxy.$Proxy23.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSCl

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1203

2016-05-12 Thread Apache Jenkins Server
See 

Changes:

[wang] HADOOP-13142. Change project version from 3.0.0 to 3.0.0-alpha1.

--
[...truncated 7088 lines...]
path '/home/jenkins/jenkins-slave': 
absolute:/home/jenkins/jenkins-slave
permissions: drwx
path '/home/jenkins': 
absolute:/home/jenkins
permissions: drwx
path '/home': 
absolute:/home
permissions: dr-x
path '/': 
absolute:/
permissions: dr-x

at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:835)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:482)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
at 
org.apache.hadoop.hdfs.TestAsyncDFSRename.internalTestConcurrentAsyncAPI(TestAsyncDFSRename.java:313)
at 
org.apache.hadoop.hdfs.TestAsyncDFSRename.testConservativeConcurrentAsyncAPI(TestAsyncDFSRename.java:284)

"refreshUsed-
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestGetConf
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.616 sec - in 
org.apache.hadoop.hdfs.tools.TestGetConf
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.79 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.231 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.776 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.78 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.02 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.333 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.942 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.567 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.422 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.48 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.143 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apac

Hadoop-Hdfs-trunk-Java8 - Build # 1202 - Still Failing

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1202/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 9132 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [04:37 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  01:07 h]
[INFO] Apache Hadoop HDFS Native Client ................. SKIPPED
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.101 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:12 h
[INFO] Finished at: 2016-05-13T01:25:59+00:00
[INFO] Final Memory: 70M/878M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
4 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs[1]

Error Message:
logging edit without syncing should do not affect txid expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: logging edit without syncing should do not affect 
txid expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs(TestEditLog.java:594)


FAILED:  org.apache.hadoop.hdfs.TestFileAppend.testMultipleAppends

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good 
datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:55386,DS-5edeebf2-d758-408e-b9ea-dfd03ef2db60,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:47556,DS-b4bcfc89-1628-4302-9cf3-b15a3f7e0ce9,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:47556,DS-b4bcfc89-1628-4302-9cf3-b15a3f7e0ce9,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:55386,DS-5edeebf2-d758-408e-b9ea-dfd03ef2db60,DISK]]).
 The current failed datanode replacement policy is DEFAULT, and a client may 
configure thi
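
The message above is truncated, but the policy it names is client-side
configurable. As a hedged illustration (an assumption about a common
mitigation on very small test clusters, not what this test actually does),
the property documented in hdfs-default.xml can relax the behavior:

    import org.apache.hadoop.conf.Configuration;

    // Hedged sketch: on 2-3 node MiniDFSCluster setups the DEFAULT
    // replace-datanode-on-failure policy can fail appends exactly as above.
    // "NEVER" keeps writing to the remaining datanodes instead of requiring
    // a replacement; property name per hdfs-default.xml.
    public class RelaxedPipelinePolicy {
      public static Configuration relaxedClientConf() {
        Configuration conf = new Configuration();
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy",
            "NEVER");
        return conf;
      }
    }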

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1202

2016-05-12 Thread Apache Jenkins Server
See 

Changes:

[wang] Revert "Update project version to 3.0.0-alpha1-SNAPSHOT."

[lei] HDFS-9389. Add maintenance states to AdminStates. (Ming Ma via lei)

--
[...truncated 8939 lines...]
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.991 sec - in 
org.apache.hadoop.hdfs.TestReservedRawPaths
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.109 sec - in 
org.apache.hadoop.hdfs.TestListFilesInDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.237 sec - in 
org.apache.hadoop.hdfs.TestLeaseRecovery2
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.029 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestGetConf
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.726 sec - in 
org.apache.hadoop.hdfs.tools.TestGetConf
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.587 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.751 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.884 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.312 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.69 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.154 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.628 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 207.584 sec - 
in org.apache.hadoop.hdfs.TestAsyncDFSRename
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 111.583 sec - 
in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.353 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 6, Failures: 0, Errors: 0

Hadoop-Hdfs-trunk - Build # 3136 - Still Failing

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3136/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6395 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ........................ SUCCESS [04:06 min]
[INFO] Apache Hadoop HDFS ............................... FAILURE [  01:00 h]
[INFO] Apache Hadoop HDFS Native Client ................. SKIPPED
[INFO] Apache Hadoop HttpFS ............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............ SKIPPED
[INFO] Apache Hadoop HDFS-NFS ........................... SKIPPED
[INFO] Apache Hadoop HDFS Project ....................... SUCCESS [  0.100 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:04 h
[INFO] Finished at: 2016-05-13T01:12:36+00:00
[INFO] Final Memory: 57M/730M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs[1]

Error Message:
logging edit without syncing should do not affect txid expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: logging edit without syncing should do not affect 
txid expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.server.namenode.TestEditLog.testBatchedSyncWithClosedLogs(TestEditLog.java:594)


FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys.testServiceRpcBindHostKey

Error Message:
Problem binding to [0.0.0.0:49908] java.net.BindException: Address already in 
use; For more details see:  http://wiki.apache.org/hadoop/BindException

Stack Trace:
java.net.BindException: Problem binding to [0.0.0.0:49908] 
java.net.BindException: Address already in use; For more details see:  
http://wiki.apache.org/hadoop/BindException
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
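
This is the usual hard-coded-port race in tests. A minimal sketch of the
common mitigation (a general test-hygiene note, not the fix for this
particular test) is to let the OS assign a free ephemeral port:

    import java.io.IOException;
    import java.net.ServerSocket;

    // Binding to port 0 asks the OS for a free ephemeral port, avoiding a
    // fixed port that may already be in use. The port can still be taken
    // between close() and reuse, so ideally the service itself binds to 0.
    public class FreePort {
      public static int pickFreePort() throws IOException {
        try (ServerSocket s = new ServerSocket(0)) {
          return s.getLocalPort();
        }
      }
    }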
 

Build failed in Jenkins: Hadoop-Hdfs-trunk #3136

2016-05-12 Thread Apache Jenkins Server
See 

Changes:

[wang] Revert "Update project version to 3.0.0-alpha1-SNAPSHOT."

[lei] HDFS-9389. Add maintenance states to AdminStates. (Ming Ma via lei)

--
[...truncated 6202 lines...]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
at 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"IPC Server handler 5 on 48072" daemon prio=5 tid=152 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
at 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"VolumeScannerThread(
 daemon prio=5 tid=162 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at java.lang.Object.wait(Native Method)
at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:613)
"CacheReplicationMonitor(979110695)"  prio=5 tid=6291 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2176)
at 
org.apache.hadoop.hdfs.server.blockmanagement.CacheReplicationMonitor.run(CacheReplicationMonitor.java:182)
"IPC Server listener on 48072" daemon prio=5 tid=139 runnable
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:87)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:98)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:102)
at org.apache.hadoop.ipc.Server$Listener.run(Server.java:901)
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 72.568 sec - in 
org.apache.hadoop.hdfs.TestAclsEndToEnd
Running org.apache.hadoop.hdfs.TestGetFileChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.033 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.324 sec - in 
org.apache.hadoop.hdfs.TestGetFileChecksum
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.895 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 105.326 sec - 
in org.apache.hadoop.hdfs.TestPread
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.763 sec - in 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.582 sec - in 
org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.349 sec - in 
org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.888 sec - in 
org.apache.hadoop.hdfs.TestLeaseRecovery2
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.091 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Running org.apache.hadoop.hdfs.tools.TestGetConf
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.039 sec - in 
org.apache.hadoop.hdfs.tools.TestGetConf
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.447 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Running org.apache.hadoop.hd

Re: [VOTE] Merge feature branch HADOOP-12930

2016-05-12 Thread Andrew Wang
+1. I looked at the patches on the branch, wasn't too bad to review. As
Allen said, there's some code movement, assorted other nice doc and shell
fixups.

Found one extra typo, which I added to HADOOP-13129.

Best,
Andrew

On Wed, May 11, 2016 at 1:14 AM, Sean Busbey  wrote:

> +1 (non-binding)
>
> reviewed everything, filed an additional subtask for a very trivial
> typo in the docs. should be fine to make a full issue after close and
> then fix.
>
> tried merging locally, tried running through new shell tests (both
> with and without bats installed), tried making an example custom
> command (valid and malformed). everything looks great.
>
> On Mon, May 9, 2016 at 1:26 PM, Allen Wittenauer  wrote:
> >
> > Hey gang!
> >
> > I’d like to call a vote to run for 7 days (ending May 16 at
> 13:30 PT) to merge the HADOOP-12930 feature branch into trunk. This branch
> was developed exclusively by me as per the discussion two months ago as a
> way to make what would be a rather large patch hopefully easier to review.
> The vast majority of the branch is code movement in the same file,
> additional license headers, maven assembly hooks for distribution, and
> variable renames. Not a whole lot of new code, but a big diff file
> nonetheless.
> >
> > This branch modifies the ‘hadoop’, ‘hdfs’, ‘mapred’, and ‘yarn’
> commands to allow for subcommands to be added or modified at runtime.  This
> allows for individual users or entire sites to tweak the execution
> environment to suit their local needs.  For example, it has been a practice
> for some locations to change the distcp jar out for a custom one.  Using
> this functionality, it is possible that the ‘hadoop distcp’ command could
> run the local version without overwriting the bundled jar and for existing
> documentation (read: results from Internet searches) to work as written
> without modification. This has the potential to be a huge win, especially
> for:
> >
> > * advanced end users looking to supplement the Apache
> Hadoop experience
> > * operations teams that may be able to leverage existing
> documentation without having to maintain local “exception” docs
> > * development groups wanting an easy way to trial
> experimental features
> >
> > Additionally, this branch includes the following, related
> changes:
> >
> > * Adds the first unit tests for the ‘hadoop’ command
> > * Adds the infrastructure for hdfs script testing and
> the first unit test for the ‘hdfs’ command
> > * Modifies the hadoop-tools components to be dynamic
> rather than hard coded
> > * Renames the shell profiles for hdfs, mapred, and yarn
> to be consistent with other bundled profiles, including the ones introduced
> in this branch
> >
> > Documentation, including a ‘hello world’-style example, is in
> the UnixShellGuide markdown file.  (Of course!)
> >
> >  I am at ApacheCon this week if anyone wants to discuss in-depth.
> >
> > Thanks!
> >
> > P.S.,
> >
> > There are still two open sub-tasks.  These are blocked by other
> issues so that we may add unit testing to the shell code in those
> respective areas.  I’ll convert them to full issues after HADOOP-12930 is closed.
> >
> >
> >
>
>
>
> --
> busbey
>
>
>
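
For reference, a minimal sketch of the dynamic-subcommand mechanism under
vote, following the function-naming convention described in the
UnixShellGuide (the profile location and jar path below are illustrative
assumptions, not Hadoop defaults):

    # Dropping a function named <command>_subcommand_<name> into a shell
    # profile (e.g. ${HADOOP_CONF_DIR}/shellprofile.d/distcp.sh) adds or
    # replaces that subcommand at runtime.
    function hadoop_subcommand_distcp
    {
      # The site-local jar path is an assumption for illustration only.
      hadoop_add_classpath /opt/site/lib/custom-distcp.jar before
      HADOOP_CLASSNAME=org.apache.hadoop.tools.DistCp
    }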


Re: Looking to a Hadoop 3 release

2016-05-12 Thread Karthik Kambatla
I am with Vinod on avoiding merging mostly_complete_branches to trunk since
we are not shipping any release off it. If 3.x releases going off of trunk
is going to help with this, I am fine with that approach. We should still
make sure to keep trunk-incompat small and not include large features.

On Sat, Apr 23, 2016 at 6:53 PM, Chris Douglas  wrote:

> If we're not starting branch-3/trunk, what would distinguish it from
> trunk/trunk-incompat? Is it the same mechanism with different labels?
>
> That may be a reasonable strategy when we create branch-3, as a
> release branch for beta. Releasing 3.x from trunk will help us figure
> out which incompatibilities can be called out in an upgrade guide
> (e.g., "new feature X is incompatible with uncommon configuration Y")
> and which require code changes (e.g., "data loss upgrading a cluster
> with feature X"). Given how long trunk has been unreleased, we need
> more data from deployments to triage. How to manage transitions
> between major versions will always be case-by-case; consensus on how
> we'll address generic incompatible changes is not saving any work.
>
> Once created, removing functionality from branch-3 (leaving it in
> trunk) _because_ nobody volunteers cycles to address urgent
> compatibility issues is fair. It's also more workable than asking that
> features be committed to a branch that we have no plan to release,
> even as alpha. -C
>
> On Fri, Apr 22, 2016 at 6:50 PM, Vinod Kumar Vavilapalli
>  wrote:
> > Tx for your replies, Andrew.
> >
> >>> For exit criteria, how about we time box it? My plan was to do monthly
> >> alphas through the summer, leading up to beta in late August / early
> Sep.
> >> At that point we freeze and stabilize for GA in Nov/Dec.
> >
> >
> > Time-boxing is a reasonable exit-criterion.
> >
> >
> >> In this case, does trunk-incompat essentially become the new trunk? Or
> are
> >> we treating trunk-incompat as a feature branch, which periodically
> merges
> >> changes from trunk?
> >
> >
> > It’s the latter. Essentially
> >  - trunk-incompat = trunk + only incompatible changes, periodically kept
> up-to-date to trunk
> >  - trunk is always ready to ship
> >  - and no compatible code gets left behind
> >
> > The reason for my proposal like this is to address the tension between
> “there is a lot of compatible code in trunk that we are not shipping” and
> “don’t ship trunk, it has incompatibilities”. With this, we will not have
> (compatible) code not getting shipped to users.
> >
> > Obviously, we can forget about all of my proposal completely if everyone
> puts in all compatible code into branch-2 / branch-3 or whatever the main
> releasable branch is. This didn’t work in practice; we have seen it fail
> prominently during 0.21, and now with 3.x.
> >
> > There is another related issue - "my feature is nearly ready, so I’ll
> just merge it into trunk as we don’t release that anyways, but not the
> current releasable branch - I’m too lazy to fix the last few stability-related
> issues”. With this, we will (should) get more disciplined, take feature
> stability on a branch seriously and merge a feature branch only when it is
> truly ready!
> >
> >> For 3.x, my strawman was to release off trunk for the alphas, then
> branch a
> >> branch-3 for the beta and onwards.
> >
> >
> > Repeating above, I’m proposing continuing to make GA 3.x releases also
> off of trunk! This way only incompatible changes don’t get shipped to users
> - by design! Eventually, trunk-incompat will be latest 3.x GA + enough
> incompatible code to warrant a 4.x, 5.x etc.
> >
> > +Vinod
>


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1201

2016-05-12 Thread Apache Jenkins Server
See 

Changes:

[jlowe] YARN-5053. More informative diagnostics when applications killed by a

[aw] HADOOP-12581. ShellBasedIdMapping needs suport for Solaris (Alan

--
[...truncated 29424 lines...]
  TestDistributedFileSystem.testAllWithNoXmlDefaults:1021->testDFSClose:177 » IO
  
TestAsyncDFSRename.testConservativeConcurrentAsyncRenameWithOverwrite:192->internalTestConcurrentAsyncRenameWithOverwrite:210
 » IO
  TestPread.testHedgedReadLoopTooManyTimes:322 » IO Timed out waiting for Mini 
H...
  
TestDistributedFileSystem.testRemoteRackOfFirstDegreeReadStatistics:811->testReadFileSystemStatistics:833
 » IO
  TestFileAppend2.testSimpleAppend2:233 » IO Timed out waiting for Mini HDFS 
Clu...
  TestAsyncDFSRename.testAsyncRenameWithOverwrite:70 » IO Timed out waiting for 
...
  TestPread.testPreadDFS:256->dfsPreadTest:456 » IO Timed out waiting for Mini 
H...
  TestDistributedFileSystem.testCreateWithCustomChecksum:1103 » IO Timed out 
wai...
  TestFileAppend2.testSimpleAppend:84 » IO Timed out waiting for Mini HDFS 
Clust...
  TestPread.testPreadDFSSimulated:474->testPreadDFS:256->dfsPreadTest:456 » IO 
T...
  TestCrcCorruption.testCorruptionDuringWrt:97 » IO Timed out waiting for Mini 
H...
  TestDistributedFileSystem.testFileCloseStatus:1143 » IO Timed out waiting for 
...
  TestFileAppend2.testAppendLessThanChecksumChunk:554 » IO Timed out waiting 
for...
  TestPread.testHedgedPreadDFSBasic:278->dfsPreadTest:456 » IO Timed out 
waiting...
  TestCrcCorruption.testCrcCorruption:233->thistest:161 » IO Timed out waiting 
f...
  TestFileAppend2.testComplexAppend:538->testComplexAppend:491 » IO Timed out 
wa...
  TestGetFileChecksum.setUp:46 » IO Timed out waiting for Mini HDFS Cluster to 
s...
  TestLeaseRecovery2.startUp:101 » IO Timed out waiting for Mini HDFS Cluster 
to...
  
TestCrcCorruption.testEntirelyCorruptFileThreeNodes:268->doTestEntirelyCorruptFile:279
 » IO
  TestFileAppend2.testComplexAppend2:543->testComplexAppend:491 » IO Timed out 
w...
  TestGetFileChecksum.setUp:46 » IO Timed out waiting for Mini HDFS Cluster to 
s...
  
TestCrcCorruption.testEntirelyCorruptFileOneNode:255->doTestEntirelyCorruptFile:279
 » IO
  TestLeaseRecovery2.startUp:101 » IO Timed out waiting for Mini HDFS Cluster 
to...
  TestWriteConfigurationToDFS.testWriteConf:39 » IO Timed out waiting for Mini 
H...
  TestLeaseRecovery2.startUp:101 » IO Timed out waiting for Mini HDFS Cluster 
to...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestLeaseRecovery2.startUp:101 » IO Timed out waiting for Mini HDFS Cluster 
to...
  TestListFilesInDFS.testSetUp:42 » IO Timed out waiting for Mini HDFS Cluster 
t...
  TestRollingUpgrade.testDFSAdminDatanodeUpgradeControlCommands:396 » IO Timed 
o...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestLeaseRecovery2.startUp:101 » IO Timed out waiting for Mini HDFS Cluster 
to...
  
TestParallelShortCircuitReadUnCached.setupCluster:66->TestParallelReadUtil.setupCluster:71
 » IO
  
TestParallelShortCircuitReadUnCached.teardownCluster:78->TestParallelReadUtil.teardownCluster:394
 » NullPointer
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestRollingUpgrade.testRollback:310 » IO Timed out waiting for Mini HDFS 
Clust...
  TestLeaseRecovery2.startUp:101 » IO Timed out waiting for Mini HDFS Cluster 
to...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestLeaseRecovery2.startUp:101 » IO Timed out waiting for Mini HDFS Cluster 
to...
  TestRollingUpgrade.testCheckpointWithSNN:654 » IO Timed out waiting for Mini 
H...
  TestStoragePolicyCommands.clusterSetUp:48 » IO Timed out waiting for Mini 
HDFS...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestStoragePolicyCommands.clusterSetUp:48 » IO Timed out waiting for Mini 
HDFS...
  TestDFSAdmin.setUp:75->restartCluster:92 » IO Timed out waiting for Mini HDFS 
...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestDFSAdmin.setUp:75->restartCluster:92 » IO Timed out waiting for Mini HDFS 
...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestOfflineImageViewerForXAttr.createOriginalFSImage:74 » IO Timed out 
waiting...
  TestOfflineImageViewerWithStripedBlocks.setup:61 » IO Timed out waiting for 
Mi...
  TestDFSAdmin.setUp:75->restartCluster:92 » IO Timed out waiting for Mini HDFS 
...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestOfflineImageViewer.createOriginalFSImage:118 » IO Timed out waiting for 
Mi...
  TestOfflineImageViewerForContentSummary.createOriginalFSImage:68 » IO Timed 
ou...
  TestDFSAdmin.setUp:75->restartCluster:92 » IO Timed out waiting for Mini HDFS 
...
  TestOfflineImageViewerForAcl.createOriginalF

Build failed in Jenkins: Hadoop-Hdfs-trunk #3135

2016-05-12 Thread Apache Jenkins Server
See 

Changes:

[jlowe] YARN-5053. More informative diagnostics when applications killed by a

[aw] HADOOP-12581. ShellBasedIdMapping needs suport for Solaris (Alan

--
[...truncated 28044 lines...]
  TestDistributedFileSystem.testAllWithNoXmlDefaults:1021->testDFSClose:177 » IO
  TestPread.testMaxOutHedgedReadPool:381 » IO Timed out waiting for Mini HDFS 
Cl...
  TestAsyncDFSRename.testAsyncRenameWithOverwrite:69 » IO Timed out waiting for 
...
  
TestDistributedFileSystem.testRemoteRackOfFirstDegreeReadStatistics:811->testReadFileSystemStatistics:832
 » IO
  TestFileAppend2.testSimpleAppend2:233 » IO Timed out waiting for Mini HDFS 
Clu...
  TestPread.testHedgedReadLoopTooManyTimes:321 » IO Timed out waiting for Mini 
H...
  TestDistributedFileSystem.testCreateWithCustomChecksum:1103 » IO Timed out 
wai...
  TestFileAppend2.testSimpleAppend:84 » IO Timed out waiting for Mini HDFS 
Clust...
  TestCrcCorruption.testCorruptionDuringWrt:97 » IO Timed out waiting for Mini 
H...
  TestPread.testPreadDFS:256->dfsPreadTest:456 » IO Timed out waiting for Mini 
H...
  TestDistributedFileSystem.testFileCloseStatus:1143 » IO Timed out waiting for 
...
  TestFileAppend2.testAppendLessThanChecksumChunk:553 » IO Timed out waiting 
for...
  TestCrcCorruption.testCrcCorruption:233->thistest:161 » IO Timed out waiting 
f...
  TestPread.testPreadDFSSimulated:474->testPreadDFS:256->dfsPreadTest:456 » IO 
T...
  TestFileAppend2.testComplexAppend:538->testComplexAppend:489 » IO Timed out 
wa...
  TestGetFileChecksum.setUp:45 » IO Timed out waiting for Mini HDFS Cluster to 
s...
  
TestCrcCorruption.testEntirelyCorruptFileThreeNodes:268->doTestEntirelyCorruptFile:279
 » IO
  TestPread.testHedgedPreadDFSBasic:278->dfsPreadTest:456 » IO Timed out 
waiting...
  TestFileAppend2.testComplexAppend2:543->testComplexAppend:489 » IO Timed out 
w...
  TestGetFileChecksum.setUp:45 » IO Timed out waiting for Mini HDFS Cluster to 
s...
  
TestCrcCorruption.testEntirelyCorruptFileOneNode:255->doTestEntirelyCorruptFile:279
 » IO
  TestLeaseRecovery2.startUp:98 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestWriteConfigurationToDFS.testWriteConf:39 » IO Timed out waiting for Mini 
H...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestLeaseRecovery2.startUp:98 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestListFilesInDFS.testSetUp:42 » IO Timed out waiting for Mini HDFS Cluster 
t...
  TestLeaseRecovery2.startUp:98 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestRollingUpgrade.testDFSAdminDatanodeUpgradeControlCommands:396 » IO Timed 
o...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  
TestParallelShortCircuitReadUnCached.setupCluster:66->TestParallelReadUtil.setupCluster:71
 » IO
  
TestParallelShortCircuitReadUnCached.teardownCluster:78->TestParallelReadUtil.teardownCluster:394
 » NullPointer
  TestLeaseRecovery2.startUp:98 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestRollingUpgrade.testRollback:310 » IO Timed out waiting for Mini HDFS 
Clust...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestLeaseRecovery2.startUp:98 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestRollingUpgrade.testCheckpointWithSNN:654 » IO Timed out waiting for Mini 
H...
  TestLeaseRecovery2.startUp:98 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestStoragePolicyCommands.clusterSetUp:48 » IO Timed out waiting for Mini 
HDFS...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestLeaseRecovery2.startUp:98 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestStoragePolicyCommands.clusterSetUp:48 » IO Timed out waiting for Mini 
HDFS...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestReservedRawPaths.setup:80 » IO Timed out waiting for Mini HDFS Cluster to 
...
  TestOfflineImageViewerForXAttr.createOriginalFSImage:74 » IO Timed out 
waiting...
  TestDFSAdmin.setUp:75->restartCluster:92 » IO Timed out waiting for Mini HDFS 
...
  TestOfflineImageViewerWithStripedBlocks.setup:61 » IO Timed out waiting for 
Mi...
  TestDFSAdmin.setUp:75->restartCluster:92 » IO Timed out waiting for Mini HDFS 
...
  TestOfflineImageViewer.createOriginalFSImage:118 » IO Timed out waiting for 
Mi...
  TestOfflineImageViewerForContentSummary.createOriginalFSImage:68 » IO Timed 
ou...
  TestOfflineImageViewerForAcl.createOriginalFSImage:102 » IO Timed out waiting 
...
  TestDFSAdmin.setUp:75->restartCluster:92 » IO Timed out waiting for Mini HDFS 
...
  TestDebugAdmin.setUp:51 » IO Timed out waiting for Mini HDFS Cluster to start
  TestDFSAdmin.setUp:7

[jira] [Created] (HDFS-10398) Update NN/DN min software version to 3.0.0-beta1

2016-05-12 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-10398:
--

 Summary: Update NN/DN min software version to 3.0.0-beta1
 Key: HDFS-10398
 URL: https://issues.apache.org/jira/browse/HDFS-10398
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0-beta1
Reporter: Andrew Wang
Priority: Blocker


Before we release the first 3.0.0 beta, we need to update the min software 
version to exclude the alpha releases, since we will not support alpha -> beta 
compatibility. Beta -> GA compatibility will work, though.
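
As a rough sketch of the kind of guard involved (assuming
org.apache.hadoop.util.VersionUtil; the real check happens during NN/DN
registration, and the constant name below is illustrative):

{code}
import org.apache.hadoop.util.VersionUtil;

public class MinVersionCheck {
  // Proposed floor from this issue; the actual constant/config differs.
  static final String MIN_SOFTWARE_VERSION = "3.0.0-beta1";

  static void checkSoftwareVersion(String reportedVersion) {
    // compareVersions() returns a negative value when the first argument
    // is the older version string.
    if (VersionUtil.compareVersions(reportedVersion,
        MIN_SOFTWARE_VERSION) < 0) {
      throw new IllegalStateException("Reported version " + reportedVersion
          + " is below the minimum supported version "
          + MIN_SOFTWARE_VERSION);
    }
  }
}
{code}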



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (HDFS-10397) Distcp should ignore -delete option if -diff option is provided instead of exiting

2016-05-12 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-10397:


 Summary: Distcp should ignore -delete option if -diff option is 
provided instead of exiting
 Key: HDFS-10397
 URL: https://issues.apache.org/jira/browse/HDFS-10397
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Mingliang Liu
Assignee: Mingliang Liu


In distcp, the {{-delete}} and {{-diff}} options are mutually exclusive. 
[HDFS-8828] introduced strict checking, which makes existing applications (or 
scripts) that previously worked fine with both {{-delete}} and {{-diff}} stop 
working because of the {{java.lang.IllegalArgumentException: Diff is valid 
only with update options}} exception.

To keep this backward compatible, we can ignore the {{-delete}} option when 
the {{-diff}} option is given, instead of exiting the program. Along with 
that, we can print a warning message saying that _Diff is valid only with 
update options, and the -delete option is ignored_.
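
A minimal sketch of the proposed leniency (class and method names here are
illustrative, not the actual DistCp option-validation code):

{code}
// Illustrative sketch only: drop -delete and warn when -diff is also
// present, instead of throwing IllegalArgumentException.
public class OptionReconciler {
  public static boolean reconcileDelete(boolean useDiff,
      boolean shouldDelete) {
    if (useDiff && shouldDelete) {
      System.err.println("WARN: Diff is valid only with update options; "
          + "the -delete option is ignored.");
      return false;        // ignore -delete rather than exiting
    }
    return shouldDelete;   // otherwise keep the requested behavior
  }
}
{code}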



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Created] (HDFS-10396) Using -diff option with DistCp may get "Comparison method violates its general contract" exception

2016-05-12 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HDFS-10396:


 Summary: Using -diff option with DistCp may get "Comparison method 
violates its general contract" exception
 Key: HDFS-10396
 URL: https://issues.apache.org/jira/browse/HDFS-10396
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang


Using the -diff option may produce the following exception, due to a bug in 
the comparison operator:

{code}
16/04/21 14:34:18 WARN tools.DistCp: Failed to use snapshot diff for distcp
java.lang.IllegalArgumentException: Comparison method violates its general 
contract!
at java.util.TimSort.mergeHi(TimSort.java:868)
at java.util.TimSort.mergeAt(TimSort.java:485)
at java.util.TimSort.mergeForceCollapse(TimSort.java:426)
at java.util.TimSort.sort(TimSort.java:223)
at java.util.TimSort.sort(TimSort.java:173)
at java.util.Arrays.sort(Arrays.java:659)
at org.apache.hadoop.tools.DistCpSync.moveToTarget(DistCpSync.java:293)
at org.apache.hadoop.tools.DistCpSync.syncDiff(DistCpSync.java:261)
at org.apache.hadoop.tools.DistCpSync.sync(DistCpSync.java:131)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:163)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:122)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:432)
16/04/21 14:34:18 ERROR tools.DistCp: Exception encountered 

{code}
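
For context, TimSort raises this error when a Comparator breaks its contract
(e.g. the sign of compare(a, b) is inconsistent with compare(b, a)). A
classic minimal example, unrelated to the actual DistCpSync comparator:

{code}
import java.util.Arrays;
import java.util.Comparator;

public class ComparatorContractDemo {
  public static void main(String[] args) {
    // Broken: int subtraction overflows for extreme values, so the sign of
    // compare(a, b) can disagree with -compare(b, a); on large inputs
    // TimSort may then throw "Comparison method violates its general
    // contract!".
    Comparator<Integer> broken = (a, b) -> a - b;

    // Fixed: Integer.compare() never overflows.
    Comparator<Integer> fixed = Integer::compare;

    Integer[] v = {Integer.MAX_VALUE, -1, 0, Integer.MIN_VALUE};
    Arrays.sort(v, fixed);
    System.out.println(Arrays.toString(v));  // [-2147483648, -1, 0, 2147483647]
  }
}
{code}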





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1200

2016-05-12 Thread Apache Jenkins Server
See 

Changes:

[sjlee] YARN-4577. Enable aux services to have their own custom classpath/jar

[stevel] MAPREDUCE-6639 Process hangs in LocatedFileStatusFetcher if

[wang] Update project version to 3.0.0-alpha1-SNAPSHOT.

[stevel] HADOOP-13028 add low level counter metrics for S3A; use in read

--
[...truncated 29263 lines...]
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  
TestCrcCorruption.testEntirelyCorruptFileOneNode:255->doTestEntirelyCorruptFile:279
 » IO
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  TestSetrepDecreasing.testSetrepDecreasing:27 » IO Timed out waiting for Mini 
H...
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  TestRead.testEOFWithBlockReaderLocal:64 » IO Timed out waiting for Mini HDFS 
C...
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  TestRead.testEOFWithRemoteBlockReader:79 » IO Timed out waiting for Mini HDFS 
...
  TestWriteBlockGetsBlockLengthHint.blockLengthHintIsPropagated:52 » IO Timed 
ou...
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  TestLocalDFS.testWorkingDirectory:72 » IO Timed out waiting for Mini HDFS 
Clus...
  TestRead.testReadReservedPath:95 » IO Timed out waiting for Mini HDFS Cluster 
...
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  TestBlocksScheduledCounter.testBlocksScheduledCounter:56 » IO Timed out 
waitin...
  TestLocalDFS.testHomeDirectory:115 » IO Timed out waiting for Mini HDFS 
Cluste...
  TestApplyingStoragePolicy.clusterSetUp:45 » IO Timed out waiting for Mini 
HDFS...
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  
TestBlocksScheduledCounter.testScheduledBlocksCounterShouldDecrementOnAbandonBlock:89
 » IO
  TestSetrepIncreasing.testSetRepWithStoragePolicyOnEmptyFile:91 » IO Timed out 
...
  TestApplyingStoragePolicy.clusterSetUp:45 » IO Timed out waiting for Mini 
HDFS...
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  TestDecommission.testDeadNodeCountAfterNamenodeRestart:936->startCluster:335 
» IO
  TestSetrepIncreasing.testSetrepIncreasing:80->setrep:43 » IO Timed out 
waiting...
  TestApplyingStoragePolicy.clusterSetUp:45 » IO Timed out waiting for Mini 
HDFS...
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  
TestDecommission.testNodeUsageAfterDecommissioned:1216->nodeUsageVerification:1299->cleanupFile:227
 NullPointer
  TestSetrepIncreasing.testSetrepIncreasingSimulatedStorage:84->setrep:43 » IO 
T...
  TestApplyingStoragePolicy.clusterSetUp:45 » IO Timed out waiting for Mini 
HDFS...
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  TestDecommission.testRecommission:633->startCluster:335 » IO Timed out 
waiting...
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  TestMultiThreadedHflush.testHflushWhileClosing:154 » IO Timed out waiting for 
...
  TestMissingBlocksAlert.testMissingBlocksAlert:68 » IO Timed out waiting for 
Mi...
  
TestDecommission.testClusterStatsFederation:722->testClusterStats:729->startCluster:335
 » IO
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  
TestMultiThreadedHflush.testMultipleHflushersRepl1:117->doTestMultipleHflushers:129
 » IO
  TestDecommission.testCountOnDecommissionedNodeList:1205 NullPointer
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage:628->upgradeAndVerify:594 
» IO
  
TestMultiThreadedHflush.testMultipleHflushersRepl3:122->doTestMultipleHflushers:129
 » IO
  TestDecommission.testUsedCapacity:1309->startCluster:335 » IO Timed out 
waitin...
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  TestDFSUpgradeFromImage.testUpgradeFromRel22Image:297->upgradeAndVerify:594 » 
IO
  TestFileStatus.testSetUp:69 » IO Timed out waiting for Mini HDFS Cluster to 
st...
  TestDecommission.testDecommissionOnStandby:467 » IO Timed out waiting for 
Mini...
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  TestBalancerBandwidth.testBalancerBandwidth:58 » IO Timed out waiting for 
Mini...
  TestDecommission.testDecommissionWithNamenodeRestart:889->startCluster:335 » 
IO
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  TestDecommission.testHostsFile:775->testHostsFile:794 » IO Timed out waiting 
f...
  TestSetTimes.testTimes:103 » IO Timed out waiting for Mini HDFS Cluster to 
sta...
  TestHDFSFileSystemContract.setUp:39 » IO Timed out waiting for Mini HDFS 
Clust...
  TestGenericRefresh.setUpBeforeClass:60 » IO Timed out waiting for Mini HDFS 
Cl.

Hadoop-Hdfs-trunk - Build # 3134 - Still Failing

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3134/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 18038 lines...]
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:482)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
at 
org.apache.hadoop.hdfs.server.namenode.TestAclConfigFlag.initCluster(TestAclConfigFlag.java:167)
at 
org.apache.hadoop.hdfs.server.namenode.TestAclConfigFlag.testModifyAclEntries(TestAclConfigFlag.java:67)

testEditLog(org.apache.hadoop.hdfs.server.namenode.TestAclConfigFlag)  Time 
elapsed: 12.031 sec  <<< ERROR!
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1345)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:848)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:482)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
at 
org.apache.hadoop.hdfs.server.namenode.TestAclConfigFlag.initCluster(TestAclConfigFlag.java:167)
at 
org.apache.hadoop.hdfs.server.namenode.TestAclConfigFlag.testEditLog(TestAclConfigFlag.java:120)

testGetAclStatus(org.apache.hadoop.hdfs.server.namenode.TestAclConfigFlag)  
Time elapsed: 11.858 sec  <<< ERROR!
java.io.IOException: Timed out waiting for Mini HDFS Cluster to start
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitClusterUp(MiniDFSCluster.java:1345)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:848)
at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:482)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
at 
org.apache.hadoop.hdfs.server.namenode.TestAclConfigFlag.initCluster(TestAclConfigFlag.java:167)
at 
org.apache.hadoop.hdfs.server.namenode.TestAclConfigFlag.testGetAclStatus(TestAclConfigFlag.java:111)

Running org.apache.hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate
Running org.apache.hadoop.hdfs.server.namenode.TestQuotaWithStripedBlocks
Slave went offline during the build
ERROR: Connection was broken: java.io.IOException: Unexpected termination of 
the channel
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
Caused by: java.io.EOFException
at 
java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2325)
at 
java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2794)
at 
java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:801)
at java.io.ObjectInputStream.(ObjectInputStream.java:299)
at 
hudson.remoting.ObjectInputStreamEx.(ObjectInputStreamEx.java:48)
at 
hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)

Build step 'Execute shell' marked build as failure
ERROR: Step ‘Archive the artifacts’ failed: no workspace for Hadoop-Hdfs-trunk 
#3134
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Hadoop-Hdfs-trunk #3134
ERROR: Build step failed with exception
java.lang.NullPointerException
at 
hudson.plugins.violations.ViolationsPublisher.perform(ViolationsPublisher.java:74)
at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:45)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:782)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:723)
at hudson.model.Build$BuildExecution.post2(Build.java:185)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:668)
at hudson.model.Run.execute(Run.java:1763)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Build step 'Report Violations' marked build as failure
ERROR: Step 'E-mail Notification' failed: no workspace for Hadoop-Hdfs-trunk #3134
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
ERROR: H9 is offline; cannot locate JDK 1.7 (latest)
ERROR: H9 is offline; cannot locate JDK 1.7 (latest)




###
## FAILED TESTS (if any) 
##
No tests ran.

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

[jira] [Created] (HDFS-10395) GlobalStorageStatistics should check null FileSystem scheme to avoid NPE

2016-05-12 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-10395:


 Summary: GlobalStorageStatistics should check null FileSystem 
scheme to avoid NPE
 Key: HDFS-10395
 URL: https://issues.apache.org/jira/browse/HDFS-10395
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs
Affects Versions: 2.8.0
Reporter: Mingliang Liu






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Build failed in Jenkins: Hadoop-Hdfs-trunk #3133

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3133/

Changes:

[stevel] HADOOP-13116 Jets3tNativeS3FileSystemContractTest does not run.

--
[...truncated 8198 lines...]
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.531 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.153 sec - in 
org.apache.hadoop.hdfs.util.TestCyclicIteration
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.468 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Running org.apache.hadoop.hdfs.TestDFSStartupVersions
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.91 sec - in 
org.apache.hadoop.hdfs.TestFileConcurrentReader
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.568 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.539 sec - in 
org.apache.hadoop.hdfs.TestDFSStartupVersions
Running org.apache.hadoop.hdfs.TestRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 107.868 sec - 
in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.832 sec - in 
org.apache.hadoop.hdfs.TestReservedRawPaths
Running org.apache.hadoop.hdfs.TestDFSRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.039 sec - in 
org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 14.371 sec - in 
org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.044 sec - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestFileStatusWithECPolicy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.737 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.181 sec - in 
org.apache.hadoop.hdfs.TestDFSRollback
Running org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Running org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.178 sec - in 
org.apache.hadoop.hdfs.TestFileStatusWithECPolicy
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.068 sec - in 
org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Running org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.511 sec - in 
org.apache.hadoop.hdfs.protocol.TestLayoutVersion
Running org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.81 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.TestPacketReceiver
Running org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 128.263 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180
Running org.apache.hadoop.hdfs.protocol.TestAnnotations
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.31 sec - in 
org.apache.hadoop.hdfs.protocol.TestAnnotations
Running org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.588 sec - in 
org.apache.hadoop.hdfs.protocol.TestLocatedBlock
Running org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.933 sec - in 
org.apache.hadoop.hdfs.protocol.TestBlockListAsLongs
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.272 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure190
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.459 sec - in 
org.apache.hadoop.hdfs.TestReadStripedFileWithMissingBlocks
Running org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.506 sec - in 
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.574 sec - in 
org.apache.hadoop.hdfs.TestFileAppendRestart
Running org.apache.hadoop.hdfs.TestFetchImage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.356 sec - in 
org.apache.hadoop.hdf

Hadoop-Hdfs-trunk - Build # 3133 - Still Failing

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3133/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8391 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [05:22 min]
[INFO] Apache Hadoop HDFS  FAILURE [  01:27 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.139 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:32 h
[INFO] Finished at: 2016-05-12T18:04:38+00:00
[INFO] Final Memory: 72M/801M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
8 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestAsyncDFSRename.testAggressiveConcurrentAsyncAPI

Error Message:
test timed out after 60000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 60000 milliseconds
at java.lang.Object.wait(Native Method)
at 
org.apache.hadoop.hdfs.DataStreamer.waitForAckedSeqno(DataStreamer.java:768)
at 
org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:697)
at 
org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:778)
at 
org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:755)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:430)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:379)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:372)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:365)
at 
org.apache.hadoop.hdfs.TestAsyncDFSRename.internalTestConcurrentAsyncAPI(TestAsyncDFSRename.java:328)
at 
org.apache.hadoop.hdfs.TestAsyncDFSRename.testAggressiveConcurrentAsyncAPI(TestAsyncDFSRename.java:289)


FAILED:  
org.apache.hadoop.hdfs.TestAsyncD

Hadoop-Hdfs-trunk-Java8 - Build # 1199 - Still Failing

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1199/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6980 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:09 min]
[INFO] Apache Hadoop HDFS  FAILURE [  01:03 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.078 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:07 h
[INFO] Finished at: 2016-05-12T17:38:13+00:00
[INFO] Final Memory: 59M/771M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestAsyncDFSRename.testAggressiveConcurrentAsyncAPI

Error Message:
test timed out after 60000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 60000 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2472)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.waitActive(MiniDFSCluster.java:2512)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNodes(MiniDFSCluster.java:1977)
at 
org.apache.hadoop.hdfs.TestAsyncDFSRename.internalTestConcurrentAsyncAPI(TestAsyncDFSRename.java:395)
at 
org.apache.hadoop.hdfs.TestAsyncDFSRename.testAggressiveConcurrentAsyncAPI(TestAsyncDFSRename.java:289)




-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1199

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1199/

Changes:

[stevel] HADOOP-13116 Jets3tNativeS3FileSystemContractTest does not run.

--
[...truncated 6787 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.829 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 104.503 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 117.038 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDisableConnCache
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.746 sec - 
in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.881 sec - in 
org.apache.hadoop.hdfs.TestDisableConnCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 162.091 sec - 
in org.apache.hadoop.hdfs.TestDFSClientRetries
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteReadStripedFile
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.73 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 80.27 sec - in 
org.apache.hadoop.hdfs.TestParallelUnixDomainRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.227 sec - in 
org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.869 sec - 
in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.667 sec - in 
org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.732 sec - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.468 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.48 sec - in 
org.apache.hadoop.hdfs.TestDFSClientSocketSize
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.15 sec - in 
org.apache.hadoop.hdfs.TestSnapshotCommands
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.474 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.469 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFail

Re: [DISCUSS] Set minimum version of Hadoop 3 to JDK8 (HADOOP-11858)

2016-05-12 Thread Masatake Iwasaki

+1

Masatake

On 5/12/16 13:11, Gangumalla, Uma wrote:

+1

Regards,
Uma

On 5/10/16, 2:24 PM, "Andrew Wang" wrote:


+1

On Tue, May 10, 2016 at 12:36 PM, Ravi Prakash wrote:


+1. Thanks for driving this, Akira.

On Tue, May 10, 2016 at 10:25 AM, Tsuyoshi Ozawa wrote:


Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in trunk.

Sounds good. To do so, we need to check the blockers of 3.0.0-alpha
RC, especially upgrading all dependencies which use reflection, at
first.

Thanks,
- Tsuyoshi

On Tue, May 10, 2016 at 8:32 AM, Akira AJISAKA wrote:

Hi developers,

Before cutting 3.0.0-alpha RC, I'd like to drop JDK7 support in trunk.

Given this is a critical change, I'm thinking we should get the
consensus first.

One concern I think is, when the minimum version is set to JDK8, we
need to configure Jenkins to disable the multi-JDK test only in trunk.

Any thoughts?

Thanks,
Akira
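
For reference, the switch under discussion amounts to a compiler-level bump in the root POM; a minimal sketch, assuming conventional Maven compiler properties rather than the eventual Hadoop patch:

{code}
<!-- hadoop-project/pom.xml (sketch): raise the minimum JDK to 8 -->
<properties>
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
</properties>
{code}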



-

To unsubscribe, e-mail: yarn-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: yarn-dev-h...@hadoop.apache.org


-
To unsubscribe, e-mail: mapreduce-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: mapreduce-dev-h...@hadoop.apache.org




-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org




-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10394) move declaration of okhttp version from hdfs-client to hadoop-project POM

2016-05-12 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-10394:
-

 Summary: move declaration of okhttp version from hdfs-client to 
hadoop-project POM
 Key: HDFS-10394
 URL: https://issues.apache.org/jira/browse/HDFS-10394
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


The POM dependency on okhttp in hadoop-hdfs-client currently declares its
version in that module's POM.

The root declaration, including the version, should go into
hadoop-project/pom.xml so that it's easy to track usage and there is only one
place to change if this version were ever to be incremented. As it stands, if
any other module picked up the library, it could adopt a different version.
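
A sketch of the intended layout, assuming the usual Maven dependencyManagement pattern (the groupId matches the okhttp 2.x line; the version shown is illustrative, not part of this issue):

{code}
<!-- hadoop-project/pom.xml: one shared, managed declaration -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.squareup.okhttp</groupId>
      <artifactId>okhttp</artifactId>
      <!-- illustrative value; the real version lives in the patch -->
      <version>2.4.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>

<!-- hadoop-hdfs-client/pom.xml: no version, inherited from the root -->
<dependency>
  <groupId>com.squareup.okhttp</groupId>
  <artifactId>okhttp</artifactId>
</dependency>
{code}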



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Hadoop-Hdfs-trunk - Build # 3132 - Still Failing

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3132/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5912 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [05:05 min]
[INFO] Apache Hadoop HDFS  FAILURE [  01:05 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.101 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:10 h
[INFO] Finished at: 2016-05-12T15:34:35+00:00
[INFO] Final Memory: 60M/900M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.testPageRounder

Error Message:
Timed out waiting for condition. Thread diagnostics:
Timestamp: 2016-05-12 02:42:08,190

"VolumeScannerThread(/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/test/data/2/dfs/data/data2)"
 daemon prio=5 tid=100 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at java.lang.Object.wait(Native Method)
at 
org.apache.hadoop.hdfs.server.datanode.VolumeScanner.run(VolumeScanner.java:613)
"IPC Server handler 4 on 32944" daemon prio=5 tid=46 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2082)
at 
java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:467)
at 
org.apache.hadoop.ipc.CallQueueManager.take(CallQueueManager.java:218)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2387)
"org.apache.hadoop.hdfs.server.namenode.LeaseManager$Monitor@749c2306" daemon 
prio=5 tid=55 timed_waiting
java.lang.Thread.State: TIMED_WAITING
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdfs.server.namenode.LeaseManager$Mo

Build failed in Jenkins: Hadoop-Hdfs-trunk #3132

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3132/

Changes:

[stevel] HADOOP-13122 Customize User-Agent header sent in HTTP requests by S3A.

--
[...truncated 5719 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.724 sec - in 
org.apache.hadoop.hdfs.TestSetrepDecreasing
Running org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestHttpPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.965 sec - in 
org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.582 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.497 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.751 sec - in 
org.apache.hadoop.hdfs.TestRead
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 101.474 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Running org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.542 sec - in 
org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.737 sec - in 
org.apache.hadoop.hdfs.TestLocalDFS
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.011 sec - in 
org.apache.hadoop.hdfs.TestApplyingStoragePolicy
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.541 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.249 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
Running org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.179 sec - in 
org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.099 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.656 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.205 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.574 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.16 sec - in 
org.apache.hadoop.hdfs.TestBalancerBandwidth
Running org.apache.hadoop.TestGenericRefresh
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.751 sec - in 
org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.tracing.TestTracing
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.633 sec - in 
org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.048 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.044 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.security.TestPermission
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.68 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.69 sec - in 
org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.064 sec - 
in org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.317 sec - in 
org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.474 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Running org.a

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #1198

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1198/

Changes:

[stevel] HADOOP-13122 Customize User-Agent header sent in HTTP requests by S3A.

--
[...truncated 5831 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.577 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.375 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.473 sec - in 
org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 21.847 sec - 
in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.437 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.136 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.15 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.013 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.829 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.975 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.782 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.474 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.415 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.878 sec - 
in org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsDisable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.932 sec - in 
org.apache.hadoop.fs.TestSymlinkHdfsDisable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 6, Failu

Hadoop-Hdfs-trunk-Java8 - Build # 1198 - Failure

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1198/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6024 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:28 min]
[INFO] Apache Hadoop HDFS  FAILURE [  01:08 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.107 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:12 h
[INFO] Finished at: 2016-05-12T15:32:44+00:00
[INFO] Final Memory: 59M/1055M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage

Error Message:
Cannot obtain block length for 
LocatedBlock{BP-1216550329-67.195.81.150-1463066841171:blk_7162739548153522810_1020;
 getBlockSize()=1024; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:47137,DS-3ceaeb16-f7d5-4dfb-985f-a4da1585f81a,DISK]]}

Stack Trace:
java.io.IOException: Cannot obtain block length for 
LocatedBlock{BP-1216550329-67.195.81.150-1463066841171:blk_7162739548153522810_1020;
 getBlockSize()=1024; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:47137,DS-3ceaeb16-f7d5-4dfb-985f-a4da1585f81a,DISK]]}
at 
org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:435)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:345)
at 
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:278)
at org.apache.hadoop.hdfs.DFSInputStream.&lt;init&gt;(DFSInputStream.java:267)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1038)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1003)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.dfsOpenFileWithRetries(TestDFSUpgradeFromImage.java:178)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyDir(Test

Build failed in Jenkins: Hadoop-Hdfs-trunk #3131

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3131/

Changes:

[rohithsharmaks] YARN-5068. Expose scheduler queue to application master. 
(Harish

--
[...truncated 8535 lines...]
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:631)
at 
org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:547)
at 
org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
at 
org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164)
at 
org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145)
at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:79)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:48)
at java.io.DataOutputStream.write(DataOutputStream.java:88)
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.write(TestDFSStripedOutputStreamWithFailure.java:441)
... 13 more

at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:327)
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:527)
at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test1(TestDFSStripedOutputStreamWithFailure.java:531)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 35.186 sec - in 
org.apache.hadoop.hdfs.TestSafeModeWithStripedFile
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.585 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.825 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Running org.apache.hadoop.hdfs.TestParallelReadUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.081 sec - in 
org.apache.hadoop.hdfs.TestParallelReadUtil
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.7 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.044 sec - 
in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.905 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Running org.apache.hadoop.hdfs.TestKeyProviderCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.37 sec - in 
org.apache.hadoop.hdfs.TestKeyProviderCache
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.655 sec - in 
org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.939 sec - in 
org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Tests run: 43, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.114 sec - 
in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 134.063 sec - 
in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestDFSOutputStream
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.864 sec - in 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate
Running org.apache.hadoop.hdfs.TestHDFSServerPorts
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.724 sec - in 
org.apache.hadoop.hdfs.TestDFSOutputStream
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped:

Hadoop-Hdfs-trunk - Build # 3131 - Failure

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/3131/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8728 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.5:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [05:06 min]
[INFO] Apache Hadoop HDFS  FAILURE [  01:17 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.116 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:22 h
[INFO] Finished at: 2016-05-12T12:47:25+00:00
[INFO] Final Memory: 71M/900M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
12 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestAsyncDFSRename.testAggressiveConcurrentAsyncAPI

Error Message:
test timed out after 60000 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 60000 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:825)
at 
org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:784)
at 
org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:755)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:430)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:379)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:372)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:365)
at 
org.apache.hadoop.hdfs.TestAsyncDFSRename.internalTestConcurrentAsyncAPI(TestAsyncDFSRename.java:328)
at 
org.apache.hadoop.hdfs.TestAsyncDFSRename.testAggressiveConcurrentAsyncAPI(TestAsyncDFSRename.java:289)


FAILED:  
org.apache.hadoop.hdfs.TestAsyncDFSRename.testConservativeConcurrentAsyncAPI

Error Message:
Cannot remove data directory

Jenkins build is back to normal : Hadoop-Hdfs-trunk-Java8 #1197

2016-05-12 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/1197/


-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10393) Unable to append to a SequenceFile with Compression.NONE.

2016-05-12 Thread Gervais Mickaël (JIRA)
Gervais Mickaël created HDFS-10393:
--

 Summary: Unable to append to a SequenceFile with Compression.NONE.
 Key: HDFS-10393
 URL: https://issues.apache.org/jira/browse/HDFS-10393
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.2
Reporter: Gervais Mickaël
Priority: Critical


Hi,

I'm trying to use the append functionality on an existing _SequenceFile_.

If I set _CompressionType.NONE_, it works when the file is created, but when
the file already exists I get a _NullPointerException_. It does work, however,
if I specify a compression type with a codec.

{code:title=Failing code|borderStyle=solid}
// Option and the static factories below come from
// org.apache.hadoop.io.SequenceFile.Writer (static imports assumed);
// createWriter is SequenceFile.createWriter(Configuration, Option...).
Option compression = compression(CompressionType.NONE);
Option keyClass = keyClass(LongWritable.class);
Option valueClass = valueClass(BytesWritable.class);
Option out = file(dfs);               // dfs is the target Path
Option append = appendIfExists(true);

writer = createWriter(conf,
                      out,
                      append,
                      compression,
                      keyClass,
                      valueClass);
{code}

The following exception is thrown when the file already exists, because the
compression option is checked:

{code}
Exception in thread "main" java.lang.NullPointerException
at 
org.apache.hadoop.io.SequenceFile$Writer.&lt;init&gt;(SequenceFile.java:1119)
at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:273)
{code}

This is due to the reader-side *codec*, which is _null_ when the existing file
is uncompressed:

{code:title=SequenceFile.java|borderStyle=solid}
 if (readerCompressionOption.value != compressionTypeOption.value
|| !readerCompressionOption.codec.getClass().getName()
.equals(compressionTypeOption.codec.getClass().getName())) {
  throw new IllegalArgumentException(
  "Compression option provided does not match the file");
}
{code}
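
A null-safe comparison along the following lines would avoid the NPE; this is a sketch of one possible guard (CompressionCodec is org.apache.hadoop.io.compress.CompressionCodec), not the committed fix:

{code}
// Sketch: only compare codec classes when both sides actually have a codec.
CompressionCodec readerCodec = readerCompressionOption.codec;
CompressionCodec optionCodec = compressionTypeOption.codec;
boolean sameType =
    readerCompressionOption.value == compressionTypeOption.value;
boolean sameCodec =
    (readerCodec == null && optionCodec == null)
    || (readerCodec != null && optionCodec != null
        && readerCodec.getClass().getName()
            .equals(optionCodec.getClass().getName()));
if (!sameType || !sameCodec) {
  throw new IllegalArgumentException(
      "Compression option provided does not match the file");
}
{code}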


Thanks,

Mickaël



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org