Hadoop-Hdfs-trunk-Java8 - Build # 893 - Still Failing

2016-02-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/893/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5621 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.81 sec - in 
org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLeaseRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.226 sec - in 
org.apache.hadoop.hdfs.TestLeaseRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestEncryptionZones
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 64.427 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZones
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestSmallBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.839 sec - in 
org.apache.hadoop.hdfs.TestSmallBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Slave went offline during the build
ERROR: Connection was broken: java.io.IOException: Sorry, this connection is 
closed.
at 
com.trilead.ssh2.transport.TransportManager.ensureConnected(TransportManager.java:587)
at 
com.trilead.ssh2.transport.TransportManager.sendMessage(TransportManager.java:660)
at com.trilead.ssh2.channel.Channel.freeupWindow(Channel.java:407)
at com.trilead.ssh2.channel.Channel.freeupWindow(Channel.java:347)
at 
com.trilead.ssh2.channel.ChannelManager.getChannelData(ChannelManager.java:943)
at 
com.trilead.ssh2.channel.ChannelInputStream.read(ChannelInputStream.java:58)
at 
com.trilead.ssh2.channel.ChannelInputStream.read(ChannelInputStream.java:79)
at 
hudson.remoting.FlightRecorderInputStream.read(FlightRecorderInputStream.java:82)
at 
hudson.remoting.ChunkedInputStream.readHeader(ChunkedInputStream.java:72)
at 
hudson.remoting.ChunkedInputStream.readUntilBreak(ChunkedInputStream.java:103)
at 
hudson.remoting.ChunkedCommandTransport.readBlock(ChunkedCommandTransport.java:39)
at 
hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)
Caused by: java.io.IOException: Error: the peer is not consuming our 
asynchronous replies.
at 
com.trilead.ssh2.transport.TransportManager.sendAsynchronousMessage(TransportManager.java:628)
at com.trilead.ssh2.channel.Channel.freeupWindow(Channel.java:405)
at com.trilead.ssh2.channel.Channel$Output.write(Channel.java:99)
at 
com.trilead.ssh2.channel.ChannelManager.msgChannelExtendedData(ChannelManager.java:848)
at 
com.trilead.ssh2.channel.ChannelManager.handleMessage(ChannelManager.java:1463)
at 
com.trilead.ssh2.transport.TransportManager.receiveLoop(TransportManager.java:796)
at 
com.trilead.ssh2.transport.TransportManager$1.run(TransportManager.java:489)
at java.lang.Thread.run(Thread.java:745)

Build step 'Execute shell' marked build as failure
ERROR: Publisher 'Archive the artifacts' failed: no workspace for 
Hadoop-Hdfs-trunk-Java8 #893
ERROR: Publisher 'Publish JUnit test result report' failed: no workspace for 
Hadoop-Hdfs-trunk-Java8 #893
ERROR: Build step failed with exception
java.lang.NullPointerException
at 
hudson.plugins.violations.ViolationsPublisher.perform(ViolationsPublisher.java:74)
at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:45)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:776)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:723)
at hudson.model.Build$BuildExecution.post2(Build.java:183)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:670)
at hudson.model.Run.execute(Run.java:1763)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Build step 'Report Violations' marked build as failure
ERROR: Publisher 'E-mail Notification' failed: no workspace for 
Hadoop-Hdfs-trunk-Java8 #893
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
ERROR: H8 is offline; cannot locate jdk-1.8.0
ERROR: H8 is offline; cannot locate jdk-1.8.0




##

Hadoop-Hdfs-trunk - Build # 2823 - Still Failing

2016-02-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2823/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5886 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [06:05 min]
[INFO] Apache Hadoop HDFS  FAILURE [  04:49 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.083 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:55 h
[INFO] Finished at: 2016-02-11T12:04:31+00:00
[INFO] Final Memory: 55M/688M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling

Error Message:
test timed out after 30 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 30 milliseconds
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at 
org.apache.hadoop.hdfs.DataStreamer.waitAndQueuePacket(DataStreamer.java:804)
at 
org.apache.hadoop.hdfs.DFSOutputStream.enqueueCurrentPacket(DFSOutputStream.java:423)
at 
org.apache.hadoop.hdfs.DFSOutputStream.enqueueCurrentPacketFull(DFSOutputStream.java:432)
at 
org.apache.hadoop.hdfs.DFSOutputStream.writeChunk(DFSOutputStream.java:418)
at 
org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:125)
at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:111)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:418)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:376)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.createFile(TestDirectoryScanner.java:108)
at 
org.apache.hadoop.hd

Build failed in Jenkins: Hadoop-Hdfs-trunk #2823

2016-02-11 Thread Apache Jenkins Server
See 

Changes:

[jing9] HDFS-9789. Correctly update DataNode's scheduled block size when writing

[vvasudev] YARN-4628. Display application priority in yarn top. Contributed by

[vvasudev] YARN-4655. Log uncaught exceptions/errors in various thread pools in

--
[...truncated 5693 lines...]
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 177.48 sec - in 
org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.629 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 110.853 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.999 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.039 sec - in 
org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.195 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.393 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.609 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.123 sec - in 
org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.086 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.163 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.417 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.592 sec - in 
org.apache.hadoop.hdfs.TestLease
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 241.201 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 116.254 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 121.097 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestReconstructStripedFile
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 131.59 sec - 
in org.apache.hadoop.hdfs.TestReconstructStripedFile
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.723 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.236 sec - in 
org.apache.hadoop.hdfs.TestExternalBlockReader
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.866 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.851 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.048 sec - in 
org.apache.hadoop.hdfs.TestFileCreationDelete
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.237 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.458 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.858 sec - in 
org.apache.

[jira] [Created] (HDFS-9791) libhdfs++: ConfigurationLoader throws parse_exception on invalid input

2016-02-11 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-9791:


 Summary: libhdfs++: ConfigurationLoader throws parse_exception on 
invalid input
 Key: HDFS-9791
 URL: https://issues.apache.org/jira/browse/HDFS-9791
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bob Hansen


The coding style for libhdfs++ is to return Status objects rather than throwing 
exceptions.  The interface for ConfigurationLoader should switch out bools for 
Status objects, and ConfigurationLoader::UpdateMapWithBytes (and any other 
place we interact with rapidxml) should catch rapidxml exceptions, returning an 
appropriate Status message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9792) libhdfs++: EACCES not setting errno correctly

2016-02-11 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-9792:


 Summary: libhdfs++: EACCES not setting errno correctly
 Key: HDFS-9792
 URL: https://issues.apache.org/jira/browse/HDFS-9792
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bob Hansen
Assignee: Bob Hansen


When libhdfs++ gets a permissions error, it is failing to initialize errnum.

Due to changes passing in the night, the code in hdfs.cc that reads
{code}
case Status::Code::kPermissionDenied:
  if (!stat.ToString().empty())
ReportError(EACCES, stat.ToString().c_str());
  else
ReportError(EACCES, "Permission denied");
  break;
{code}
should read
{code}
case Status::Code::kPermissionDenied:
  errnum = EACCES;
  default_message = "Permission denied";
  break;
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9793) upgrade or remove guava dependency

2016-02-11 Thread PJ Fanning (JIRA)
PJ Fanning created HDFS-9793:


 Summary: upgrade or remove guava dependency
 Key: HDFS-9793
 URL: https://issues.apache.org/jira/browse/HDFS-9793
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: build
Reporter: PJ Fanning


http://mvnrepository.com/artifact/org.apache.hadoop/hadoop-hdfs/2.7.1 indicates 
a dependency on guava 11.0.2.
The Stopwatch API changed in recent guava versions.
Could we remove the dependency, or upgrade to guava 15 or 16, which still has 
the old deprecated Stopwatch constructor?
http://docs.guava-libraries.googlecode.com/git-history/v16.0/javadoc/index.html
This would be a development-line-only change.
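
As a rough illustration of the API gap (hypothetical call sites, not Hadoop 
code): the constructor-based usage from guava 11 and the factory-method usage 
from newer releases look like this. On guava 15 both forms should still 
compile, with the older one deprecated.

{code}
import com.google.common.base.Stopwatch;
import java.util.concurrent.TimeUnit;

public class StopwatchUsageSketch {
  public static void main(String[] args) throws InterruptedException {
    // guava 11.0.2 style: public constructor plus elapsedMillis(),
    // both deprecated and later removed in newer guava releases.
    Stopwatch oldStyle = new Stopwatch().start();
    Thread.sleep(10);
    System.out.println("old API: " + oldStyle.elapsedMillis() + " ms");

    // guava 15+ style: static factory plus elapsed(TimeUnit).
    Stopwatch newStyle = Stopwatch.createStarted();
    Thread.sleep(10);
    System.out.println("new API: " + newStyle.elapsed(TimeUnit.MILLISECONDS) + " ms");
  }
}
{code}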



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk-Java8 - Build # 894 - Still Failing

2016-02-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/894/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5902 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:06 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:19 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.052 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:23 h
[INFO] Finished at: 2016-02-11T22:16:15+00:00
[INFO] Final Memory: 56M/500M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout

Error Message:
write timedout too late in 1225 ms.

Stack Trace:
java.io.IOException: write timedout too late in 1225 ms.
at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at java.io.OutputStream.write(OutputStream.java:75)
at 
org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout(TestDistributedFileSystem.java:1040)




Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #894

2016-02-11 Thread Apache Jenkins Server
See 

Changes:

[zhz] HDFS-9755. Erasure Coding: allow to use multiple EC policies in striping

[Arun Suresh] YARN-2575. Create separate ACLs for Reservation

--
[...truncated 5709 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.816 sec - in 
org.apache.hadoop.hdfs.TestGetFileChecksum
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLeaseRecovery2
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.654 sec - in 
org.apache.hadoop.hdfs.TestLeaseRecovery2
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.353 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.423 sec - in 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.849 sec - 
in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestReservedRawPaths
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.425 sec - in 
org.apache.hadoop.hdfs.TestReservedRawPaths
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.767 sec - in 
org.apache.hadoop.hdfs.TestListFilesInDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.822 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.41 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestGetConf
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.18 sec - in 
org.apache.hadoop.hdfs.tools.TestGetConf
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.598 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.418 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.195 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.83 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.614 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerWithStripedBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.721 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Java Hot

Build failed in Jenkins: Hadoop-Hdfs-trunk #2824

2016-02-11 Thread Apache Jenkins Server
See 

Changes:

[zhz] HDFS-9755. Erasure Coding: allow to use multiple EC policies in striping

[Arun Suresh] YARN-2575. Create separate ACLs for Reservation

--
[...truncated 5108 lines...]
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.314 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.315 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.529 sec - in 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 63.886 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.425 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestSmallBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.254 sec - in 
org.apache.hadoop.hdfs.TestSmallBlock
Running org.apache.hadoop.hdfs.web.TestWebHDFS
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 119.994 sec - 
in org.apache.hadoop.hdfs.web.TestWebHDFS
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.686 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsWithMultipleNameNodes
Running org.apache.hadoop.hdfs.web.TestAuthFilter
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.761 sec - in 
org.apache.hadoop.hdfs.web.TestAuthFilter
Running org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.387 sec - 
in org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
Running org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.506 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsTokens
Running org.apache.hadoop.hdfs.web.TestJsonUtil
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.634 sec - in 
org.apache.hadoop.hdfs.web.TestJsonUtil
Running org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.454 sec - in 
org.apache.hadoop.hdfs.web.TestFSMainOperationsWebHdfs
Running org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Tests run: 64, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 28.46 sec - in 
org.apache.hadoop.hdfs.web.TestWebHDFSAcl
Running org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.327 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsUrl
Running org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.391 sec - in 
org.apache.hadoop.hdfs.web.TestWebHDFSForHA
Running org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.958 sec - in 
org.apache.hadoop.hdfs.web.TestHttpsFileSystem
Running org.apache.hadoop.hdfs.web.resources.TestParam
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.735 sec - in 
org.apache.hadoop.hdfs.web.resources.TestParam
Running org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.601 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsWithAuthenticationFilter
Running org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.819 sec - 
in org.apache.hadoop.hdfs.web.TestWebHDFSXAttr
Running org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.01 sec - in 
org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts
Running org.apache.hadoop.hdfs.TestClientBlockVerification
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.164 sec - in 
org.apache.hadoop.hdfs.TestClientBlockVerification
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.668 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.877 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.net.TestNetworkTopology
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.385 sec - in 
org.apache.hadoop.net.TestNetworkTopology
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.37 sec - in 
org.apache.hadoop.security.TestPermiss

Hadoop-Hdfs-trunk - Build # 2824 - Still Failing

2016-02-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2824/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5301 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:17 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:16 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.097 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:20 h
[INFO] Finished at: 2016-02-11T22:19:31+00:00
[INFO] Final Memory: 57M/829M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestFileAppend3.org.apache.hadoop.hdfs.TestFileAppend3

Error Message:
org/apache/hadoop/util/IntrusiveCollection$IntrusiveIterator

Stack Trace:
java.lang.NoClassDefFoundError: 
org/apache/hadoop/util/IntrusiveCollection$IntrusiveIterator
at 
org.apache.hadoop.util.IntrusiveCollection.iterator(IntrusiveCollection.java:213)
at 
org.apache.hadoop.util.IntrusiveCollection.clear(IntrusiveCollection.java:368)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.clearPendingCachingCommands(DatanodeManager.java:1580)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1186)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.close(FSNamesystem.java:1531)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.stopCommonServices(NameNode.java:774)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:953)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.stopAndJoinNameNode(MiniDFSCluster.java:1965)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1911)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1882)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1875)
at 
org.apache.hadoop.hdfs.TestFileAppend3.tearDown(TestFileAppend3.java:87)




[jira] [Created] (HDFS-9794) Streamer threads may leak if failure happens when closing the striped outputstream

2016-02-11 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-9794:
---

 Summary: Streamer threads may leak if failure happens when closing 
the striped outputstream
 Key: HDFS-9794
 URL: https://issues.apache.org/jira/browse/HDFS-9794
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Namit Maheshwari
Assignee: Jing Zhao
Priority: Critical


When closing the DFSStripedOutputStream, if failures happen while flushing out 
the data/parity blocks, the streamer threads will not be closed.
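
A minimal sketch of the fix pattern this describes, with hypothetical names 
(this is not the DFSStripedOutputStream code): shut the streamer threads down 
in a finally block, so a failure while flushing the data/parity blocks cannot 
leak them.

{code}
import java.io.IOException;
import java.util.List;

// Illustrative only: a writer that owns worker ("streamer") threads and must
// stop them even when the final flush fails.
class StripedWriterSketch {
  private final List<Thread> streamers;

  StripedWriterSketch(List<Thread> streamers) {
    this.streamers = streamers;
  }

  void close() throws IOException {
    try {
      flushAllBlocks();     // may throw while flushing data/parity blocks
    } finally {
      stopAllStreamers();   // runs on success and failure alike: no thread leak
    }
  }

  private void flushAllBlocks() throws IOException {
    // placeholder for flushing the remaining data/parity blocks
  }

  private void stopAllStreamers() {
    for (Thread streamer : streamers) {
      streamer.interrupt(); // ask each streamer thread to exit
    }
  }
}
{code}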



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9795) OIV Delimited should show which files are ACL-enabled.

2016-02-11 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HDFS-9795:
---

 Summary: OIV Delimited should show which files are ACL-enabled.
 Key: HDFS-9795
 URL: https://issues.apache.org/jira/browse/HDFS-9795
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.7.2
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Trivial


In {{hdfs oiv}} delimited output, there is no easy way to see whether a file 
has ACLs. 

{{FsShell}} shows a {{+}} in the permissions string.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9796) Add throttler for datanode bandwidth

2016-02-11 Thread Guocui Mi (JIRA)
Guocui Mi created HDFS-9796:
---

 Summary: Add throttler for datanode bandwidth
 Key: HDFS-9796
 URL: https://issues.apache.org/jira/browse/HDFS-9796
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode
Reporter: Guocui Mi
Priority: Minor


Add throttler for datanode bandwidth



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


RE: Hadoop encryption module as Apache Chimera incubator project

2016-02-11 Thread Chen, Haifeng
Thanks all the folks participating this discussion and providing valuable 
suggestions and options.

I suggest we take it forward to make a proposal in Apache Commons community. 

Thanks,
Haifeng

-Original Message-
From: Chen, Haifeng [mailto:haifeng.c...@intel.com] 
Sent: Friday, February 5, 2016 10:06 AM
To: hdfs-dev@hadoop.apache.org; common-...@hadoop.apache.org
Subject: RE: Hadoop encryption module as Apache Chimera incubator project

> [Chris] Yes, but even if the artifact is widely consumed, as a TLP it would 
> need to sustain a community. If the scope is too narrow, then it will quickly 
> fall into maintenance mode, its contributors will move on, and it will retire 
> to the attic. Alone, I doubt its viability as a TLP. So as a first option, 
> donating only this code to Apache Commons would accomplish some immediate 
> goals in a sustainable forum.
Totally agree. As a TLP it needs a clear scope and roadmap to sustain a 
development community. 

Thanks,
Haifeng

-Original Message-
From: Chris Douglas [mailto:cdoug...@apache.org]
Sent: Friday, February 5, 2016 6:28 AM
To: common-...@hadoop.apache.org
Cc: hdfs-dev@hadoop.apache.org
Subject: Re: Hadoop encryption module as Apache Chimera incubator project

On Thu, Feb 4, 2016 at 12:06 PM, Gangumalla, Uma  
wrote:

> [UMA] Ok. Great. You are right. I have cc'ed to hadoop common. (You 
> mean to cc Apache commons as well?)

I meant, if you start a discussion with Apache Commons, please CC 
common-dev@hadoop to coordinate.

> [UMA] Right now the encryption libraries are the only ones we planned, and 
> we see a lot of interest from other projects like Spark in using them. I see 
> some challenges when we bring a lot of other common code into this project: 
> it would all have different requirements and possibly different expected 
> release timelines. Some projects may want to use only the encryption 
> interfaces, but not all. As they are completely independent pieces of code, 
> it may be better to scope this out clearly.

Yes, but even if the artifact is widely consumed, as a TLP it would need to 
sustain a community. If the scope is too narrow, then it will quickly fall into 
maintenance mode, its contributors will move on, and it will retire to the 
attic. Alone, I doubt its viability as a TLP. So as a first option, donating 
only this code to Apache Commons would accomplish some immediate goals in a 
sustainable forum.

APR has a similar scope. As a second option, that may also be a reasonable 
home, particularly if some of the native bits could integrate with APR.

If the scope is broader, the effort could sustain prolonged development. The 
current code is developing a strategy for packing native libraries on multiple 
platforms, a capability that, say, the native compression codecs (AFAIK) still 
lack. While java.nio is improving, many projects would benefit from a better, 
native interface to the filesystem (e.g., NativeIO). We could avoid duplicating 
effort and collaborate on a common library.

As a third option, Hadoop already implements some useful native libraries, 
which is why a subproject might be a sound course. That would enable the 
subproject to coordinate with Hadoop on migrating its native functionality to a 
separable, reusable component, then move to a TLP when we can rely on it 
exclusively (if it has a well-defined, independent community). It could control 
its release cadence and limit its dependencies.

Finally, this is beside the point if nobody is interested in doing the work on 
such a project. It's rude to pull code out of Hadoop and donate it to another 
project so Spark can avoid a dependency, but this instance seems reasonable to 
me. -C

[1] https://apr.apache.org/

> On 2/3/16, 6:46 PM, "Chen, Haifeng"  wrote:
>
>>Thanks Chris.
>>
 I went through the repository, and now understand the reasoning 
that would locate this code in Apache Commons. This isn't proposing 
to extract much of the implementation and it takes none of the 
integration. It's limited to interfaces to crypto libraries and 
streams/configuration.
>>Exactly.
>>
 Chimera would be a boutique TLP, unless we wanted to draw out more 
of the integration and tooling. Is that a goal you're interested in 
pursuing? There's a tension between keeping this focused and 
including enough functionality to make it viable as an independent 
component.
>>The Chimera goal was for providing useful, common and optimized 
>>cryptographic functionalities. I would prefer that it is still focused 
>>in this clear scope. Multiple domain requirements will put more 
>>challenges and uncertainties in where and how it should go, thus more 
>>risk in stalling.
>>
 If the encryption libraries are the only ones you're interested in 
pulling out, then Apache Commons does seem like a better target than 
a separate project.
>>Yes. Just mentioned above, the library will be positioned in 
>>cryptograp

Re: Hadoop encryption module as Apache Chimera incubator project

2016-02-11 Thread Gangumalla, Uma
Thanks Haifeng. I was just waiting to see if there were any more comments. If
there are no further objections, I will initiate a discussion thread in Apache
Commons in a day's time and will also cc hadoop common.

Regards,
Uma

On 2/11/16, 6:13 PM, "Chen, Haifeng"  wrote:

>Thanks all the folks participating this discussion and providing valuable
>suggestions and options.
>
>I suggest we take it forward to make a proposal in Apache Commons
>community. 
>
>Thanks,
>Haifeng
>
>-Original Message-
>From: Chen, Haifeng [mailto:haifeng.c...@intel.com]
>Sent: Friday, February 5, 2016 10:06 AM
>To: hdfs-dev@hadoop.apache.org; common-...@hadoop.apache.org
>Subject: RE: Hadoop encryption module as Apache Chimera incubator project
>
>> [Chris] Yes, but even if the artifact is widely consumed, as a TLP it
>>would need to sustain a community. If the scope is too narrow, then it
>>will quickly fall into maintenance mode, its contributors will move on,
>>and it will retire to the attic. Alone, I doubt its viability as a TLP.
>>So as a first option, donating only this code to Apache Commons would
>>accomplish some immediate goals in a sustainable forum.
>Totally agree. As a TLP it needs a clear scope and roadmap to sustain a
>development community.
>
>Thanks,
>Haifeng
>
>-Original Message-
>From: Chris Douglas [mailto:cdoug...@apache.org]
>Sent: Friday, February 5, 2016 6:28 AM
>To: common-...@hadoop.apache.org
>Cc: hdfs-dev@hadoop.apache.org
>Subject: Re: Hadoop encryption module as Apache Chimera incubator project
>
>On Thu, Feb 4, 2016 at 12:06 PM, Gangumalla, Uma
> wrote:
>
>> [UMA] Ok. Great. You are right. I have cc'ed to hadoop common. (You
>> mean to cc Apache commons as well?)
>
>I meant, if you start a discussion with Apache Commons, please CC
>common-dev@hadoop to coordinate.
>
>> [UMA] Right now the encryption libraries are the only ones we planned, and
>> we see a lot of interest from other projects like Spark in using them. I see
>> some challenges when we bring a lot of other common code into this project:
>> it would all have different requirements and possibly different expected
>> release timelines. Some projects may want to use only the encryption
>> interfaces, but not all. As they are completely independent pieces of code,
>> it may be better to scope this out clearly.
>
>Yes, but even if the artifact is widely consumed, as a TLP it would need
>to sustain a community. If the scope is too narrow, then it will quickly
>fall into maintenance mode, its contributors will move on, and it will
>retire to the attic. Alone, I doubt its viability as a TLP. So as a first
>option, donating only this code to Apache Commons would accomplish some
>immediate goals in a sustainable forum.
>
>APR has a similar scope. As a second option, that may also be a
>reasonable home, particularly if some of the native bits could integrate
>with APR.
>
>If the scope is broader, the effort could sustain prolonged development.
>The current code is developing a strategy for packing native libraries on
>multiple platforms, a capability that, say, the native compression codecs
>(AFAIK) still lack. While java.nio is improving, many projects would
>benefit from a better, native interface to the filesystem (e.g.,
>NativeIO). We could avoid duplicating effort and collaborate on a common
>library.
>
>As a third option, Hadoop already implements some useful native
>libraries, which is why a subproject might be a sound course. That would
>enable the subproject to coordinate with Hadoop on migrating its native
>functionality to a separable, reusable component, then move to a TLP when
>we can rely on it exclusively (if it has a well-defined, independent
>community). It could control its release cadence and limit its
>dependencies.
>
>Finally, this is beside the point if nobody is interested in doing the
>work on such a project. It's rude to pull code out of Hadoop and donate
>it to another project so Spark can avoid a dependency, but this instance
>seems reasonable to me. -C
>
>[1] https://apr.apache.org/
>
>> On 2/3/16, 6:46 PM, "Chen, Haifeng"  wrote:
>>
>>>Thanks Chris.
>>>
> I went through the repository, and now understand the reasoning
>that would locate this code in Apache Commons. This isn't proposing
>to extract much of the implementation and it takes none of the
>integration. It's limited to interfaces to crypto libraries and
>streams/configuration.
>>>Exactly.
>>>
> Chimera would be a boutique TLP, unless we wanted to draw out more
>of the integration and tooling. Is that a goal you're interested in
>pursuing? There's a tension between keeping this focused and
>including enough functionality to make it viable as an independent
>component.
>>>The Chimera goal was for providing useful, common and optimized
>>>cryptographic functionalities. I would prefer that it is still focused
>>>in this clear scope. Multiple domain requirements will put more
>>>challenges and uncertainties in wher

Re: [VOTE] Release Apache Hadoop 2.6.4 RC0

2016-02-11 Thread Junping Du
Thanks Yongjun and Allen for the feedback. I agree that option 2 could be a 
safer option if there is any concern about option 1. Will defer this change to 2.6.5.

Thanks,

Junping

From: Yongjun Zhang 
Sent: Wednesday, February 10, 2016 7:11 PM
To: hdfs-dev@hadoop.apache.org
Cc: Hadoop Common; mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.6.4 RC0

Thanks Junping and Allen.

It'd be nice to have HDFS-9629 but I'm ok with option 2, given the fact
that the issue is not critical (and will be addressed in all future
releases), and the concern Allen raised.

Best,

--Yongjun

On Wed, Feb 10, 2016 at 8:37 AM, Allen Wittenauer  wrote:

>
> > On Feb 9, 2016, at 6:27 PM, Junping Du  wrote:
> >
> > Thanks Yongjun for identifying and proposing this change to 2.6.4. I
> think this is the right thing to do and check for following releases. For
> 2.6.4, it seems unnecessary to create another release candidate for this
> issue, as we only kick off a new RC build when the last RC has a serious
> problem in functionality. The vote is progressing quite smoothly so far, so it
> seems unlikely that we will create a new RC. However, I think there are
> still two options here:
> > Option 1:  in final build, adopt change of HDFS-9629 that only updates
> the footer of Web UI to show year 2016.
> > Option 2: skip HDFS-9629 for 2.6.4 and adopt it later for 2.6.5.
> > I prefer Option 1 as this is a very low risky change without affecting
> any functionality, and we allow non-functional changes (like release date,
> etc.) happen on final build after RC passed. I would like to hear the
> voices in community here before acting for the next step. Thoughts?
> >
>
> I’d think having PMC votes apply to what is not actually the final
> artifact is against the ASF rules.
>
>
>


[RESULT] [VOTE] Release Apache Hadoop 2.6.4 RC0

2016-02-11 Thread Junping Du
I give my binding +1 to conclude the vote for 2.6.4 RC0. With 14 +1s (7 
binding), and no -1s, the vote passes.

Thanks to everyone who tried the release candidate and voted. Also, special 
thanks to Kihwal Lee, Jason Lowe, Sangjin Lee, Akira AJISAKA, and all who helped 
in backporting patches for 2.6.4.

I'll push the release bits and send out an announcement for 2.6.4 soon.

Cheers,

Junping

From: Tsuyoshi Ozawa 
Sent: Tuesday, February 09, 2016 11:22 PM
To: mapreduce-...@hadoop.apache.org
Cc: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
yarn-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.6.4 RC0

+1(binding)

- Verified signatures.
- Ran tests for Apache Tez with the artifacts. All tests passed.
- Ran local cluster and ran some examples on it.

- Tsuyoshi

On Wed, Feb 10, 2016 at 2:18 AM, Sunil Govind  wrote:
> +1 (non-binding) with one note,
>
> - Installed tar ball from source and brought up the cluster
> - Ran a few MR jobs and those are working fine.
> - UI and REST APIs also look fine. Ran a few REST queries for this test.
> - Tested the node label feature and it works fine. (one observation during this
> testing)
>
> One note to add here: "-Dmapreduce.job.node-label-expression" support is
> not there in 2.6.4 (MAPREDUCE-6304), so we cannot specify a label for an
> application while submitting. I think this could also be ported to the 2.6
> branch so it would help to specify labels at submission time.
>
> Thanks and Regards
> Sunil
>
> On Wed, Feb 3, 2016 at 12:31 PM Junping Du  wrote:
>
>> Hi community folks,
>>I've created a release candidate RC0 for Apache Hadoop 2.6.4 (the next
>> maintenance release to follow up 2.6.3.) according to email thread of
>> release plan 2.6.4 [1]. Below is details of this release candidate:
>>
>> The RC is available for validation at:
>> http://people.apache.org/~junping_du/hadoop-2.6.4-RC0/
>>
>> The RC tag in git is: release-2.6.4-RC0
>>
>> The maven artifacts are staged via repository.apache.org at:
>> https://repository.apache.org/content/repositories/orgapachehadoop-1028/?
>>
>> You can find my public key at:
>> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
>>
>> Please try the release and vote. The vote will run for the usual 5 days.
>>
>> Thanks!
>>
>>
>> Cheers,
>>
>> Junping
>>
>>
>> [1]: 2.6.4 release plan: http://markmail.org/message/fk3ud3c665lscvx5?
>>
>>


[jira] [Created] (HDFS-9797) RequestHedgingProxyProvider is too verbose with Standby exceptions

2016-02-11 Thread Inigo Goiri (JIRA)
Inigo Goiri created HDFS-9797:
-

 Summary: RequestHedgingProxyProvider is too verbose with Standby 
exceptions
 Key: HDFS-9797
 URL: https://issues.apache.org/jira/browse/HDFS-9797
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Inigo Goiri
Assignee: Inigo Goiri
Priority: Minor


{{RequestHedgingProxyProvider}} tries to connect to all the Namenodes and 
reports the standby exceptions from all the other (standby) namenodes. There is 
no point in reporting a standby exception when it is expected.
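
A hedged sketch of the kind of change this suggests (not the actual 
RequestHedgingProxyProvider code): treat StandbyException as the expected case 
and log it at debug level, keeping a louder level only for unexpected 
failures. The surrounding class and method names are illustrative.

{code}
import org.apache.hadoop.ipc.StandbyException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative only: demote expected standby failures to debug level.
class HedgedProxyLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(HedgedProxyLoggingSketch.class);

  void logProxyFailure(String namenodeId, Exception e) {
    if (e instanceof StandbyException
        || e.getCause() instanceof StandbyException) {
      // Expected while hedging requests across active and standby namenodes.
      LOG.debug("Namenode {} is in standby state", namenodeId, e);
    } else {
      LOG.warn("Invocation against namenode " + namenodeId + " failed", e);
    }
  }
}
{code}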




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #895

2016-02-11 Thread Apache Jenkins Server
See 

Changes:

[cnauroth] HADOOP-12795. KMS does not log detailed stack trace for unexpected

[wang] HADOOP-12699. TestKMS#testKMSProvider intermittently fails during 'test

--
[...truncated 6183 lines...]
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.49 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 14.71 sec - in 
org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.866 sec - in 
org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.343 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.034 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.465 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.254 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.118 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.425 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.829 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.16 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.307 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.254 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.136 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 16.676 sec - 
in org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.permission.TestStickyBit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.752 sec - in 
org.apache.hadoop.fs.

Hadoop-Hdfs-trunk-Java8 - Build # 895 - Still Failing

2016-02-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/895/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6376 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:44 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:35 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.059 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:40 h
[INFO] Finished at: 2016-02-12T05:30:20+00:00
[INFO] Final Memory: 56M/578M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testVolumeIteratorWithCaching

Error Message:
test timed out after 6 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 6 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:812)
at 
org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:776)
at 
org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:747)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at 
org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:427)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:376)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:369)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:362)
at 
org.apache.hadoop.hdfs.server.datanode.TestBlockScanner$TestContext.createFiles(TestBlockScanner.java:129)
at 
org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testVolumeIteratorImpl(TestBlockScanner.java:159)
at 
org.apache.hadoop.hdfs.server.datanode.TestBlockScanner.testVo
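
The "test timed out after N milliseconds" failure above is the standard JUnit 4 per-test timeout firing while the test thread is still blocked (here, sleeping inside DFSOutputStream.completeFile while a file is being closed). A minimal sketch of that mechanism, assuming JUnit 4; the 60-second limit and the slowClose() helper are illustrative and not taken from TestBlockScanner:

    import org.junit.Test;

    public class TimeoutSketch {
      // When the annotated limit is exceeded, JUnit interrupts the test thread
      // and reports a failure of the form "test timed out after 60000 milliseconds".
      @Test(timeout = 60000)
      public void testSlowClose() throws Exception {
        slowClose();
      }

      // Illustrative stand-in for a close() call that blocks waiting on the NameNode.
      private void slowClose() throws InterruptedException {
        Thread.sleep(120_000); // exceeds the timeout, reproducing the failure mode
      }
    }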

Hadoop-Hdfs-trunk - Build # 2825 - Still Failing

2016-02-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2825/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5303 lines...]
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [06:19 min]
[INFO] Apache Hadoop HDFS  FAILURE [  04:49 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.085 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:55 h
[INFO] Finished at: 2016-02-12T06:49:48+00:00
[INFO] Final Memory: 69M/595M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: 
java.lang.RuntimeException: java.io.IOException: Stream Closed -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
5 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestFileAppend.testMultipleAppends

Error Message:
Failed to replace a bad datanode on the existing pipeline due to no more good 
datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:42638,DS-30e57a99-6a66-4df2-b725-040d10bf4731,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:42758,DS-c4acbab4-c205-4bad-a877-ba7abcdf1391,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:42638,DS-30e57a99-6a66-4df2-b725-040d10bf4731,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:42758,DS-c4acbab4-c205-4bad-a877-ba7abcdf1391,DISK]]).
 The current failed datanode replacement policy is DEFAULT, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.

Stack Trace:
java.io.IOException: Failed to replace a bad datanode on the existing pipeline 
due to no more good datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:42638,DS-30e57a99-6a66-4df2-b725-040d10bf4731,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:42758,DS-c4acbab4-c205-4bad-a877-ba7abcdf1391,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:42638,DS-30e57a99-6a66-4df2-b725-040d10bf4731,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:42758,DS-c4acbab4-c205-4bad-a877-ba7abcdf1391,DISK]]).
 The current failed datanode replacement policy is DEFAULT, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.
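
The message above names the client-side knob directly: 'dfs.client.block.write.replace-datanode-on-failure.policy'. A minimal sketch of how a client could set it before re-opening a file for append, assuming the standard org.apache.hadoop.conf.Configuration and FileSystem APIs; the "NEVER" value and the path are illustrative, not a recommendation for this test:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AppendPolicySketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Property named in the error message; "NEVER" stops the client from
        // looking for a replacement datanode when one in the pipeline fails.
        conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER");
        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.append(new Path("/tmp/example.txt"))) {
          out.writeBytes("appended under the relaxed policy\n");
        }
      }
    }

Whether relaxing the policy is appropriate depends on cluster size; in this test the pipeline only contains two 127.0.0.1 datanodes, so the failure is likely a small MiniDFSCluster simply running out of healthy nodes rather than a client misconfiguration.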

Build failed in Jenkins: Hadoop-Hdfs-trunk #2825

2016-02-11 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2825/

Changes:

[cnauroth] HADOOP-12795. KMS does not log detailed stack trace for unexpected

[wang] HADOOP-12699. TestKMS#testKMSProvider intermittently fails during 'test

--
[...truncated 5110 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 342.42 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.381 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.034 sec - in 
org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.488 sec - in 
org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.602 sec - in 
org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 111.801 sec - 
in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.978 sec - in 
org.apache.hadoop.hdfs.TestDFSInputStream
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 46.003 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.TestFileAppend
testMultipleAppends(org.apache.hadoop.hdfs.TestFileAppend)  Time elapsed: 7.351 
sec  <<< ERROR!
java.io.IOException: Failed to replace a bad datanode on the existing pipeline 
due to no more good datanodes being available to try. (Nodes: 
current=[DatanodeInfoWithStorage[127.0.0.1:42638,DS-30e57a99-6a66-4df2-b725-040d10bf4731,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:42758,DS-c4acbab4-c205-4bad-a877-ba7abcdf1391,DISK]],
 
original=[DatanodeInfoWithStorage[127.0.0.1:42638,DS-30e57a99-6a66-4df2-b725-040d10bf4731,DISK],
 
DatanodeInfoWithStorage[127.0.0.1:42758,DS-c4acbab4-c205-4bad-a877-ba7abcdf1391,DISK]]).
 The current failed datanode replacement policy is DEFAULT, and a client may 
configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' 
in its configuration.
at 
org.apache.hadoop.hdfs.DataStreamer.findNewDatanode(DataStreamer.java:1161)
at 
org.apache.hadoop.hdfs.DataStreamer.addDatanode2ExistingPipeline(DataStreamer.java:1231)
at 
org.apache.hadoop.hdfs.DataStreamer.handleDatanodeReplacement(DataStreamer.java:1422)
at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineInternal(DataStreamer.java:1337)
at 
org.apache.hadoop.hdfs.DataStreamer.setupPipelineForAppendOrRecovery(DataStreamer.java:1320)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:598)

Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 105.033 sec - 
in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.458 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 214.372 sec - 
in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.967 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.032 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.511 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.741 sec - in 
org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.632 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 139.445 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.483 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 3, Failures: