Build failed in Jenkins: Hadoop-Hdfs-trunk #2793

2016-02-02 Thread Apache Jenkins Server
See 

Changes:

[aajisaka] HADOOP-12757. Findbug compilation fails for 'Kafka Library support'.

--
[...truncated 6481 lines...]
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.461 sec - in 
org.apache.hadoop.hdfs.qjournal.server.TestJournalNodeMXBean
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.978 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManagerUnit
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.267 sec - 
in org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager
Running org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.421 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestSegmentRecoveryComparator
Running org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.691 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestIPCLoggerChannel
Running org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.76 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestEpochsAreUnique
Running org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 315.024 sec - 
in org.apache.hadoop.hdfs.qjournal.client.TestQJMWithFaults
Running org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.493 sec - in 
org.apache.hadoop.hdfs.qjournal.client.TestQuorumCall
Running org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.92 sec - in 
org.apache.hadoop.hdfs.qjournal.TestMiniJournalCluster
Running org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.674 sec - in 
org.apache.hadoop.hdfs.qjournal.TestNNWithQJM
Running org.apache.hadoop.hdfs.TestConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.389 sec - in 
org.apache.hadoop.hdfs.TestConnCache
Running org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 123.204 sec - 
in org.apache.hadoop.hdfs.TestDFSStorageStateRecovery
Running org.apache.hadoop.hdfs.TestDFSInputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.074 sec - in 
org.apache.hadoop.hdfs.TestDFSInputStream
Running org.apache.hadoop.hdfs.TestFileAppend
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.062 sec - 
in org.apache.hadoop.hdfs.TestFileAppend
Running org.apache.hadoop.hdfs.TestFileAppend3
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 104.145 sec - 
in org.apache.hadoop.hdfs.TestFileAppend3
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.401 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 235.633 sec - 
in org.apache.hadoop.hdfs.TestReadStripedFileWithDecoding
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.676 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadNoChecksum
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 116.304 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.508 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestDFSClientSocketSize
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.703 sec - in 
org.apache.hadoop.hdfs.TestDFSClientSocketSize
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.023 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 124.483 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure130
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.03 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.243 sec - in 
org.apache.hadoop.hdfs.TestErasureCodeBenchmarkThroughput
Running 

[jira] [Created] (HDFS-9741) libhdfs++: GetLastError not returning meaningful messages after some failures

2016-02-02 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-9741:


 Summary: libhdfs++: GetLastError not returning meaningful messages 
after some failures
 Key: HDFS-9741
 URL: https://issues.apache.org/jira/browse/HDFS-9741
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bob Hansen


After failing to open a file, the text for GetLastErrorMessage is not being 
set.  It should be.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #859

2016-02-02 Thread Apache Jenkins Server
See 

Changes:

[aajisaka] HADOOP-12757. Findbug compilation fails for 'Kafka Library support'.

--
[...truncated 10414 lines...]
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-bkjournal ---
[INFO] 
[INFO] 
[INFO] Building Apache Hadoop HDFS-NFS 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-nfs ---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-nfs ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-hdfs-nfs ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
hadoop-hdfs-nfs ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 17 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-hdfs-nfs ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-hdfs-nfs ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 13 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-hdfs-nfs ---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.nfs.nfs3.TestClientAccessPrivilege
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.409 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestClientAccessPrivilege
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.nfs.nfs3.TestNfs3Utils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.288 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestNfs3Utils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.nfs.nfs3.TestNfs3HttpServer
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.512 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestNfs3HttpServer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.nfs.nfs3.TestWrites
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.562 sec - 
in org.apache.hadoop.hdfs.nfs.nfs3.TestWrites
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.nfs.nfs3.TestExportsTable
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.754 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestExportsTable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.nfs.nfs3.TestReaddir
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.486 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestReaddir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.445 sec - in 
org.apache.hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.nfs.nfs3.TestDFSClientCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.266 sec - in 

Hadoop-Hdfs-trunk-Java8 - Build # 859 - Still Failing

2016-02-02 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/859/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 10607 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [04:04 min]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [  03:55 h]
[INFO] Apache Hadoop HDFS Native Client .................. SUCCESS [ 23.832 s]
[INFO] Apache Hadoop HttpFS .............................. FAILURE [04:32 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. FAILURE [06:13 min]
[INFO] Apache Hadoop HDFS-NFS ............................ FAILURE [01:37 min]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.037 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:12 h
[INFO] Finished at: 2016-02-02T18:58:44+00:00
[INFO] Final Memory: 110M/1183M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.15:checkstyle (default-cli) 
on project hadoop-hdfs-httpfs: An error has occurred in Checkstyle report 
generation. Failed during checkstyle execution: Unable to process configuration 
file at location: checkstyle/checkstyle.xml: Cannot create file-based 
resource:invalid code lengths set -> [Help 1]
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.15:checkstyle (default-cli) 
on project hadoop-hdfs-bkjournal: An error has occurred in Checkstyle report 
generation. Failed during checkstyle execution: Unable to process configuration 
file at location: checkstyle/checkstyle.xml: Cannot create file-based 
resource:invalid code lengths set -> [Help 1]
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.15:checkstyle (default-cli) 
on project hadoop-hdfs-nfs: An error has occurred in Checkstyle report 
generation. Failed during checkstyle execution: Unable to process configuration 
file at location: checkstyle/checkstyle.xml: Cannot create file-based 
resource:invalid code lengths set -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-httpfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
All tests passed

Hadoop-Hdfs-trunk - Build # 2793 - Failure

2016-02-02 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2793/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6674 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [08:02 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  05:09 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.132 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 05:17 h
[INFO] Finished at: 2016-02-02T20:08:01+00:00
[INFO] Final Memory: 72M/850M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: 
java.lang.RuntimeException: java.io.IOException: Stream Closed -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
7 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage

Error Message:
Cannot obtain block length for 
LocatedBlock{BP-1016787823-67.195.81.153-1454438524682:blk_7162739548153522810_1020;
 getBlockSize()=1024; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:59135,DS-1727576296-127.0.0.1-50010-1344495315902,DISK]]}

Stack Trace:
java.io.IOException: Cannot obtain block length for 
LocatedBlock{BP-1016787823-67.195.81.153-1454438524682:blk_7162739548153522810_1020;
 getBlockSize()=1024; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:59135,DS-1727576296-127.0.0.1-50010-1344495315902,DISK]]}
at 
org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:435)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:345)
at 
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:277)
at org.apache.hadoop.hdfs.DFSInputStream.&lt;init&gt;(DFSInputStream.java:267)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1048)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1013)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.dfsOpenFileWithRetries(TestDFSUpgradeFromImage.java:178)
at 

[jira] [Created] (HDFS-9742) TestAclsEndToEnd#testGoodWithWhitelistWithoutBlacklist occasionally fails in java8 trunk

2016-02-02 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9742:
-

 Summary: TestAclsEndToEnd#testGoodWithWhitelistWithoutBlacklist 
occasionally fails in java8 trunk
 Key: HDFS-9742
 URL: https://issues.apache.org/jira/browse/HDFS-9742
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Wei-Chiu Chuang


TestAclsEndToEnd#testGoodWithWhitelistWithoutBlacklist was added in HDFS-9295. 
It sometimes fails in the java8 trunk branch with the following log:

https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/838/testReport/junit/org.apache.hadoop.hdfs/TestAclsEndToEnd/testGoodWithWhitelistWithoutBlacklist/

Error Message
{noformat}
Exception during deletion of file /tmp/BLUEZONE/file1 by keyadmin
{noformat}
Stacktrace
{noformat}
java.lang.AssertionError: Exception during deletion of file /tmp/BLUEZONE/file1 
by keyadmin
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.TestAclsEndToEnd.doFullAclTest(TestAclsEndToEnd.java:471)
at 
org.apache.hadoop.hdfs.TestAclsEndToEnd.testGoodWithWhitelistWithoutBlacklist(TestAclsEndToEnd.java:368)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Hadoop encryption module as Apache Chimera incubator project

2016-02-02 Thread Colin P. McCabe
It's great to see interest in improving this functionality.  I think
Chimera could be successful as an Apache project.  I don't have a
strong opinion one way or the other as to whether it belongs as part
of Hadoop or separate.

I do think there will be some challenges splitting this functionality
out into a separate jar, because of the way our CLASSPATH works right
now.  For example, let's say that Hadoop depends on Chimera 1.2 and
Spark depends on Chimera 1.1.  Now Spark jobs have two different
versions fighting it out on the classpath, similar to the situation
with Guava and other libraries.  Perhaps if Chimera adopts a policy of
strong backwards compatibility, we can just always use the latest jar,
but it still seems likely that there will be problems.  There are
various classpath isolation ideas that could help here, but they are
big projects in their own right and we don't have a clear timeline for
them.  If this does end up being a separate jar, we may need to shade
it to avoid all these issues.

Bundling the JNI glue code in the jar itself is an interesting idea,
which we have talked about before for libhadoop.so.  It doesn't really
have anything to do with the question of TLP vs. non-TLP, of course.
We could do that refactoring in Hadoop itself.  The really complicated
part of bundling JNI code in a jar is that you need to create jars for
every cross product of (JVM version, openssl version, operating
system).  For example, you have the RHEL6 build for openJDK7 using
openssl 1.0.1e.  If you change any one thing-- say, change openJDK7 to
Oracle JDK8, then you might need to rebuild.  And certainly using
Ubuntu would be a rebuild.  And so forth.  This kind of clashes with
Maven's philosophy of pulling prebuilt jars from the internet.

Kai Zheng's question about whether we would bundle openSSL's libraries
is a good one.  Given the high rate of new vulnerabilities discovered
in that library, it seems like bundling would require Hadoop users and
vendors to update very frequently, much more frequently than Hadoop is
traditionally updated.  So probably we would not choose to bundle
openssl.

best,
Colin

On Tue, Feb 2, 2016 at 12:29 AM, Chris Douglas  wrote:
> As a subproject of Hadoop, Chimera could maintain its own cadence.
> There's also no reason why it should maintain dependencies on other
> parts of Hadoop, if those are separable. How is this solution
> inadequate?
>
> If Chimera is not successful as an independent project or stalls,
> Hadoop and/or Spark and/or $project will have to reabsorb it as
> maintainers. Projects have high mortality in early life, and a fight
> over inheritance/maintenance is something we'd like to avoid. If, on
> the other hand, it develops enough of a community where it is
> obviously viable, then we can (and should) break it out as a TLP (as
> we have before). If other Apache projects take a dependency on
> Chimera, we're open to adding them to security@hadoop.
>
> Unlike Yetus, which was largely rewritten right before it was made
> into a TLP, security in Hadoop has a complicated pedigree. If Chimera
> eventually becomes a TLP, it seems fair to include those who work on
> it while it is a subproject. Declared upfront, that criterion is
> fairer than any post hoc justification, and will lead to a more
> accurate account of its community than a subset of the Hadoop
> PMC/committers that volunteer. -C
>
>
> On Mon, Feb 1, 2016 at 9:29 PM, Chen, Haifeng  wrote:
>> Thanks to all the folks providing feedback and participating in the discussions.
>>
>> @Owen, do you still have any concerns about going forward in the direction of 
>> Apache Commons (or other options, TLP)?
>>
>> Thanks,
>> Haifeng
>>
>> -Original Message-
>> From: Chen, Haifeng [mailto:haifeng.c...@intel.com]
>> Sent: Saturday, January 30, 2016 10:52 AM
>> To: hdfs-dev@hadoop.apache.org
>> Subject: RE: Hadoop encryption module as Apache Chimera incubator project
>>
>>>> I believe encryption is becoming a core part of Hadoop. I think that
>>>> moving core components out of Hadoop is bad from a project management 
>>>> perspective.
>>
>>> Although it's certainly true that encryption capabilities (in HDFS, YARN, 
>>> etc.) are becoming core to Hadoop, I don't think that should really 
>>> influence whether or not the non-Hadoop-specific encryption routines should 
>>> be part of the Hadoop code base, or part of the code base of another 
>>> project that Hadoop depends on. If Chimera had existed as a library hosted 
>>> at ASF when HDFS encryption was first developed, HDFS probably would have 
>>> just added that as a dependency and been done with it. I don't think we 
>>> would've copy/pasted the code for Chimera into the Hadoop code base.
>>
>> Agree with ATM. I also want to make an additional clarification. I agree 
>> that the encryption capabilities are becoming core to Hadoop, but this 
>> effort is to put common, shared encryption routines such as crypto 

[jira] [Created] (HDFS-9744) TestDirectoryScanner#testThrottling occasionally time out after 300 seconds

2016-02-02 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9744:
-

 Summary: TestDirectoryScanner#testThrottling occasionally time out 
after 300 seconds
 Key: HDFS-9744
 URL: https://issues.apache.org/jira/browse/HDFS-9744
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
 Environment: Jenkins
Reporter: Wei-Chiu Chuang
Priority: Minor


I have seen quite a few test failures in TestDirectoryScanner#testThrottling.
https://builds.apache.org/job/Hadoop-Hdfs-trunk/2793/testReport/org.apache.hadoop.hdfs.server.datanode/TestDirectoryScanner/testThrottling/

Looking at the log, it does not look like the test got stuck. On my local 
machine, this test took 219 seconds. It is likely that this test takes more 
than 300 seconds to complete on a busy Jenkins slave. I think it is reasonable 
to set a longer timeout value, or to reduce the number of blocks so the test 
finishes sooner.
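
A minimal sketch of the first option, raising the test's JUnit timeout (the 
annotation value here is only an illustration, not what any actual patch chose):
{code}
import org.junit.Test;

public class TestDirectoryScannerTimeoutSketch {
  // Hypothetical: allow 600s instead of 300s so a busy Jenkins slave
  // does not trip the per-test timeout before the scan completes.
  @Test(timeout = 600000)
  public void testThrottling() throws Exception {
    // test body unchanged; only the timeout value is the point here
  }
}
{code}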

Error Message
{noformat}
test timed out after 300000 milliseconds
{noformat}
Stacktrace
{noformat}
java.lang.Exception: test timed out after 300000 milliseconds
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at 
org.apache.hadoop.hdfs.DataStreamer.waitAndQueuePacket(DataStreamer.java:804)
at 
org.apache.hadoop.hdfs.DFSOutputStream.enqueueCurrentPacket(DFSOutputStream.java:423)
at 
org.apache.hadoop.hdfs.DFSOutputStream.enqueueCurrentPacketFull(DFSOutputStream.java:432)
at 
org.apache.hadoop.hdfs.DFSOutputStream.writeChunk(DFSOutputStream.java:418)
at 
org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:125)
at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:111)
at 
org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:57)
at java.io.DataOutputStream.write(DataOutputStream.java:107)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:418)
at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:376)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.createFile(TestDirectoryScanner.java:108)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:584)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9743) Fix TestLazyPersistFiles#testFallbackToDiskFull in branch-2.7

2016-02-02 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-9743:


 Summary: Fix TestLazyPersistFiles#testFallbackToDiskFull in 
branch-2.7
 Key: HDFS-9743
 URL: https://issues.apache.org/jira/browse/HDFS-9743
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee


The corresponding test case has been moved and fixed in trunk by HDFS-9073. We 
should fix it in branch-2.7 too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-9172) Erasure Coding: Move DFSStripedIO stream related classes to hadoop-hdfs-client

2016-02-02 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-9172.
-
Resolution: Invalid

> Erasure Coding: Move DFSStripedIO stream related classes to hadoop-hdfs-client
> --
>
> Key: HDFS-9172
> URL: https://issues.apache.org/jira/browse/HDFS-9172
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Rakesh R
>Assignee: Zhe Zhang
>
> The idea of this jira is to move the striped stream related classes to the 
> {{hadoop-hdfs-client}} project. This will help stay in sync with the 
> HDFS-6200 proposal.
> - DFSStripedInputStream
> - DFSStripedOutputStream
> - StripedDataStreamer



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9745) TestSecureNNWithQJM#testSecureMode sometimes fails with timeouts

2016-02-02 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-9745:
---

 Summary: TestSecureNNWithQJM#testSecureMode sometimes fails with 
timeouts
 Key: HDFS-9745
 URL: https://issues.apache.org/jira/browse/HDFS-9745
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Xiao Chen
Assignee: Xiao Chen
Priority: Minor


TestSecureNNWithQJM#testSecureMode fails intermittently. In most cases it 
times out; with a 0.5%~1% probability, it fails with a more complicated error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk #2794

2016-02-02 Thread Apache Jenkins Server
See 

Changes:

[lei] HDFS-9721. Allow Delimited PB OIV tool to run upon fsimage that contains

[cmccabe] HDFS-9669. TcpPeerServer should respect ipc.server.listen.queue.size

[cmccabe] HDFS-9260. Improve the performance and GC friendliness of NameNode

[jlowe] MAPREDUCE-6621. Memory Leak in JobClient#submitJobInternal().

[wang] HADOOP-12755. Fix typo in defaultFS warning message.

--
[...truncated 8432 lines...]
  [javadoc] [loading ZipFileIndexFileObject[...]]
  [javadoc] [loading RegularFileObject[...]]
[...repeated javadoc loading lines truncated...]

Hadoop-Hdfs-trunk - Build # 2794 - Still Failing

2016-02-02 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2794/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8625 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] Deleting 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:59 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  03:20 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.053 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:24 h
[INFO] Finished at: 2016-02-02T23:47:48+00:00
[INFO] Final Memory: 63M/711M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.15:checkstyle (default-cli) 
on project hadoop-hdfs: An error has occurred in Checkstyle report generation. 
Failed during checkstyle execution: Unable to find configuration file at 
location: checkstyle/checkstyle.xml: Could not find resource 
'checkstyle/checkstyle.xml'. -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
All tests passed

[jira] [Created] (HDFS-9746) Some Kerberos related tests intermittently fails.

2016-02-02 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-9746:
---

 Summary: Some Kerberos related tests intermittently fails.
 Key: HDFS-9746
 URL: https://issues.apache.org/jira/browse/HDFS-9746
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Xiao Chen
Assignee: Xiao Chen


So far I've seen {{TestSecureNNWithQJM#testSecureMode}} and 
{{TestKMS#testACLs}} failing. More details coming in the 1st comment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9747) Reuse objectMapper instance in MapReduce

2016-02-02 Thread Lin Yiqun (JIRA)
Lin Yiqun created HDFS-9747:
---

 Summary: Reuse objectMapper instance in MapReduce
 Key: HDFS-9747
 URL: https://issues.apache.org/jira/browse/HDFS-9747
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: performance
Affects Versions: 2.7.1
Reporter: Lin Yiqun
Assignee: Lin Yiqun


Now in MapReduce, there are some places that create a new ObjectMapper instance 
every time one is needed. The ObjectMapper wiki suggests:
{code}
Further: it is beneficial to use just one instance (or small number of 
instances) for data binding; many optimizations for reuse (of symbol tables, 
some buffers) depend on ObjectMapper instances being reused. 
{code}
http://webcache.googleusercontent.com/search?q=cache:kybMTIJC6F4J:wiki.fasterxml.com/JacksonFAQ
It's similar to HDFS-9724.
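
A minimal sketch of the reuse pattern being suggested (shown with the Jackson 
1.x ObjectMapper that Hadoop shipped at the time; the holder class and method 
here are illustrative, not the actual patch):
{code}
import org.codehaus.jackson.map.ObjectMapper;

public class JsonHolder {
  // One shared instance: ObjectMapper is thread-safe once configured, and
  // reuse keeps its symbol tables and buffers warm across calls.
  private static final ObjectMapper MAPPER = new ObjectMapper();

  public static String toJson(Object value) throws java.io.IOException {
    return MAPPER.writeValueAsString(value); // instead of new ObjectMapper() per call
  }
}
{code}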



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Hadoop-Hdfs-trunk-Java8 #861

2016-02-02 Thread Apache Jenkins Server
See 



Re: Hadoop encryption module as Apache Chimera incubator project

2016-02-02 Thread Chris Douglas
As a subproject of Hadoop, Chimera could maintain its own cadence.
There's also no reason why it should maintain dependencies on other
parts of Hadoop, if those are separable. How is this solution
inadequate?

If Chimera is not successful as an independent project or stalls,
Hadoop and/or Spark and/or $project will have to reabsorb it as
maintainers. Projects have high mortality in early life, and a fight
over inheritance/maintenance is something we'd like to avoid. If, on
the other hand, it develops enough of a community where it is
obviously viable, then we can (and should) break it out as a TLP (as
we have before). If other Apache projects take a dependency on
Chimera, we're open to adding them to security@hadoop.

Unlike Yetus, which was largely rewritten right before it was made
into a TLP, security in Hadoop has a complicated pedigree. If Chimera
eventually becomes a TLP, it seems fair to include those who work on
it while it is a subproject. Declared upfront, that criterion is
fairer than any post hoc justification, and will lead to a more
accurate account of its community than a subset of the Hadoop
PMC/committers that volunteer. -C


On Mon, Feb 1, 2016 at 9:29 PM, Chen, Haifeng  wrote:
> Thanks to all the folks providing feedback and participating in the discussions.
>
> @Owen, do you still have any concerns about going forward in the direction of 
> Apache Commons (or other options, TLP)?
>
> Thanks,
> Haifeng
>
> -Original Message-
> From: Chen, Haifeng [mailto:haifeng.c...@intel.com]
> Sent: Saturday, January 30, 2016 10:52 AM
> To: hdfs-dev@hadoop.apache.org
> Subject: RE: Hadoop encryption module as Apache Chimera incubator project
>
>>> I believe encryption is becoming a core part of Hadoop. I think that
>>> moving core components out of Hadoop is bad from a project management 
>>> perspective.
>
>> Although it's certainly true that encryption capabilities (in HDFS, YARN, 
>> etc.) are becoming core to Hadoop, I don't think that should really 
>> influence whether or not the non-Hadoop-specific encryption routines should 
>> be part of the Hadoop code base, or part of the code base of another project 
>> that Hadoop depends on. If Chimera had existed as a library hosted at ASF 
>> when HDFS encryption was first developed, HDFS probably would have just 
>> added that as a dependency and been done with it. I don't think we would've 
>> copy/pasted the code for Chimera into the Hadoop code base.
>
> Agree with ATM. I also want to make an additional clarification. I agree that 
> the encryption capabilities are becoming core to Hadoop, but this effort is 
> to put common, shared encryption routines such as the crypto stream 
> implementations into a scope that can be widely shared across the Apache 
> ecosystem. This doesn't move Hadoop encryption out of Hadoop (that is not 
> possible).
>
> Agreed that making it a separate, independently released project within 
> Hadoop takes a step further than the existing approach and solves some issues 
> (such as the libhadoop.so problem). Frankly speaking, I think it is not the 
> best option we can try, and I also expect that an independent release project 
> within Hadoop core would complicate Hadoop's existing release process.
>
> Thanks,
> Haifeng
>
> -Original Message-
> From: Aaron T. Myers [mailto:a...@cloudera.com]
> Sent: Friday, January 29, 2016 9:51 AM
> To: hdfs-dev@hadoop.apache.org
> Subject: Re: Hadoop encryption module as Apache Chimera incubator project
>
> On Wed, Jan 27, 2016 at 11:31 AM, Owen O'Malley  wrote:
>
>> I believe encryption is becoming a core part of Hadoop. I think that
>> moving core components out of Hadoop is bad from a project management 
>> perspective.
>>
>
> Although it's certainly true that encryption capabilities (in HDFS, YARN,
> etc.) are becoming core to Hadoop, I don't think that should really influence 
> whether or not the non-Hadoop-specific encryption routines should be part of 
> the Hadoop code base, or part of the code base of another project that Hadoop 
> depends on. If Chimera had existed as a library hosted at ASF when HDFS 
> encryption was first developed, HDFS probably would have just added that as a 
> dependency and been done with it. I don't think we would've copy/pasted the 
> code for Chimera into the Hadoop code base.
>
>
>> To put it another way, a bug in the encryption routines will likely
>> become a security problem that security@hadoop needs to hear about.
>>
>> I don't think
>> adding a separate project in the middle of that communication chain is
>> a good idea. The same applies to data corruption problems, and so on...
>>
>
> Isn't the same true of all the libraries that Hadoop currently depends upon? 
> If the commons-httpclient library (or commons-codec, or commons-io, or guava, 
> or...) has a security vulnerability, we need to know about it so that we can 
> update our dependency to a fixed version. This case doesn't seem 

[jira] [Created] (HDFS-9737) libhdfs++: Create examples of consuming libhdfs++

2016-02-02 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-9737:


 Summary: libhdfs++: Create examples of consuming libhdfs++
 Key: HDFS-9737
 URL: https://issues.apache.org/jira/browse/HDFS-9737
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bob Hansen


We should have some small, useful examples of using libhdfs++.  I took the cat 
utility from HDFS-9272 and repackaged it as a stand-alone example that can be 
compiled and run independently of the rest of the libhdfs++ project.

It needs a little bit of work to use the current Options objects, but it 
shows how to compile and link against libhdfs++.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9738) libhdfs++: Implement simple authentication

2016-02-02 Thread Bob Hansen (JIRA)
Bob Hansen created HDFS-9738:


 Summary: libhdfs++: Implement simple authentication
 Key: HDFS-9738
 URL: https://issues.apache.org/jira/browse/HDFS-9738
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Bob Hansen


Support public APIs to pass in a username and accept authN and authZ responses 
from the server.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9739) DatanodeStorage.isValidStorageId() is broken

2016-02-02 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-9739:


 Summary: DatanodeStorage.isValidStorageId() is broken
 Key: HDFS-9739
 URL: https://issues.apache.org/jira/browse/HDFS-9739
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Priority: Critical


After HDFS-8979, the check returns true for the old storage ID format, so 
storage IDs in the old format won't be updated during datanode upgrade.
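
A sketch of what the check is meant to do, assuming new-format IDs are a 
"DS-" prefix plus a UUID while old-format IDs are not (illustrative only, 
not the actual DatanodeStorage code):
{code}
import java.util.UUID;

public class StorageIdCheckSketch {
  static boolean isValidStorageId(String storageId) {
    final String prefix = "DS-";
    if (storageId == null || !storageId.startsWith(prefix)) {
      return false; // old/unknown format: must be reported invalid
    }
    try {
      // New format: the remainder must parse as a well-formed UUID.
      UUID.fromString(storageId.substring(prefix.length()));
      return true;
    } catch (IllegalArgumentException e) {
      return false; // invalid IDs get regenerated during datanode upgrade
    }
  }
}
{code}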



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9740) Use a reasonable limit in DFSTestUtil.waitForMetric()

2016-02-02 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-9740:


 Summary: Use a reasonable limit in DFSTestUtil.waitForMetric()
 Key: HDFS-9740
 URL: https://issues.apache.org/jira/browse/HDFS-9740
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee


If a test is detecting a bug, it will probably hit the long surefire timeout 
because the max is {{Integer.MAX_VALUE}}. Use something more realistic: the 
default JMX update interval is 10 seconds, so something like 60 seconds should 
be more than enough.
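
A minimal sketch of such a bounded wait, assuming a simple polling loop 
(the names here are illustrative, not the real DFSTestUtil API):
{code}
import java.util.concurrent.TimeoutException;
import java.util.function.LongSupplier;

public class BoundedMetricWaitSketch {
  // Poll until the metric reaches the expected value or 60 seconds elapse,
  // instead of waiting up to Integer.MAX_VALUE and deferring failure to
  // the much longer surefire timeout when a bug is present.
  static void waitForMetric(LongSupplier metric, long expected)
      throws InterruptedException, TimeoutException {
    long deadline = System.currentTimeMillis() + 60000;
    while (metric.getAsLong() != expected) {
      if (System.currentTimeMillis() > deadline) {
        throw new TimeoutException("metric never reached " + expected);
      }
      Thread.sleep(1000); // JMX updates roughly every 10s; 1s polling is plenty
    }
  }
}
{code}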



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Jenkins build is back to normal : Hadoop-Hdfs-trunk #2791

2016-02-02 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #858

2016-02-02 Thread Apache Jenkins Server
See 

Changes:

[devaraj] YARN-4100. Add Documentation for Distributed and Delegated-Centralized

[vinayakumarb] HDFS-9718. HAUtil#getConfForOtherNodes should unset independent 
generic

--
[...truncated 5802 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.026 sec - in 
org.apache.hadoop.hdfs.util.TestStripedBlockUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.202 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5FileUtils
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.374 sec - in 
org.apache.hadoop.hdfs.util.TestMD5FileUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.222 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightLinkedSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Tests run: 4, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.321 sec - in 
org.apache.hadoop.hdfs.util.TestAtomicFileOutputStream
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.836 sec - in 
org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.843 sec - in 
org.apache.hadoop.hdfs.TestInjectionForSimulatedStorage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHFlush
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.512 sec - 
in org.apache.hadoop.hdfs.TestHFlush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestErasureCodingPolicies
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.448 sec - in 
org.apache.hadoop.hdfs.TestErasureCodingPolicies
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRemoteBlockReader
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.246 sec - in 
org.apache.hadoop.hdfs.TestRemoteBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.57 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.33 sec - in 
org.apache.hadoop.hdfs.TestDistributedFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.609 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeRollback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestRollingUpgrade
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 136.486 sec - 
in org.apache.hadoop.hdfs.TestRollingUpgrade
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.48 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.969 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption

Hadoop-Hdfs-trunk-Java8 - Build # 858 - Failure

2016-02-02 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/858/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 5995 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:59 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  04:02 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.069 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:06 h
[INFO] Finished at: 2016-02-02T12:00:26+00:00
[INFO] Final Memory: 56M/412M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testDataNodeTimeSpend

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at org.junit.Assert.fail(Assert.java:86)
at org.junit.Assert.assertTrue(Assert.java:41)
at org.junit.Assert.assertTrue(Assert.java:52)
at 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics.testDataNodeTimeSpend(TestDataNodeMetrics.java:289)




[VOTE] Release Apache Hadoop 2.6.4 RC0

2016-02-02 Thread Junping Du
Hi community folks,
   I've created a release candidate RC0 for Apache Hadoop 2.6.4 (the next 
maintenance release, following 2.6.3) according to the email thread on the 
2.6.4 release plan [1]. Below are the details of this release candidate:

The RC is available for validation at:
http://people.apache.org/~junping_du/hadoop-2.6.4-RC0/

The RC tag in git is: release-2.6.4-RC0

The maven artifacts are staged via repository.apache.org at:
https://repository.apache.org/content/repositories/orgapachehadoop-1028/

You can find my public key at:
http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS

Please try the release and vote. The vote will run for the usual 5 days.

Thanks!


Cheers,

Junping


[1]: 2.6.4 release plan: http://markmail.org/message/fk3ud3c665lscvx5?



Build failed in Jenkins: Hadoop-Hdfs-trunk #2795

2016-02-02 Thread Apache Jenkins Server
See 

Changes:

[cmccabe] CHANGES.txt:  Move HDFS-9260 to trunk

[zhezhang] HDFS-9731. Erasure Coding: Rename BlockECRecoveryCommand to

[zhezhang] HDFS-9403. Erasure coding: some EC tests are missing timeout.

[rkanter] MAPREDUCE-6620. Jobs that did not start are shown as starting in 1969 
in

--
[...truncated 9129 lines...]
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-bkjournal 
---
[INFO] Deleting 

[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ 
hadoop-hdfs-bkjournal ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- hadoop-maven-plugins:3.0.0-SNAPSHOT:protoc (compile-protoc) @ 
hadoop-hdfs-bkjournal ---
[INFO] Wrote protoc checksums to file 

[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-hdfs-bkjournal ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory 

[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ 
hadoop-hdfs-bkjournal ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 7 source files to 

[WARNING] [...] uses unchecked or unsafe operations.
[WARNING] [...] Recompile with -Xlint:unchecked for details.
[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-hdfs-bkjournal ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ 
hadoop-hdfs-bkjournal ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 10 source files to 

[INFO] 
[INFO] --- maven-surefire-plugin:2.17:test (default-test) @ 
hadoop-hdfs-bkjournal ---
[INFO] Surefire report directory: 


---
 T E S T S
---

---
 T E S T S
---
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperConfiguration
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.301 sec - in 
org.apache.hadoop.contrib.bkjournal.TestBookKeeperConfiguration
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.736 sec - 
in org.apache.hadoop.contrib.bkjournal.TestBookKeeperJournalManager
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperSpeculativeRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.348 sec - in 
org.apache.hadoop.contrib.bkjournal.TestBookKeeperSpeculativeRead
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 159.689 sec - 
in org.apache.hadoop.contrib.bkjournal.TestBookKeeperHACheckpoints
Running org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.254 sec - in 
org.apache.hadoop.contrib.bkjournal.TestBookKeeperAsHASharedDir
Running org.apache.hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.882 sec - in 
org.apache.hadoop.contrib.bkjournal.TestBootstrapStandbyWithBKJM
Running 

Hadoop-Hdfs-trunk - Build # 2795 - Still Failing

2016-02-02 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2795/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 9322 lines...]
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:53 min]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [  03:19 h]
[INFO] Apache Hadoop HDFS Native Client .................. SUCCESS [ 22.311 s]
[INFO] Apache Hadoop HttpFS .............................. FAILURE [03:31 min]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. FAILURE [03:42 min]
[INFO] Apache Hadoop HDFS-NFS ............................ FAILURE [01:33 min]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.045 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:32 h
[INFO] Finished at: 2016-02-03T03:30:35+00:00
[INFO] Final Memory: 97M/1346M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (depcheck) on 
project hadoop-hdfs-httpfs: Some Enforcer rules have failed. Look above for 
specific messages explaining why the rule failed. -> [Help 1]
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.15:checkstyle (default-cli) 
on project hadoop-hdfs-bkjournal: An error has occurred in Checkstyle report 
generation. Failed during checkstyle execution: Unable to find configuration 
file at location: checkstyle/checkstyle.xml: Could not find resource 
'checkstyle/checkstyle.xml'. -> [Help 1]
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-checkstyle-plugin:2.15:checkstyle (default-cli) 
on project hadoop-hdfs-nfs: An error has occurred in Checkstyle report 
generation. Failed during checkstyle execution: Unable to find configuration 
file at location: checkstyle/checkstyle.xml: Could not find resource 
'checkstyle/checkstyle.xml'. -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs-httpfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
All tests passed