Re: [DISCUSS] About the details of JDK-8 support

2015-10-14 Thread Steve Loughran

> On 13 Oct 2015, at 17:32, Haohui Mai  wrote:
> 
> Just to echo Steve's idea -- if we're seriously considering supporting
> JDK 8, maybe the first thing to do is to set up the jenkins to run
> with JDK 8? I'm happy to help. Does anyone know who I can talk to if I
> need to play around with all the Jenkins knob?

Jenkins is building with Java 7 and 8. All that's needed is to turn off the 
Java 7 build, which I will happily do. The POM can be changed to set the 
minimum JVM version -though that's most likely to be visible to people building 
locally, as you'll need to make sure that you have access to Java 7 and Java 8 
JVMs if you want to build and test for both.
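
For illustration, one standard way to pin a minimum JDK in the POM is a 
maven-enforcer-plugin rule -a minimal sketch, assuming it lands in the shared 
parent POM:

{code:xml}
<!-- sketch: fail the build on JDKs older than 8; the placement in the
     Hadoop parent POM is an assumption for illustration -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>enforce-min-jdk</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <requireJavaVersion>
            <version>[1.8,)</version>
          </requireJavaVersion>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}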

Jenkins-wise, the big issue is one I've mentioned before: the builds are 
failing and not enough people are caring

https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Hdfs-trunk-Java8/488/

Please, let's fix this



[jira] [Created] (HDFS-9241) HDFS clients can't construct HdfsConfiguration instances

2015-10-14 Thread Steve Loughran (JIRA)
Steve Loughran created HDFS-9241:


 Summary: HDFS clients can't construct HdfsConfiguration instances
 Key: HDFS-9241
 URL: https://issues.apache.org/jira/browse/HDFS-9241
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Steve Loughran


The changes for the hdfs client classpath make instantiating 
{{HdfsConfiguration}} from the client impossible; it only lives server-side. 
This breaks any app which creates one.

I know people will look at the {{@Private}} tag and say "don't do that then", 
but it's worth considering precisely why I, at least, do this: it's the only 
way to guarantee that the hdfs-default and hdfs-site resources get on the 
classpath, including all the security settings. It's precisely the use case 
which {{HdfsConfigurationLoader.init();}} offers internally to the hdfs code.
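
For reference, a minimal sketch of the client-side pattern described above 
(the wrapper class is illustrative; the key is a standard one from 
hdfs-default.xml):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class HdfsClientExample {
  public static void main(String[] args) {
    // constructing HdfsConfiguration forces hdfs-default.xml and
    // hdfs-site.xml onto the resource list before any lookups
    Configuration conf = new HdfsConfiguration();
    System.out.println(conf.get("dfs.replication"));
  }
}
{code}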

What am I meant to do now? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk-Java8 - Build # 495 - Still Failing

2015-10-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/495/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7609 lines...]
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:37 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:48 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.055 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:52 h
[INFO] Finished at: 2015-10-14T17:37:01+00:00
[INFO] Final Memory: 55M/417M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-4250
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
11 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes

Error Message:
Unexpected num storage ids expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: Unexpected num storage ids expected:<2> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes(TestDistributedFileSystem.java:855)


FAILED:  
org.apache.hadoop.hdfs.server.mover.TestStorageMover.testMigrateFileToArchival

Error Message:
org/apache/hadoop/util/IntrusiveCollection$IntrusiveIterator

Stack Trace:
java.lang.NoClassDefFoundError: 
org/apache/hadoop/util/IntrusiveCollection$IntrusiveIterator
at 
org.apache.hadoop.util.IntrusiveCollection.iterator(IntrusiveCollection.java:213)
at 
org.apache.hadoop.util.IntrusiveCollection.clear(IntrusiveCollection.java:368)
at 
org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.clearPendingCachingCommands(DatanodeManager.java:1570)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1218)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.close(FSNamesystem.java:1578)

hadoop-hdfs-client splitoff is going to break code

2015-10-14 Thread Steve Loughran
Just an FYI, the split-off of hadoop-hdfs into client and server is going to 
break things.

I know that, as my code is broken: DFSConfigKeys is off the classpath, and 
HdfsConfiguration, the class I've been loading to force pickup of hdfs-site.xml, 
is missing too.

This is because the hadoop-client POM now depends on hadoop-hdfs-client, not 
hadoop-hdfs, so the things I'm referencing are gone. I'm particularly sad about 
DFSConfigKeys, as everybody uses it as the one hard-coded resource of HDFS 
constants; HDFS-6566, covering the issue of making this public, has been 
sitting around for a year.

I'm fixing my build by explicitly adding a hadoop-hdfs dependency.
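
As a sketch, the workaround in a downstream POM looks something like this (the 
version property is illustrative):

{code:xml}
<!-- restore the server-side classes by depending on hadoop-hdfs explicitly -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>${hadoop.version}</version>
</dependency>
{code}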

Any application which uses stuff that has now been declared server-side isn't 
going to compile any more, which does appear to break the compatibility 
guidelines we've adopted, specifically "The hadoop-client artifact (maven 
groupId:artifactId) stays compatible within a major release"

http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Build_artifacts


We need to do one of

1. Agree that this change is considered acceptable according to policy, and 
mark it as incompatible in hdfs/CHANGES.TXT
2. Change the POMs to add both hadoop-hdfs-client and hadoop-hdfs to 
hadoop-client, with downstream users free to exclude the server code

We unintentionally caused similar grief with the move of the s3n clients to 
hadoop-aws, HADOOP-11074 -something we should have picked up and -1'd. This 
time we know the problem's going to arise, so let's explicitly make a decision, 
and share it with our users.

-steve


[jira] [Created] (HDFS-9242) Fix Findbugs warning from webhdfs.DataNodeUGIProvider.ugiCache

2015-10-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-9242:


 Summary: Fix Findbugs warning from 
webhdfs.DataNodeUGIProvider.ugiCache 
 Key: HDFS-9242
 URL: https://issues.apache.org/jira/browse/HDFS-9242
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Xiaoyu Yao


This was introduced by HDFS-8855; the pre-patch warning can be found at 
https://builds.apache.org/job/PreCommit-HDFS-Build/12975/artifact/patchprocess/trunkFindbugsWarningshadoop-hdfs.html#DC_DOUBLECHECK

{code}
Code    Warning
DC  Possible doublecheck on 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache 
in new 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
 Configuration)
Bug type DC_DOUBLECHECK (click for details) 
In class org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider
In method new 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider(ParameterParser,
 Configuration)
On field 
org.apache.hadoop.hdfs.server.datanode.web.webhdfs.DataNodeUGIProvider.ugiCache
At DataNodeUGIProvider.java:[lines 49-51]
{code}
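
For context, a hypothetical reduction of the DC_DOUBLECHECK pattern (the 
holder class and field are illustrative, not the actual DataNodeUGIProvider 
code):

{code}
class UgiCacheHolder {
  // without 'volatile', the unsynchronized first read below can observe
  // a partially constructed object; declaring the field volatile is the
  // usual fix for double-checked lazy initialization
  private static volatile Object ugiCache;

  static Object get() {
    if (ugiCache == null) {                 // first check, no lock
      synchronized (UgiCacheHolder.class) {
        if (ugiCache == null) {             // second check, under the lock
          ugiCache = new Object();          // initialize exactly once
        }
      }
    }
    return ugiCache;
  }
}
{code}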



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9243) TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout

2015-10-14 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9243:
-

 Summary: 
TestUnderReplicatedBlocks#testSetrepIncWithUnderReplicatedBlocks test timeout
 Key: HDFS-9243
 URL: https://issues.apache.org/jira/browse/HDFS-9243
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: HDFS
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


This is happening on trunk in 
org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks.

On my local Linux machine, this test case times out 6 out of 10 times. When it 
does not time out, the test takes about 20 seconds; otherwise it takes more 
than 60 seconds and then times out.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9240) Use Builder pattern for BlockLocation constructors

2015-10-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-9240:


 Summary: Use Builder pattern for BlockLocation constructors
 Key: HDFS-9240
 URL: https://issues.apache.org/jira/browse/HDFS-9240
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor


This JIRA is opened to refactor the 8 telescoping constructors of the 
BlockLocation class with the Builder pattern.
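
As a sketch of the direction (field names are illustrative, not the real 
BlockLocation API):

{code}
public class BlockLocation {
  private final String[] hosts;
  private final long offset;
  private final long length;

  private BlockLocation(Builder b) {
    this.hosts = b.hosts;
    this.offset = b.offset;
    this.length = b.length;
  }

  public static class Builder {
    private String[] hosts = new String[0];
    private long offset;
    private long length;

    public Builder setHosts(String[] hosts) { this.hosts = hosts; return this; }
    public Builder setOffset(long offset) { this.offset = offset; return this; }
    public Builder setLength(long length) { this.length = length; return this; }
    public BlockLocation build() { return new BlockLocation(this); }
  }
}

// usage: new BlockLocation.Builder().setOffset(0L).setLength(1024L).build();
{code}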



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk-Java8 - Build # 493 - Still Failing

2015-10-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/493/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7715 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:10 min]
[INFO] Apache Hadoop HDFS  FAILURE [  02:28 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.055 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:32 h
[INFO] Finished at: 2015-10-14T09:55:38+00:00
[INFO] Final Memory: 90M/893M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx2048m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter7799822169074355094.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire2945080707522579482tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_1795577474033700674022tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-1172
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testBalancer

Error Message:
File /tmp.txt could only be replicated to 0 nodes instead of minReplication 
(=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this 
operation.
 at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1733)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:298)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2448)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:730)
 at 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #493

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[jing9] HDFS-1172. Blocks in newly completed files are considered

--
[...truncated 7522 lines...]
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.013 sec - 
in org.apache.hadoop.hdfs.server.namenode.TestFsck
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestLeaseManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.677 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestLeaseManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeXAttr
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.526 sec - 
in org.apache.hadoop.hdfs.server.namenode.TestNameNodeXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestPathComponents
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.23 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestPathComponents
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 47.26 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.41 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestTruncateQuotaUpdate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestStripedINodeFile
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.635 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestStripedINodeFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.852 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFSDirectory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFileLimit
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.559 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFileLimit
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestMetaSave
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.445 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestMetaSave
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestEditLogAutoroll
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.203 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestEditLogAutoroll
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestFSNamesystemMBean
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.62 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFSNamesystemMBean
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
Tests run: 38, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 133.053 sec - 
in org.apache.hadoop.hdfs.server.namenode.TestCheckpoint
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeHttpServer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.265 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeHttpServer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourcePolicy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.36 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNameNodeResourcePolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.921 sec - in 

Hadoop-Hdfs-trunk - Build # 2433 - Still Failing

2015-10-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2433/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7121 lines...]
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:25 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:09 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.070 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:13 h
[INFO] Finished at: 2015-10-14T20:12:57+00:00
[INFO] Final Memory: 57M/580M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-12436
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
15 tests failed.
FAILED:  org.apache.hadoop.fs.TestGlobPaths.pTestCurlyBracket

Error Message:
error parsing regexp: Unclosed group at pos 10: `myuser}{bc`

Stack Trace:
com.google.re2j.PatternSyntaxException: error parsing regexp: Unclosed group at 
pos 10: `myuser}{bc`
at org.apache.hadoop.fs.GlobPattern.error(GlobPattern.java:168)
at org.apache.hadoop.fs.GlobPattern.set(GlobPattern.java:154)
at org.apache.hadoop.fs.GlobPattern.<init>(GlobPattern.java:42)
at org.apache.hadoop.fs.GlobFilter.init(GlobFilter.java:67)
at org.apache.hadoop.fs.GlobFilter.<init>(GlobFilter.java:50)
at org.apache.hadoop.fs.Globber.doGlob(Globber.java:209)
at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1664)
at 
org.apache.hadoop.fs.TestGlobPaths.prepareTesting(TestGlobPaths.java:758)
at 
org.apache.hadoop.fs.TestGlobPaths.pTestCurlyBracket(TestGlobPaths.java:724)


FAILED:  
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals 
to persistent storage due to No journals available to flush. Unsynced 
transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
 at 

Build failed in Jenkins: Hadoop-Hdfs-trunk #2433

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[aw] HADOOP-12436. GlobPattern regex library has performance issues with

--
[...truncated 6928 lines...]
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.193 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.501 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.516 sec - in 
org.apache.hadoop.hdfs.TestBalancerBandwidth
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.259 sec - in 
org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.717 sec - in 
org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.071 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.452 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.787 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.168 sec - in 
org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.021 sec - in 
org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.358 sec - in 
org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.597 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 12.808 sec - 
in org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Running org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.519 sec - in 
org.apache.hadoop.fs.loadGenerator.TestLoadGenerator
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.994 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRename
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.251 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractDelete
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.354 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractAppend
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.691 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractOpen
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.213 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractGetFileStatus
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.902 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractConcat
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.984 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractMkdir
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.97 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractCreate
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.864 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractRootDirectory
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.167 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSeek
Running org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.913 sec - in 
org.apache.hadoop.fs.contract.hdfs.TestHDFSContractSetTimes
Running org.apache.hadoop.fs.TestEnhancedByteBufferAccess
Tests run: 10, 

[jira] [Created] (HDFS-9244) Support nested encryption zones

2015-10-14 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-9244:


 Summary: Support nested encryption zones
 Key: HDFS-9244
 URL: https://issues.apache.org/jira/browse/HDFS-9244
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Xiaoyu Yao


This JIRA is opened to track adding support for nested encryption zones, based on 
[~andrew.wang]'s [comment 
|https://issues.apache.org/jira/browse/HDFS-8747?focusedCommentId=14654141&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14654141]
 for certain use cases.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HDFS-9210) Fix some misuse of %n in VolumeScanner#printStats

2015-10-14 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao reopened HDFS-9210:
--

> Fix some misuse of %n in VolumeScanner#printStats
> -
>
> Key: HDFS-9210
> URL: https://issues.apache.org/jira/browse/HDFS-9210
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HDFS-9210.00.patch
>
>
> Found 2 extra "%n" in the VolumeScanner report, and lines not well formatted 
> below. This JIRA is opened to fix the format issue.
> {code}
> Block scanner information for volume DS-93fb2503-de00-4f98-a8bc-c2bc13b8f0f7 
> with base path /hadoop/hdfs/data%nBytes verified in last hour   : 
> 136882014
> Blocks scanned in current period  :   
>   5
> Blocks scanned since restart  :   
>   5
> Block pool scans since restart:   
>   0
> Block scan errors since restart   :   
>   0
> Hours until next block pool scan  :   
> 476.000
> Last block scanned: 
> BP-1792969149-192.168.70.101-1444150984999:blk_1073742088_1274
> More blocks to scan in period :   
>   false
> %n
> {code}
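> 
> For reference, a hypothetical reduction of the bug class: "%n" is only 
> expanded by format-style methods, so appending it literally leaves a stray 
> "%n" in the output.
> {code}
> class FormatExample {
>   static String statsLine(String basePath) {
>     StringBuilder sb = new StringBuilder();
>     sb.append("base path ").append(basePath).append("%n"); // bug: literal %n
>     sb.append(String.format("base path %s%n", basePath));  // fix: %n expanded
>     return sb.toString();
>   }
> }
> {code}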



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: hadoop-hdfs-client splitoff is going to break code

2015-10-14 Thread Mingliang Liu
The jira tracking this issue is: https://issues.apache.org/jira/browse/HDFS-9241

+1 on option 2

I think it makes sense to make hadoop-client directly depend on hadoop-hdfs 
(which itself depends on hadoop-hdfs-client).

Ciao,

Mingliang Liu
Member of Technical Staff - HDFS,
Hortonworks Inc.
m...@hortonworks.com



> On Oct 14, 2015, at 10:36 AM, Steve Loughran  wrote:
> 
> just an FYI, the split off of hadoop hdfs into client and server is going to 
> break things.
> 
> I know that, as my code is broken: DFSConfigKeys is off the classpath, and 
> HdfsConfiguration, the class I've been loading to force pickup of 
> hdfs-site.xml, is missing too.
> 
> This is because the hadoop-client POM now depends on hadoop-hdfs-client, not 
> hadoop-hdfs, so the things I'm referencing are gone. I'm particularly sad 
> about DFSConfigKeys, as everybody uses it as the one hard-coded resource of 
> HDFS constants; HDFS-6566, covering the issue of making this public, has 
> been sitting around for a year.
> 
> I'm fixing my build by explicitly adding a hadoop-hdfs dependency.
> 
> Any application which uses stuff that has now been declared server-side 
> isn't going to compile any more, which does appear to break the compatibility 
> guidelines we've adopted, specifically "The hadoop-client artifact (maven 
> groupId:artifactId) stays compatible within a major release"
> 
> http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Build_artifacts
> 
> 
> We need to do one of
> 
> 1. Agree that this change is considered acceptable according to policy, and 
> mark it as incompatible in hdfs/CHANGES.TXT
> 2. Change the POMs to add both hadoop-hdfs-client and hadoop-hdfs to 
> hadoop-client, with downstream users free to exclude the server code
> 
> We unintentionally caused similar grief with the move of the s3n clients to 
> hadoop-aws, HADOOP-11074 -something we should have picked up and -1'd. This 
> time we know the problem's going to arise, so let's explicitly make a 
> decision, and share it with our users.
> 
> -steve



Re: [DISCUSS] About the details of JDK-8 support

2015-10-14 Thread Robert Kanter
The only problem with trying to get the JDK 8 trunk builds green (or blue I
guess) is that it's like trying to hit a moving target because of how many
new commits keep coming in.  I was looking at fixing these a while ago, and
managed to at least make them compile and fixed (or worked with others to
fix) some of the unit tests.  I've been really busy on other tasks and
haven't had time to continue working on this in quite a while though.

Currently, it looks like Common is still mostly green, Yarn is having a
build failure with checkstyle, MR has between 1 and 10 test failures, and
HDFS has between 3 and 10 test failures.

I think it's going to be difficult to get these green, and to keep them
green, unless we get more buy in from everyone on new commits being tested
against JDK 8.  Otherwise, it's too hard to keep up with the number of
commits coming in, even if we do get it green.  Perhaps we could have
test-patch also run the patch against JDK 8?


- Robert

On Wed, Oct 14, 2015 at 8:27 AM, Steve Loughran 
wrote:

>
> > On 13 Oct 2015, at 17:32, Haohui Mai  wrote:
> >
> > Just to echo Steve's idea -- if we're seriously considering supporting
> > JDK 8, maybe the first thing to do is to set up the jenkins to run
> > with JDK 8? I'm happy to help. Does anyone know who I can talk to if I
> > need to play around with all the Jenkins knob?
>
> Jenkins is building with Java 7 and 8. All that's needed is to turn off
> the Java 7 build, which I will happily do. The POM can be changed to set
> the minimum JVM version -though that's most likely to be visible to people
> building locally, as you'll need to make sure that you have access to Java
> 7 and Java 8 JVMs if you want to build and test for both.
>
> Jenkins-wise, the big issue is one I've mentioned before: the builds are
> failing and not enough people are caring
>
>
> https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Hdfs-trunk-Java8/488/
>
> Please, let's fix this
>
>


Hadoop-Hdfs-trunk - Build # 2431 - Still Failing

2015-10-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2431/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6931 lines...]
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:26 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:10 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.078 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:14 h
[INFO] Finished at: 2015-10-14T10:36:34+00:00
[INFO] Final Memory: 55M/686M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-1172
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage

Error Message:
Cannot obtain block length for 
LocatedBlock{BP-271488-67.195.81.150-1444818415959:blk_7162739548153522810_1020;
 getBlockSize()=1024; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:52814,DS-9c3d5e13-1c8c-4878-8b7e-a61ca8f8f7b7,DISK]]}

Stack Trace:
java.io.IOException: Cannot obtain block length for 
LocatedBlock{BP-271488-67.195.81.150-1444818415959:blk_7162739548153522810_1020;
 getBlockSize()=1024; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:52814,DS-9c3d5e13-1c8c-4878-8b7e-a61ca8f8f7b7,DISK]]}
at 
org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:399)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:343)
at 
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:275)
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:265)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1046)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1011)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.dfsOpenFileWithRetries(TestDFSUpgradeFromImage.java:177)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyDir(TestDFSUpgradeFromImage.java:213)
at 

Build failed in Jenkins: Hadoop-Hdfs-trunk #2431

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[jing9] HDFS-1172. Blocks in newly completed files are considered

--
[...truncated 6738 lines...]
Running org.apache.hadoop.hdfs.TestLocalDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.281 sec - in 
org.apache.hadoop.hdfs.TestLocalDFS
Running org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.527 sec - in 
org.apache.hadoop.hdfs.TestBlocksScheduledCounter
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.678 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 17, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 135.146 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.032 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.TestMissingBlocksAlert
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.902 sec - in 
org.apache.hadoop.hdfs.TestMissingBlocksAlert
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 21.924 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
testUpgradeFromRel1BBWImage(org.apache.hadoop.hdfs.TestDFSUpgradeFromImage)  
Time elapsed: 11.673 sec  <<< ERROR!
java.io.IOException: Cannot obtain block length for 
LocatedBlock{BP-271488-67.195.81.150-1444818415959:blk_7162739548153522810_1020;
 getBlockSize()=1024; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:52814,DS-9c3d5e13-1c8c-4878-8b7e-a61ca8f8f7b7,DISK]]}
at 
org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:399)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:343)
at 
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:275)
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:265)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1046)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1011)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.dfsOpenFileWithRetries(TestDFSUpgradeFromImage.java:177)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyDir(TestDFSUpgradeFromImage.java:213)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyFileSystem(TestDFSUpgradeFromImage.java:228)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.upgradeAndVerify(TestDFSUpgradeFromImage.java:600)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage(TestDFSUpgradeFromImage.java:622)

Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.761 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestBalancerBandwidth
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.565 sec - in 
org.apache.hadoop.hdfs.TestBalancerBandwidth
Running org.apache.hadoop.hdfs.TestSetTimes
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.415 sec - in 
org.apache.hadoop.hdfs.TestSetTimes
Running org.apache.hadoop.TestGenericRefresh
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.453 sec - in 
org.apache.hadoop.TestGenericRefresh
Running org.apache.hadoop.tracing.TestTracing
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.526 sec - in 
org.apache.hadoop.tracing.TestTracing
Running org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.621 sec - in 
org.apache.hadoop.tracing.TestTracingShortCircuitLocalRead
Running org.apache.hadoop.tracing.TestTraceAdmin
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.108 sec - in 
org.apache.hadoop.tracing.TestTraceAdmin
Running org.apache.hadoop.security.TestPermission
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.917 sec - in 
org.apache.hadoop.security.TestPermission
Running org.apache.hadoop.security.TestPermissionSymlinks
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.397 sec - in 
org.apache.hadoop.security.TestPermissionSymlinks
Running org.apache.hadoop.security.TestRefreshUserMappings
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.356 sec - in 
org.apache.hadoop.security.TestRefreshUserMappings
Running org.apache.hadoop.fs.TestFcHdfsSetUMask
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.17 sec - in 
org.apache.hadoop.fs.TestFcHdfsSetUMask
Running org.apache.hadoop.fs.TestSymlinkHdfsFileSystem
Tests run: 74, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 10.497 sec - 
in 

Build failed in Jenkins: Hadoop-Hdfs-trunk #2434

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[aw] HADOOP-12364. Deleting pid file after stop is causing the daemons to

[aw] fix CHANGES.txt name typo

[stevel] HADOOP-12478. Shell.getWinUtilsPath() has been renamed

[lei] HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid

--
[...truncated 6521 lines...]
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1207)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1740)
at 
org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:887)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1938)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1989)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1970)
at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:494)
at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:433)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:170)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.doImmediateShutdown(NameNode.java:1709)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1744)
at 
org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:887)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1938)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1989)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1970)
at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:494)
at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart(TestLeaseRecovery2.java:433)

testLeaseRecoverByAnotherUser(org.apache.hadoop.hdfs.TestLeaseRecovery2)  Time 
elapsed: 0 sec  <<< ERROR!
java.lang.IllegalStateException: Lease monitor is not running
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:145)
at 

Hadoop-Hdfs-trunk - Build # 2434 - Still Failing

2015-10-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2434/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6714 lines...]

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:07 min]
[INFO] Apache Hadoop HDFS  FAILURE [  02:13 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.068 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:17 h
[INFO] Finished at: 2015-10-14T23:11:00+00:00
[INFO] Final Memory: 70M/884M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx2048m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter156013183083185896.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire6795706767581367156tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_2235428557335086727282tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-12364
Updating HDFS-9238
Updating HADOOP-12478
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
8 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem.testFSLockLongHoldingReport

Error Message:
org.apache.hadoop.test.GenericTestUtils$LogCapturer.clearOutput()V

Stack Trace:
java.lang.NoSuchMethodError: 
org.apache.hadoop.test.GenericTestUtils$LogCapturer.clearOutput()V
at 
org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem.testFSLockLongHoldingReport(TestFSNamesystem.java:298)


FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestSecurityTokenEditLog.testEditLog

Error Message:
org/apache/hadoop/security/authentication/server/AuthenticationFilter

Stack Trace:
java.lang.NoClassDefFoundError: 
org/apache/hadoop/security/authentication/server/AuthenticationFilter
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at 

[jira] [Created] (HDFS-9246) TestGlobPaths#pTestCurlyBracket is failing

2015-10-14 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-9246:
--

 Summary: TestGlobPaths#pTestCurlyBracket is failing
 Key: HDFS-9246
 URL: https://issues.apache.org/jira/browse/HDFS-9246
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


com.google.re2j.PatternSyntaxException: error parsing regexp: Unclosed group at 
pos 10: `myuser}{bc`
at org.apache.hadoop.fs.GlobPattern.error(GlobPattern.java:168)
at org.apache.hadoop.fs.GlobPattern.set(GlobPattern.java:154)
at org.apache.hadoop.fs.GlobPattern.<init>(GlobPattern.java:42)
at org.apache.hadoop.fs.GlobFilter.init(GlobFilter.java:67)
at org.apache.hadoop.fs.GlobFilter.<init>(GlobFilter.java:50)
at org.apache.hadoop.fs.Globber.doGlob(Globber.java:209)
at org.apache.hadoop.fs.Globber.glob(Globber.java:148)
at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1664)
at 
org.apache.hadoop.fs.TestGlobPaths.prepareTesting(TestGlobPaths.java:758)
at 
org.apache.hadoop.fs.TestGlobPaths.pTestCurlyBracket(TestGlobPaths.java:724)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] About the details of JDK-8 support

2015-10-14 Thread Allen Wittenauer

If people want, I could set up a cut of yetus master to run the jenkins 
test-patch (multiple maven repos, docker support, multijdk support, …). Yetus 
would get some real-world testing out of it, and hadoop common-dev could stop 
spinning in circles over some of the same issues month after month. ;)


On Oct 14, 2015, at 3:05 PM, Robert Kanter  wrote:

> The only problem with trying to get the JDK 8 trunk builds green (or blue I
> guess) is that it's like trying to hit a moving target because of how many
> new commits keep coming in.  I was looking at fixing these a while ago, and
> managed to at least make them compile and fixed (or worked with others to
> fix) some of the unit tests.  I've been really busy on other tasks and
> haven't had time to continue working on this in quite a while though.
> 
> Currently, it looks like Common is still mostly green, Yarn is having a
> build failure with checkstyle, MR has between 1 and 10 test failures, and
> HDFS has between 3 and 10 test failures.
> 
> I think it's going to be difficult to get these green, and to keep them
> green, unless we get more buy in from everyone on new commits being tested
> against JDK 8.  Otherwise, it's too hard to keep up with the number of
> commits coming in, even if we do get it green.  Perhaps we could have
> test-patch also run the patch against JDK 8?
> 
> 
> - Robert
> 
> On Wed, Oct 14, 2015 at 8:27 AM, Steve Loughran 
> wrote:
> 
>> 
>>> On 13 Oct 2015, at 17:32, Haohui Mai  wrote:
>>> 
>>> Just to echo Steve's idea -- if we're seriously considering supporting
>>> JDK 8, maybe the first thing to do is to set up the jenkins to run
>>> with JDK 8? I'm happy to help. Does anyone know who I can talk to if I
>>> need to play around with all the Jenkins knob?
>> 
>> Jenkins is building with Java 7 and 8. All that's needed is to turn off
>> the Java 7 build, which I will happily do. The POM can be changed to set
>> the minimum JVM version -though that's most likely to be visible to people
>> building locally, as you'll need to make sure that you have access to Java
>> 7 and Java 8 JVMs if you want to build and test for both.
>> 
>> Jenkins-wise, the big issue is one I've mentioned before: the builds are
>> failing and not enough people are caring
>> 
>> 
>> https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Hdfs-trunk-Java8/488/
>> 
>> Please, let's fix this
>> 
>> 
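
For reference, a hypothetical sketch of what raising the minimum JVM in the 
root POM could look like, using the maven-enforcer-plugin's standard 
requireJavaVersion rule (where this lands in Hadoop's actual poms, and 
whether the version comes from a property, is an assumption here):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-enforcer-plugin</artifactId>
      <executions>
        <execution>
          <id>enforce-minimum-jdk</id>
          <goals>
            <goal>enforce</goal>
          </goals>
          <configuration>
            <rules>
              <!-- Reject builds on anything older than Java 8. -->
              <requireJavaVersion>
                <version>[1.8,)</version>
              </requireJavaVersion>
            </rules>
          </configuration>
        </execution>
      </executions>
    </plugin>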



[jira] [Created] (HDFS-9245) Fix findbugs warnings in hdfs-nfs/WriteCtx

2015-10-14 Thread Mingliang Liu (JIRA)
Mingliang Liu created HDFS-9245:
---

 Summary: Fix findbugs warnings in hdfs-nfs/WriteCtx
 Key: HDFS-9245
 URL: https://issues.apache.org/jira/browse/HDFS-9245
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Mingliang Liu
Assignee: Mingliang Liu


There are findbugs warnings as follows, which were introduced by [HDFS-9092].

It seems fine to ignore them by writing a filter rule in the 
{{findbugsExcludeFile.xml}} file; a sketch of such a rule follows the 
warnings below. 

{noformat}
Inconsistent synchronization of org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.offset; locked 75% of time
At WriteCtx.java:[lines 40-314]
In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx
{noformat}

and

{noformat}
Inconsistent synchronization of org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount; locked 50% of time
At WriteCtx.java:[lines 40-314]
In class org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx
In WriteCtx.java
Field org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx.originalCount
{noformat}
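
A sketch of what such a rule could look like, using the standard findbugs 
exclude-filter syntax; {{IS2_INCONSISTENT_SYNC}} is the findbugs pattern code 
for inconsistent-synchronization warnings, but treat the exact match clauses 
as an assumption to verify against the report:

{code:xml}
<!-- Assumed shape of the rule; confirm field and pattern names first. -->
<Match>
  <Class name="org.apache.hadoop.hdfs.nfs.nfs3.WriteCtx"/>
  <Or>
    <Field name="offset"/>
    <Field name="originalCount"/>
  </Or>
  <Bug pattern="IS2_INCONSISTENT_SYNC"/>
</Match>
{code}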



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk-Java8 - Build # 494 - Still Failing

2015-10-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/494/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7673 lines...]
main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:17 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:15 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.064 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:19 h
[INFO] Finished at: 2015-10-14T13:39:25+00:00
[INFO] Final Memory: 55M/472M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-4255
Updating YARN-4253
Updating YARN-4252
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
8 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryAfterNameNodeRestart2

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals 
to persistent storage due to No journals available to flush. Unsynced 
transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1316)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1207)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1740)
 at 
org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:887)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1938)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1989)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1970)
 at 
org.apache.hadoop.hdfs.TestLeaseRecovery2.hardLeaseRecoveryRestartHelper(TestLeaseRecovery2.java:494)
 at 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #494

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[vvasudev] YARN-4253. Standardize on using PrivilegedOperationExecutor for all

[vvasudev] YARN-4252. Log container-executor invocation details when exit code 
is

[vvasudev] YARN-4255. container-executor does not clean up docker operation 
command

--
[...truncated 7480 lines...]
Running org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.595 sec - in 
org.apache.hadoop.fs.TestHDFSFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestHdfsTextCommand
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.707 sec - in 
org.apache.hadoop.fs.shell.TestHdfsTextCommand
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.82 sec - in 
org.apache.hadoop.fs.TestSWebHdfsFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestResolveHdfsSymlink
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.138 sec - in 
org.apache.hadoop.fs.TestResolveHdfsSymlink
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.775 sec - 
in org.apache.hadoop.fs.TestWebHdfsFileContextMainOperations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.814 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.014 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsFileStatusHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.819 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.761 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.726 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Tests run: 58, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.08 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsHdfs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.806 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemWithAcls
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.212 sec - 
in org.apache.hadoop.fs.viewfs.TestViewFileSystemAtHdfsRoot
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.8 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFileSystemWithXAttrs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.821 sec - in 
org.apache.hadoop.fs.viewfs.TestViewFsDefaultValue
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestSymlinkHdfsFileContext
Tests run: 71, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.576 sec - in 

Hadoop-Hdfs-trunk-Java8 - Build # 497 - Failure

2015-10-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/497/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 15688 lines...]
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:35 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:17 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.047 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:20 h
[INFO] Finished at: 2015-10-15T02:32:55+00:00
[INFO] Final Memory: 55M/600M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-12364
Updating HDFS-9238
Updating HDFS-9210
Updating HADOOP-12478
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
13 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes.testBalancer

Error Message:
File /tmp.txt could only be replicated to 0 nodes instead of minReplication 
(=1).  There are 1 datanode(s) running and 1 node(s) are excluded in this 
operation.
 at 
org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1733)
 at 
org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:298)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2448)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:730)
 at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:500)
 at 
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:637)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:976)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2276)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2272)
 at java.security.AccessController.doPrivileged(Native Method)
 at 

Build failed in Jenkins: Hadoop-Hdfs-trunk #2432

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[vvasudev] YARN-4253. Standardize on using PrivilegedOperationExecutor for all

[vvasudev] YARN-4252. Log container-executor invocation details when exit code 
is

[vvasudev] YARN-4255. container-executor does not clean up docker operation 
command

[rohithsharmaks] YARN-4250. NPE in AppSchedulingInfo#isRequestLabelChanged. 
(Brahma Reddy

--
[...truncated 6685 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.502 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.255 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.296 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Running org.apache.hadoop.hdfs.tools.TestGetConf
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.439 sec - in 
org.apache.hadoop.hdfs.tools.TestGetConf
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.786 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Running org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.461 sec - in 
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
Running org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.446 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Running org.apache.hadoop.hdfs.tools.TestGetGroups
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.639 sec - in 
org.apache.hadoop.hdfs.tools.TestGetGroups
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.601 sec - in 
org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.081 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.213 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.217 sec - in 
org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.08 sec - in 
org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.554 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestDFSRename
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.311 sec - in 
org.apache.hadoop.hdfs.TestDFSRename
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.744 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.043 sec - in 
org.apache.hadoop.hdfs.TestDatanodeConfig
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.103 sec - in 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.672 sec - in 
org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.771 sec - in 
org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.033 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 149.566 sec - 
in org.apache.hadoop.hdfs.TestDFSClientRetries
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.808 sec - 
in org.apache.hadoop.hdfs.TestBlockReaderLocal
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.555 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 

Hadoop-Hdfs-trunk - Build # 2432 - Still Failing

2015-10-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2432/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6878 lines...]
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [04:00 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:17 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.078 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:21 h
[INFO] Finished at: 2015-10-14T14:12:56+00:00
[INFO] Final Memory: 55M/724M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating YARN-4250
Updating YARN-4255
Updating YARN-4253
Updating YARN-4252
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount

Error Message:
Timeout: excess replica count not equal to 2 for block blk_1073741825_1001 
after 2 msec.  Last counts: live = 2, excess = 0, corrupt = 0

Stack Trace:
java.util.concurrent.TimeoutException: Timeout: excess replica count not equal 
to 2 for block blk_1073741825_1001 after 2 msec.  Last counts: live = 2, 
excess = 0, corrupt = 0
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:152)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.checkTimeout(TestNodeCount.java:146)
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestNodeCount.testNodeCount(TestNodeCount.java:130)
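
A side note on the "ignoring option MaxPermSize=768m; support was removed in 
8.0" warnings that recur throughout the Java 8 console output in this digest 
(and on the -XX:MaxPermSize=768m in the forked-VM command in the #498 report 
below): JDK 8 removed the permanent generation, so the flag is a no-op there. 
A Java-8-friendly surefire argLine would swap it for the metaspace 
equivalent; Hadoop assembles its real argLine from pom properties, so treat 
the exact placement in this sketch as an assumption:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <!-- -XX:MaxMetaspaceSize replaces -XX:MaxPermSize on JDK 8. -->
        <argLine>-Xmx2048m -XX:MaxMetaspaceSize=768m -XX:+HeapDumpOnOutOfMemoryError</argLine>
      </configuration>
    </plugin>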




Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #498

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[jing9] HDFS-9223. Code cleanup for DatanodeDescriptor and HeartbeatManager.

--
[...truncated 6724 lines...]
Running org.apache.hadoop.hdfs.TestDisableConnCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.725 sec - in 
org.apache.hadoop.hdfs.TestDisableConnCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLease
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.606 sec - in 
org.apache.hadoop.hdfs.TestLease
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.167 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientBlockVerification
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.092 sec - in 
org.apache.hadoop.hdfs.TestClientBlockVerification
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.092 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.security.TestDelegationToken
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.542 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationToken
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.551 sec - in 
org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.391 sec - in 
org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.234 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.539 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestKeyProviderCache
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.489 sec - in 
org.apache.hadoop.hdfs.TestKeyProviderCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestExternalBlockReader
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.841 sec - in 
org.apache.hadoop.hdfs.TestExternalBlockReader
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.79 sec - in 
org.apache.hadoop.hdfs.TestDatanodeStartupFixesLegacyStorageIDs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestPeerCache
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.352 sec - in 
org.apache.hadoop.hdfs.TestPeerCache
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileCreationClient
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 60.76 sec - in 
org.apache.hadoop.hdfs.TestFileCreationClient
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.479 sec - in 
org.apache.hadoop.hdfs.TestFileCreationEmpty
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 

Hadoop-Hdfs-trunk-Java8 - Build # 498 - Still Failing

2015-10-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/498/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6917 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:36 min]
[INFO] Apache Hadoop HDFS  FAILURE [  02:18 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.058 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:22 h
[INFO] Finished at: 2015-10-15T05:44:52+00:00
[INFO] Final Memory: 77M/860M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx2048m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter4708836608893904204.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire2368143156563814600tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_2166188528831649029600tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-9223
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
2 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer 
state:[Circular Writer:
  directory: /test-1
  target length: 50
  current item: 37
  done: false
] expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! 
Current writer state:[Circular Writer:
 directory: /test-1
 target length: 50
 current item: 37
 done: false
] expected:<0> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at