See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2354/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 6842 lines...]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:18 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [  01:15 h]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [  0.086 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:18 h
[INFO] Finished at: 2015-09-24T23:15:12+00:00
[INFO] Final Memory: 67M/955M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m -XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter6196989397777503253.jar /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire7480556189950191691tmp /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_756190784086620623518tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2342
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4452948 bytes
Compression is 0.0%
Took 3.9 sec
Recording test results
Updating HADOOP-12437
Updating HADOOP-8436
Updating HADOOP-12252
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###################################################################################
############################## FAILED TESTS (if any) ##############################
3 tests failed.
FAILED:  org.apache.hadoop.hdfs.server.namenode.TestFSImageWithXAttr.testPersistXAttr

Error Message:
org/apache/hadoop/io/retry/AtMostOnce

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/io/retry/AtMostOnce
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:111)
        at com.sun.proxy.$Proxy22.create(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:241)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1227)
        at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1158)
        at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:420)
        at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:416)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:416)
        at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:359)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:791)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:780)
        at org.apache.hadoop.hdfs.server.namenode.TestFSImageWithXAttr.testXAttr(TestFSImageWithXAttr.java:71)
        at org.apache.hadoop.hdfs.server.namenode.TestFSImageWithXAttr.testPersistXAttr(TestFSImageWithXAttr.java:106)


REGRESSION:  org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites

Error Message:
Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
  directory: /test-1
  target length: 50
  current item: 50
  done: false
] expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Some writers didn't complete in expected runtime! Current writer state:[Circular Writer:
         directory: /test-1
         target length: 50
         current item: 50
         done: false
] expected:<0> but was:<1>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:743)
        at org.junit.Assert.assertEquals(Assert.java:118)
        at org.junit.Assert.assertEquals(Assert.java:555)
        at org.apache.hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes.testCircularLinkedListWrites(TestSeveralNameNodes.java:90)


FAILED:  org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.testXceiverCount

Error Message:
expected:<20.0> but was:<21.0>

Stack Trace:
java.lang.AssertionError: expected:<20.0> but was:<21.0>
        at org.junit.Assert.fail(Assert.java:88)
        at org.junit.Assert.failNotEquals(Assert.java:743)
        at org.junit.Assert.assertEquals(Assert.java:494)
        at org.junit.Assert.assertEquals(Assert.java:592)
        at org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.checkClusterHealth(TestNamenodeCapacityReport.java:322)
        at org.apache.hadoop.hdfs.server.namenode.TestNamenodeCapacityReport.testXceiverCount(TestNamenodeCapacityReport.java:282)

