Hadoop-Hdfs-trunk - Build # 775 - Still Failing
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/775/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 9948 lines...]
Running org.apache.hadoop.hdfs.TestModTime
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.902 sec
Running org.apache.hadoop.hdfs.TestBlockMissingException
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.834 sec
Running org.apache.hadoop.hdfs.TestReplication
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.552 sec
Running org.apache.hadoop.fs.TestFcHdfsCreateMkdir
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.337 sec
Running org.apache.hadoop.hdfs.protocol.TestCorruptFileBlocks
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.07 sec
Running org.apache.hadoop.hdfs.server.namenode.TestNNThroughputBenchmark
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.845 sec
Running org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.567 sec
Running org.apache.hadoop.fs.viewfs.TestViewFileSystemHdfs
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.316 sec
Running org.apache.hadoop.cli.TestHDFSCLI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 74.61 sec

Results :

Failed tests:

Tests in error:

Tests run: 838, Failures: 4, Errors: 1, Skipped: 0

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:03:35.107s]
[INFO] Apache Hadoop HDFS Project ........................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1:03:35.319s
[INFO] Finished at: Mon Aug 29 12:40:40 UTC 2011
[INFO] Final Memory: 9M/114M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.6:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR]
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
[WARNINGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Recording fingerprints
Updating MAPREDUCE-2891
Updating MAPREDUCE-2898
Recording test results
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure

###################################################################################
############################## FAILED TESTS (if any) ##############################
###################################################################################
5 tests failed.

FAILED: org.apache.hadoop.hdfs.TestDfsOverAvroRpc.testWorkingDirectory

Error Message:
Two methods with same name: delete

Stack Trace:
org.apache.avro.AvroTypeException: Two methods with same name: delete
	at org.apache.avro.reflect.ReflectData.getProtocol(ReflectData.java:394)
	at org.apache.avro.ipc.reflect.ReflectResponder.init(ReflectResponder.java:36)
	at org.apache.hadoop.ipc.AvroRpcEngine.createResponder(AvroRpcEngine.java:189)
	at org.apache.hadoop.ipc.AvroRpcEngine$TunnelResponder.init(AvroRpcEngine.java:196)
	at org.apache.hadoop.ipc.AvroRpcEngine.getServer(AvroRpcEngine.java:232)
	at org.apache.hadoop.ipc.RPC.getServer(RPC.java:550)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:432)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:567)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.init(NameNode.java:559)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1546)
	at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:637)
	at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:541)
	at org.apache.hadoop.hdfs.MiniDFSCluster.init(MiniDFSCluster.java:257)
	at org.apache.hadoop.hdfs.MiniDFSCluster.init(MiniDFSCluster.java:85)
	at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:243)
	at org.apache.hadoop.hdfs.TestLocalDFS.testWorkingDirectory(TestLocalDFS.java:64)
	at
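The AvroTypeException above comes from reflection-based protocol generation: Avro's ReflectData keys protocol messages by bare method name, so an interface with overloaded methods cannot be mapped into a protocol. A minimal sketch of the collision, using a hypothetical interface standing in for HDFS's ClientProtocol (which is assumed to declare both a one-arg and a two-arg delete):

```java
import java.lang.reflect.Method;

public class AvroOverloadCheck {
    // Hypothetical stand-in for ClientProtocol; names are illustrative only.
    interface ClientProtocolSketch {
        boolean delete(String src);                    // older single-arg form
        boolean delete(String src, boolean recursive); // recursive variant
    }

    // Count declared methods sharing a name. Avro's ReflectData.getProtocol
    // identifies each protocol message by method name alone, so any count > 1
    // for a given name produces "Two methods with same name: <name>".
    static long overloadsNamed(Class<?> iface, String name) {
        long n = 0;
        for (Method m : iface.getDeclaredMethods()) {
            if (m.getName().equals(name)) {
                n++;
            }
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println("delete overloads: "
                + overloadsNamed(ClientProtocolSketch.class, "delete"));
    }
}
```

The usual ways out are renaming one overload or collapsing the pair into a single method, since the wire protocol cannot distinguish them by name.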
[jira] [Resolved] (HDFS-2295) Call to localhost/127.0.0.1:54310 failed on connection exception: Connection refused
[ https://issues.apache.org/jira/browse/HDFS-2295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Eli Collins resolved HDFS-2295.
-------------------------------
    Resolution: Invalid

Sounds like a configuration issue (e.g. no DNS set up). Please use the hdfs-dev list for these types of issues.

> Call to localhost/127.0.0.1:54310 failed on connection exception: Connection refused
> ------------------------------------------------------------------------------------
>
>                 Key: HDFS-2295
>                 URL: https://issues.apache.org/jira/browse/HDFS-2295
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node, hdfs client, name-node
>    Affects Versions: 0.20.2
>         Environment: Ubuntu 11.04 64bit, hadoop 0.20.2
>            Reporter: patrick.J
>              Labels: hadoop
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> When I run:
>
>   dm@master:/usr/local/hadoop-0.20.2$ bin/hadoop dfs -ls hdfs://localhost:54310/
>
> it throws these exceptions:
>
> 11/08/29 11:34:29 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 0 time(s).
> 11/08/29 11:34:30 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 1 time(s).
> 11/08/29 11:34:31 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 2 time(s).
> 11/08/29 11:34:32 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 3 time(s).
> 11/08/29 11:34:33 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 4 time(s).
> 11/08/29 11:34:34 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 5 time(s).
> 11/08/29 11:34:35 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 6 time(s).
> 11/08/29 11:34:36 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 7 time(s).
> 11/08/29 11:34:37 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 8 time(s).
> 11/08/29 11:34:38 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:54310. Already tried 9 time(s).
> ls: Call to localhost/127.0.0.1:54310 failed on connection exception: java.net.ConnectException: Connection refused
>
> Our namenode is listening on port 54310:
>
>   <name>fs.default.name</name>
>   <value>hdfs://master:54310</value>

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
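For reference, the likely mismatch here is that the NameNode is configured to bind to master:54310 while the client asks for localhost:54310; a connection to localhost is refused because nothing listens on that interface. The usual fix is to use the same authority on both sides. A core-site.xml sketch (the hostname is this reporter's; adjust to the machine that actually runs the NameNode):

```xml
<!-- core-site.xml sketch for Hadoop 0.20.x (assumed layout).
     The host in fs.default.name must match the address the
     NameNode binds to, and the client must use the same URI. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>
```

With this in place, `bin/hadoop dfs -ls hdfs://master:54310/` (or plain `-ls /`) talks to the right listener, assuming `master` resolves correctly in /etc/hosts or DNS.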
[jira] [Created] (HDFS-2296) If read error while lease is being recovered, client reverts to stale view on block info
If read error while lease is being recovered, client reverts to stale view on block info
----------------------------------------------------------------------------------------

                 Key: HDFS-2296
                 URL: https://issues.apache.org/jira/browse/HDFS-2296
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs client
    Affects Versions: 0.20-append, 0.22.0, 0.23.0
            Reporter: stack
            Priority: Critical

We are seeing the following issue around recoverLease over in hbaselandia.

DFSClient calls recoverLease to assume ownership of a file. recoverLease returns to the client, but it can take time for the new state to propagate. Meanwhile, an incoming read fails even though it is using updated block info. Thereafter all read retries fail, because on exception we revert to the stale block view and never recover.

Laxman reports this issue in the mailing thread below; see it for the first report of this issue:
http://search-hadoop.com/m/S1mOHFRmgk2/%2527FW%253A+Handling+read+failures+during+recovery%2527subj=FW+Handling+read+failures+during+recovery

Chatting with Hairong offline, she suggests this is a general issue around lease recovery, no matter how it is triggered (new recoverLease or not).

I marked this critical; at least over in hbase it is, since we get stuck here recovering a crashed server.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
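The failure mode described above suggests a retry loop that re-fetches block locations from the NameNode on each read error, rather than falling back to the cached (possibly stale) view. A minimal sketch of that direction, using hypothetical interfaces rather than the actual DFSClient API:

```java
import java.io.IOException;

public class RefreshingReader {
    // Hypothetical stand-ins; not the real HDFS client types.
    interface LocatedBlocksSource {
        String fetchLocations() throws IOException;       // ask the NameNode
    }
    interface BlockReader {
        byte[] read(String locations) throws IOException; // read via a DataNode
    }

    // On each failure, refresh block locations before retrying. The bug
    // report describes the opposite behavior: reverting to the stale view
    // on exception, after which every retry is doomed.
    static byte[] readWithRefresh(LocatedBlocksSource nn, BlockReader reader,
                                  int maxRetries) throws IOException {
        String locations = nn.fetchLocations();  // initial view
        IOException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return reader.read(locations);
            } catch (IOException e) {
                last = e;
                locations = nn.fetchLocations(); // refresh, don't revert
            }
        }
        throw last;
    }
}
```

Under this scheme a read that fails mid-recovery succeeds on a later attempt once the NameNode's new state has propagated, instead of looping on the pre-recovery block info.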
[jira] [Created] (HDFS-2297) FindBugs OutOfMemoryError
FindBugs OutOfMemoryError
-------------------------

                 Key: HDFS-2297
                 URL: https://issues.apache.org/jira/browse/HDFS-2297
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: build
    Affects Versions: 0.22.0
         Environment: FindBugs 1.3.9, ant 1.8.2, RHEL6, Jenkins 1.414 in Tomcat 7.0.14, Sun Java HotSpot(TM) 64-Bit Server VM
            Reporter: Joep Rottinghuis
            Assignee: Joep Rottinghuis
            Priority: Blocker

When running the findbugs target from Jenkins, I get an OutOfMemoryError. The FindBugs effort is set to max, which ends up using a lot of memory to go through all the classes. The jvmargs value passed to FindBugs is hardcoded to a 512 MB maximum.

We can leave the default at 512M, as long as we pass it as an ant parameter that can be overridden in individual cases through -D, or in the build.properties file (either in the basedir or in the user's home directory).

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
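The proposal above, keeping a 512M default but making it overridable, could look roughly like the following in an ant build file. The property name and the surrounding task wiring are assumptions for illustration, not the actual Hadoop build.xml:

```xml
<!-- Default heap for the FindBugs JVM; an ant <property> is only set if
     not already defined, so -D or build.properties wins over this default. -->
<property name="findbugs.heap.size" value="512M"/>

<findbugs home="${findbugs.home}"
          effort="max"
          jvmargs="-Xmx${findbugs.heap.size}"
          output="xml"
          outputFile="${findbugs.report.xml}">
  <class location="${build.classes}"/>
  <sourcePath path="${src.dir}"/>
</findbugs>
```

A constrained Jenkins job could then run `ant findbugs -Dfindbugs.heap.size=1024M` to raise the limit for that build only, leaving every other invocation at the 512M default.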
[jira] [Created] (HDFS-2298) TestDfsOverAvroRpc is failing on trunk
TestDfsOverAvroRpc is failing on trunk
--------------------------------------

                 Key: HDFS-2298
                 URL: https://issues.apache.org/jira/browse/HDFS-2298
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: test
    Affects Versions: 0.24.0
            Reporter: Aaron T. Myers

The relevant bit of the error:

{noformat}
-------------------------------------------------------------------------------
Test set: org.apache.hadoop.hdfs.TestDfsOverAvroRpc
-------------------------------------------------------------------------------
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.486 sec <<< FAILURE!
testWorkingDirectory(org.apache.hadoop.hdfs.TestDfsOverAvroRpc)  Time elapsed: 1.424 sec  <<< ERROR!
org.apache.avro.AvroTypeException: Two methods with same name: delete
{noformat}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-2299) TestOfflineEditsViewer is failing on trunk
TestOfflineEditsViewer is failing on trunk
------------------------------------------

                 Key: HDFS-2299
                 URL: https://issues.apache.org/jira/browse/HDFS-2299
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: test
    Affects Versions: 0.24.0
            Reporter: Aaron T. Myers

The relevant bit of the error:

{noformat}
-------------------------------------------------------------------------------
Test set: org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer
-------------------------------------------------------------------------------
Tests run: 2, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 5.652 sec <<< FAILURE!
testStored(org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer)  Time elapsed: 0.038 sec  <<< FAILURE!
java.lang.AssertionError: Reference XML edits and parsed to XML should be same
{noformat}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (HDFS-2300) TestFileAppend4 and TestMultiThreadedSync fail on 20.append
TestFileAppend4 and TestMultiThreadedSync fail on 20.append
-----------------------------------------------------------

                 Key: HDFS-2300
                 URL: https://issues.apache.org/jira/browse/HDFS-2300
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 0.20-append
            Reporter: Jitendra Nath Pandey
            Assignee: Jitendra Nath Pandey

TestFileAppend4 and TestMultiThreadedSync fail on the 20.append branch.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
Hadoop-Hdfs-22-branch - Build # 79 - Still Failing
See https://builds.apache.org/job/Hadoop-Hdfs-22-branch/79/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
###################################################################################
[...truncated 871 lines...]
Buildfile: build.xml

clean-contrib:

clean:

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:1285: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/fuse-dfs/build.xml:22: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/build-contrib.xml:68: Source resource does not exist: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/ivy/libraries.properties

Total time: 0 seconds

======================================================================
======================================================================
BUILD: ant clean tar mvn-deploy findbugs -Dtest.junit.output.format=xml -Dcompile.c++=true -Dcompile.native=true -Dfindbugs.home=$FINDBUGS_HOME -Dforrest.home=$FORREST_HOME -Dclover.home=$CLOVER_HOME -Declipse.home=$ECLIPSE_HOME
======================================================================
======================================================================

Buildfile: build.xml

clean-contrib:

clean:

BUILD FAILED
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/build.xml:1285: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/build.xml:60: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/fuse-dfs/build.xml:22: The following error occurred while executing this line:
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/build-contrib.xml:68: Source resource does not exist: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-22-branch/trunk/src/contrib/ivy/libraries.properties

Total time: 0 seconds

======================================================================
======================================================================
STORE: saving artifacts
======================================================================
======================================================================

mv: cannot stat `build/*.tar.gz': No such file or directory
mv: cannot stat `build/*.jar': No such file or directory
mv: cannot stat `build/test/findbugs': No such file or directory
mv: cannot stat `build/docs/api': No such file or directory
Build Failed
Build step 'Execute shell' marked build as failure
[FINDBUGS] Skipping publisher since build result is FAILURE
Archiving artifacts
Publishing Clover coverage report...
No Clover report will be published due to a Build Failure
Recording test results
Publishing Javadoc
Recording fingerprints
Email was triggered for: Failure
Sending email for trigger: Failure

###################################################################################
############################## FAILED TESTS (if any) ##############################
###################################################################################
No tests ran.