[jira] [Resolved] (HADOOP-9051) “ant test” will build failed for trying to delete a file
[ https://issues.apache.org/jira/browse/HADOOP-9051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Luke Lu resolved HADOOP-9051.
-----------------------------
       Resolution: Fixed
    Fix Version/s: 1.1.2
                   1.2.0

Committed trivial patch to branch-1* branches. Thanks for verifying the patch, Junping.

> “ant test” will build failed for trying to delete a file
> ---------------------------------------------------------
>
>                 Key: HADOOP-9051
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9051
>             Project: Hadoop Common
>          Issue Type: Test
>          Components: test
>    Affects Versions: 1.0.4
>        Environment: OS: Ubuntu 10.04; forrest version 0.8; findbugs version 2.0.1; ant version 1.8.1
>            Reporter: meng gong
>            Assignee: Luke Lu
>            Priority: Minor
>              Labels: test
>             Fix For: 1.2.0, 1.1.2
>
>         Attachments: fix-ant-test, hadoop-9051-v1.patch
>
>
> Run "ant test" on branch-1 of hadoop-common. When the test process reaches "test-core-excluding-commit-and-smoke", it invokes "macro-test-runner" to clear and rebuild the test environment. The ant task command then fails for trying to delete a non-existent file.
> following is the test result logs: > test-core-excluding-commit-and-smoke: >[delete] Deleting: > /home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build/test/testsfailed >[delete] Deleting directory > /home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build/test/data > [mkdir] Created dir: > /home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build/test/data >[delete] Deleting directory > /home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build/test/logs > BUILD FAILED > /home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build.xml:1212: The > following error occurred while executing this line: > /home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build.xml:1166: The > following error occurred while executing this line: > /home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build.xml:1057: > Unable to delete file > /home/jdu/bdc/hadoop-topology-branch1-new/hadoop-common/build/test/logs/userlogs/job_20121112223129603_0001/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attemp
t_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/attempt_20121112223129603_0001_r_00_0/stdout -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
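The failure mode above is an Ant `<delete>` task aborting the build when a path cannot be removed. Ant's `<delete>` task supports the `failonerror` and `quiet` attributes for exactly this case; a minimal sketch of a tolerant cleanup target (the target name and `${build.dir}` paths here are illustrative, not the actual build.xml):

```xml
<!-- Hypothetical cleanup target: clear test output without failing the
     build when a path is already gone or cannot be deleted. -->
<target name="clean-test-dirs">
  <!-- failonerror="false": log the problem and keep going. -->
  <delete dir="${build.dir}/test/logs" failonerror="false"/>
  <!-- quiet="true": also suppress the "Deleting" messages; it implies
       failonerror="false". -->
  <delete file="${build.dir}/test/testsfailed" quiet="true"/>
</target>
```

Whether silently tolerating the failure is the right fix depends on the cause; here the undeletable path came from a pathological recursive userlogs directory, so the cleanup also has to cope with paths deeper than the tooling expects.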
[jira] [Created] (HADOOP-9145) Remove CRLF endings from files
Suresh Srinivas created HADOOP-9145:
------------------------------------
             Summary: Remove CRLF endings from files
                 Key: HADOOP-9145
                 URL: https://issues.apache.org/jira/browse/HADOOP-9145
             Project: Hadoop Common
          Issue Type: Bug
          Components: scripts
    Affects Versions: trunk-win
            Reporter: Suresh Srinivas
         Attachments: HADOOP-9145.patch

A few files committed in HADOOP-8945 have CRLF line endings. This jira changes CRLF to LF, to avoid git flagging them as changed files.
[jira] [Created] (HADOOP-9144) FindBugs reports new warnings in branch-trunk-win
Chris Nauroth created HADOOP-9144:
-----------------------------------
             Summary: FindBugs reports new warnings in branch-trunk-win
                 Key: HADOOP-9144
                 URL: https://issues.apache.org/jira/browse/HADOOP-9144
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs
    Affects Versions: trunk-win
            Reporter: Chris Nauroth
            Assignee: Chris Nauroth

Testing the merge from branch-trunk-win to trunk, we saw some new FindBugs warnings that need to be fixed.
Re: making a hadoop-common test run if a property is set
One approach we've taken in the past is making the junit test skip itself when some precondition is not true. Then, we often create a property which people can use to cause the skipped tests to become a hard error. For example, all the tests that rely on libhadoop start with these lines:

> @Test
> public void myTest() {
>   Assume.assumeTrue(NativeCodeLoader.isNativeCodeLoaded());
>   ...
> }

This causes them to be silently skipped when libhadoop.so is not available or loaded (perhaps because it hasn't been built). However, if you want to cause this to be a hard error, you simply run:

> mvn test -Drequire.test.libhadoop

See TestHdfsNativeCodeLoader.java to see how this is implemented. The main idea is that your Jenkins build slaves use all the -Drequire lines, but people running tests locally are not inconvenienced by the need to build libhadoop.so in every case. This is especially good because libhadoop.so isn't known to build on certain platforms like AIX, etc. It seems to be a good tradeoff so far. I imagine that s3 could do something similar.

cheers,
Colin

On Fri, Dec 14, 2012 at 9:56 AM, Steve Loughran wrote:
> The swiftfs tests need only to run if there's a target filesystem; copying
> the s3/s3n tests, something like
>
>   <property>
>     <name>test.fs.swift.name</name>
>     <value>swift://your-object-store-here/</value>
>   </property>
>
> How does one actually go about making junit tests optional in mvn-land?
> Should the probe/skip logic be in the code -- which can make people think the
> test passed when it didn't actually run? Or can I turn it on/off in maven?
>
> -steve
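The skip-or-hard-error decision Colin describes can be sketched outside JUnit as plain Java. The class name, the capability flag, and the `require.test.foo` property below are illustrative stand-ins; in the real Hadoop tests the check is `NativeCodeLoader.isNativeCodeLoaded()` fed to `Assume.assumeTrue()`:

```java
// Sketch of the "skip unless required" pattern. A hypothetical
// capability check replaces NativeCodeLoader.isNativeCodeLoaded().
public class RequireGate {
    /**
     * Returns true when the test should run. If the capability is
     * missing, skip quietly -- unless the require property was set
     * (e.g. -Drequire.test.foo), in which case the missing capability
     * becomes a hard error, as on a Jenkins slave.
     */
    public static boolean shouldRun(boolean capabilityPresent, String requireProperty) {
        if (capabilityPresent) {
            return true;
        }
        if (System.getProperty(requireProperty) != null) {
            throw new IllegalStateException(
                requireProperty + " was set, but the required capability is unavailable");
        }
        return false; // quiet skip, like Assume.assumeTrue(false)
    }

    public static void main(String[] args) {
        // Capability present: run regardless of the property.
        System.out.println(shouldRun(true, "require.test.foo"));
        // Capability absent and property unset: silent skip.
        System.out.println(shouldRun(false, "require.test.foo"));
    }
}
```

In JUnit itself the skip is expressed with `Assume.assumeTrue(...)`, which marks the test as ignored rather than returning a boolean; the sketch only isolates the decision logic.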
making a hadoop-common test run if a property is set
The swiftfs tests need only to run if there's a target filesystem; copying the s3/s3n tests, something like:

  <property>
    <name>test.fs.swift.name</name>
    <value>swift://your-object-store-here/</value>
  </property>

How does one actually go about making junit tests optional in mvn-land? Should the probe/skip logic be in the code -- which can make people think the test passed when it didn't actually run? Or can I turn it on/off in maven?

-steve
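On Steve's last question, one common pure-Maven answer is a profile activated by the same property, so the swift tests are excluded from surefire unless the property is supplied on the command line. A sketch of pom.xml fragments under assumptions: the profile id, the `TestSwift*` name pattern, and the `swift.test.exclude` indirection property are all made up for illustration:

```xml
<!-- Default: exclude the swift tests via an indirection property. -->
<properties>
  <swift.test.exclude>**/TestSwift*.java</swift.test.exclude>
</properties>

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <configuration>
        <excludes>
          <exclude>${swift.test.exclude}</exclude>
        </excludes>
      </configuration>
    </plugin>
  </plugins>
</build>

<profiles>
  <profile>
    <id>swift-tests</id>
    <activation>
      <!-- Activates when -Dtest.fs.swift.name=... is given. -->
      <property>
        <name>test.fs.swift.name</name>
      </property>
    </activation>
    <properties>
      <!-- A pattern matching no test file, so the swift tests run. -->
      <swift.test.exclude>no-such-test</swift.test.exclude>
    </properties>
  </profile>
</profiles>
```

Overriding a property in the profile sidesteps surefire's plugin-configuration merging rules; the trade-off against the in-code `Assume` approach is that Maven-level exclusion reports the tests as not run at all rather than as skipped.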
[jira] [Created] (HADOOP-9143) repair test org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos
Ivan A. Veselovsky created HADOOP-9143:
---------------------------------------
             Summary: repair test org.apache.hadoop.fs.http.server.TestHttpFSWithKerberos
                 Key: HADOOP-9143
                 URL: https://issues.apache.org/jira/browse/HADOOP-9143
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Ivan A. Veselovsky
            Assignee: Ivan A. Veselovsky

Some of the test cases in this test class fail because they are affected by static state changed by previous test cases, namely the static field org.apache.hadoop.security.UserGroupInformation.loginUser. The suggested patch solves this problem. Besides, the following improvements are made:
1) the user principal and keytab values are parametrized via system properties;
2) the Jetty server and the minicluster are shut down between test cases, to make the test methods independent of each other.
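The parametrization in (1) can be sketched as plain Java: read the value from a system property and fall back to a local default, so CI can override it with -D flags. The property names and default values below are hypothetical, not the ones in the actual patch:

```java
// Sketch: Kerberos test settings overridable via -D system properties.
// Property names and defaults are illustrative placeholders.
public class TestKerberosConfig {
    public static String principal() {
        return System.getProperty("httpfs.test.kerberos.principal",
                                  "client/localhost@EXAMPLE.COM");
    }

    public static String keytab() {
        return System.getProperty("httpfs.test.kerberos.keytab",
                                  System.getProperty("user.home") + "/test.keytab");
    }

    public static void main(String[] args) {
        // With no -D flags, these print the built-in defaults.
        System.out.println(principal());
        System.out.println(keytab());
    }
}
```

A run such as `java -Dhttpfs.test.kerberos.principal=alice@REALM TestKerberosConfig` would then pick up the override instead of the default.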