Build failed in Jenkins: Hadoop-Hdfs-trunk #1692

2014-03-05 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1692/changes

Changes:

[raviprak] YARN-1768. Fixed error message being too verbose when killing a 
non-existent application. Contributed by Tsuyoshi Ozawa

[wheat9] HADOOP-10379. Protect authentication cookies with the HttpOnly and 
Secure flags. Contributed by Haohui Mai.

[wheat9] HDFS-5321. Clean up the HTTP-related configuration in HDFS. 
Contributed by Haohui Mai.

[wheat9] HADOOP-8691. FsShell can print Found xxx items unnecessarily often. 
Contributed By Daryn Sharp.

[szetszwo] svn merge --reintegrate 
https://svn.apache.org/repos/asf/hadoop/common/branches/HDFS-5535 back to trunk.

[vinodkv] YARN-1766. Fixed a bug in ResourceManager to use configuration loaded 
from the configuration-provider when booting up. Contributed by Xuan Gong.

[cmccabe] HDFS-6051. HDFS cannot run on Windows since short-circuit memory 
segment changes (cmccabe)

[stack] HDFS-6047 TestPread NPE inside in DFSInputStream 
hedgedFetchBlockByteRange (stack)

[vinodkv] YARN-986. Changed client side to be able to figure out the right RM 
Delegation token for the right ResourceManager when HA is enabled. Contributed 
by Karthik Kambatla.

[vinodkv] YARN-1730. Implemented simple write-locking in the LevelDB based 
timeline-store. Contributed by Billie Rinaldi.

--
[...truncated 12784 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.726 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestFSNamesystem
Running org.apache.hadoop.hdfs.server.namenode.TestStreamFile
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.208 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestStreamFile
Running org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionManager
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.051 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestNNStorageRetentionManager
Running org.apache.hadoop.hdfs.server.namenode.TestCorruptFilesJsp
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.238 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestCorruptFilesJsp
Running org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.039 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestBlockUnderConstruction
Running org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.388 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestFailureOfSharedDir
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.127 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.59 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.953 sec - 
in org.apache.hadoop.hdfs.server.namenode.ha.TestHAStateTransitions
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStateTransitionFailure
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.119 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestStateTransitionFailure
Running org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 82.543 sec - 
in org.apache.hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.288 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints
Running org.apache.hadoop.hdfs.server.namenode.ha.TestHAWebUI
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.601 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestHAWebUI
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 46.646 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestDNFencingWithReplication
Running org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.434 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapStandby
Running org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.457 sec - in 
org.apache.hadoop.hdfs.server.namenode.ha.TestDelegationTokensWithHA
Running org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 104.063 sec - 
in org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover
Running 

Hadoop-Hdfs-trunk - Build # 1692 - Still Failing

2014-03-05 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1692/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 12977 lines...]
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.3:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [2:07:31.743s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [3.258s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 2:07:36.770s
[INFO] Finished at: Wed Mar 05 13:43:21 UTC 2014
[INFO] Final Memory: 33M/317M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-5535
Updating HDFS-5321
Updating HDFS-6051
Updating YARN-1766
Updating YARN-1730
Updating YARN-986
Updating HDFS-6047
Updating HADOOP-10379
Updating HADOOP-8691
Updating YARN-1768
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
3 tests failed.
REGRESSION:  org.apache.hadoop.cli.TestAclCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that 
failed

Stack Trace:
java.lang.AssertionError: One of the tests failed. See the Detailed results to 
identify the command that failed
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
at org.apache.hadoop.cli.TestAclCLI.tearDown(TestAclCLI.java:49)


REGRESSION:  org.apache.hadoop.cli.TestHDFSCLI.testAll

Error Message:
One of the tests failed. See the Detailed results to identify the command that 
failed

Stack Trace:
java.lang.AssertionError: One of the tests failed. See the Detailed results to 
identify the command that failed
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.hadoop.cli.CLITestHelper.displayResults(CLITestHelper.java:263)
at org.apache.hadoop.cli.CLITestHelper.tearDown(CLITestHelper.java:125)
at org.apache.hadoop.cli.TestHDFSCLI.tearDown(TestHDFSCLI.java:85)


REGRESSION:  
org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache.testUncacheQuiesces

Error Message:
Bad value for metric BlocksCached expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: Bad value for metric BlocksCached expected:<1> but was:<2>
at org.junit.Assert.fail(Assert.java:93)
at 

[jira] [Created] (HDFS-6057) DomainSocketWatcher.watcherThread should be marked as daemon thread

2014-03-05 Thread Eric Sirianni (JIRA)
Eric Sirianni created HDFS-6057:
---

 Summary: DomainSocketWatcher.watcherThread should be marked as 
daemon thread
 Key: HDFS-6057
 URL: https://issues.apache.org/jira/browse/HDFS-6057
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.4.0
Reporter: Eric Sirianni
Priority: Minor


Running the tip of {{branch-2.4}}, I'm observing some zombie processes in my 
environment.  jstack shows the following thread preventing the JVM from 
shutting down:

{noformat}
"Thread-3" prio=10 tid=0x7f0a908a5800 nid=0x3ee9 runnable [0x7f0a89471000]
   java.lang.Thread.State: RUNNABLE
at org.apache.hadoop.net.unix.DomainSocketWatcher.doPoll0(Native Method)
at 
org.apache.hadoop.net.unix.DomainSocketWatcher.access$800(DomainSocketWatcher.java:52)
at 
org.apache.hadoop.net.unix.DomainSocketWatcher$1.run(DomainSocketWatcher.java:455)
at java.lang.Thread.run(Thread.java:679)
{noformat}

Marking the {{DomainSocketWatcher.watcherThread}} as a daemon thread would 
prevent this situation.  Is there any reason it isn't classified as such?
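
For illustration, a minimal sketch of the proposed change; this is not the 
actual Hadoop source, and the anonymous runnable merely stands in for the poll 
loop shown in the jstack output above.

{noformat}
public class DaemonWatcherSketch {
  public static void main(String[] args) {
    // Hypothetical sketch: a daemon thread cannot keep the JVM alive once
    // all non-daemon threads have exited, which is the behavior wanted here.
    Thread watcherThread = new Thread(new Runnable() {
      @Override
      public void run() {
        // stand-in for the DomainSocketWatcher poll loop (doPoll0 above)
      }
    }, "DomainSocketWatcher-sketch");
    watcherThread.setDaemon(true);   // the proposed one-line change
    watcherThread.start();
  }
}
{noformat}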

Also, tracing through the code, I don't see any code path where 
{{DomainSocketWatcher.close()}} is invoked (though this would seem to be a 
larger issue -- maybe I'm missing something...).





[jira] [Created] (HDFS-6058) Fix TestHDFSCLI failures after HADOOP-8691 change

2014-03-05 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-6058:
---

 Summary: Fix TestHDFSCLI failures after HADOOP-8691 change
 Key: HDFS-6058
 URL: https://issues.apache.org/jira/browse/HDFS-6058
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Vinayakumar B


HADOOP-8691 changed the ls command output.

TestHDFSCLI needs to be updated to match this change.

Latest precommit builds are failing because of this.
https://builds.apache.org/job/PreCommit-HDFS-Build/6305//testReport/





FileSystem and FileContext Janitor, at your service !

2014-03-05 Thread Jay Vyas
Hi HCFS Community :)

This is Jay...  Some of you know me; I hack on a broad range of file
system and Hadoop ecosystem interoperability stuff.  I just wanted to
introduce myself and let you folks know I'm going to be working to help
clean up the existing unit testing frameworks for the FileSystem and
FileContext APIs.  I've listed some bullets below.

- Bytecode-inspection-based code coverage for file system APIs, with a tool
such as Cobertura.

- HADOOP-9361 points out that there are many different types of file
systems.

- Creating mock file systems which can be used to validate API tests, and
which emulate different FS semantics (atomic directory creation, eventual
consistency, strict consistency, POSIX compliance, append support, etc.);
a rough sketch of such a contract-style check follows this list.
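
To make the last bullet concrete, here is a minimal sketch of the kind of
portable, contract-style check I have in mind.  It assumes only the public
FileSystem API; the class name and test name are made up for illustration.

{noformat}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.Assert;
import org.junit.Test;

public class FileSystemContractSketch {
  @Test
  public void mkdirsThenCreateIsVisible() throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/tmp/fs-contract-sketch");
    Path file = new Path(dir, "data.txt");
    Assert.assertTrue(fs.mkdirs(dir));          // directory creation
    FSDataOutputStream out = fs.create(file);   // file creation
    out.write(new byte[]{1, 2, 3});
    out.close();
    // Visibility check: semantics here differ between strict and eventually
    // consistent file systems, which is exactly what a shared contract
    // suite would need to parameterize.
    Assert.assertTrue(fs.exists(file));
    Assert.assertEquals(3, fs.getFileStatus(file).getLen());
    fs.delete(dir, true);                       // recursive cleanup
  }
}
{noformat}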

Is anyone interested in the above issues, or does anyone have opinions on
how / where I should get started?

Our end goal is to have a more transparent and portable set of test APIs
for Hadoop file system implementors across the board, so that we can all
test our individual implementations confidently.

So, anywhere I can lend a hand, let me know.  I think this effort will
require all of us in the file system community to join forces, and it will
benefit us all immensely in the long run as well.

-- 
Jay Vyas
http://jayunit100.blogspot.com


[jira] [Resolved] (HDFS-6052) TestBlockReaderLocal fails if native code is not loaded or platform is Windows.

2014-03-05 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-6052.
-

Resolution: Duplicate

I'm resolving this as a duplicate of HDFS-6059.  (Technically, this one came
first, but there is more activity on HDFS-6059 at this point.)

 TestBlockReaderLocal fails if native code is not loaded or platform is 
 Windows.
 ---

 Key: HDFS-6052
 URL: https://issues.apache.org/jira/browse/HDFS-6052
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client, test
Affects Versions: 3.0.0, 2.4.0
Reporter: Chris Nauroth

 Since the HDFS-5950 patch, {{TestBlockReaderLocal}} directly instantiates 
 {{ShortCircuitShm}}.  This fails if running on a platform where the native 
 code is not loaded.
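
A minimal sketch of the usual guard for such tests (illustrative only; whether
HDFS-6059 takes exactly this approach is not shown here):

{noformat}
import org.apache.hadoop.util.NativeCodeLoader;
import org.apache.hadoop.util.Shell;
import org.junit.Assume;
import org.junit.Before;

public class NativeGuardSketch {
  // Hypothetical guard: skip short-circuit shared-memory tests when the
  // native library is unavailable or the platform is Windows.
  @Before
  public void requireNativeSupport() {
    Assume.assumeTrue(NativeCodeLoader.isNativeCodeLoaded());
    Assume.assumeTrue(!Shell.WINDOWS);
  }
}
{noformat}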





[jira] [Created] (HDFS-6060) NameNode should not check DataNode layout version

2014-03-05 Thread Brandon Li (JIRA)
Brandon Li created HDFS-6060:


 Summary: NameNode should not check DataNode layout version
 Key: HDFS-6060
 URL: https://issues.apache.org/jira/browse/HDFS-6060
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Brandon Li
Assignee: Brandon Li


In the current code, the NameNode allows the DataNode layout version to differ
only when the NameNode is in rolling-upgrade mode. A DataNode cannot register
with the NameNode when only the DataNode is being upgraded and its layout
version differs from the NameNode's.

The NameNode should not check the DataNode layout version in any case.
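
For illustration only, a sketch of the shape of the check being questioned;
the method and its parameters are hypothetical, not the actual NameNode
registration code.

{noformat}
import java.io.IOException;

public class LayoutVersionCheckSketch {
  // Hypothetical shape of the registration-time check under discussion;
  // this is not the actual NameNode code.
  static void registerDatanode(int dnLayoutVersion, int nnLayoutVersion,
                               boolean rollingUpgrade) throws IOException {
    if (dnLayoutVersion != nnLayoutVersion && !rollingUpgrade) {
      // Today: registration is refused outside of rolling upgrade.
      throw new IOException("Layout version mismatch");
    }
    // Proposal: drop the layout-version comparison entirely so that a
    // DataNode-only upgrade can still register with the NameNode.
  }
}
{noformat}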





[jira] [Created] (HDFS-6061) Allow dfs.datanode.shared.file.descriptor.path to contain multiple entries and fall back when needed

2014-03-05 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-6061:
--

 Summary: Allow dfs.datanode.shared.file.descriptor.path to contain 
multiple entries and fall back when needed
 Key: HDFS-6061
 URL: https://issues.apache.org/jira/browse/HDFS-6061
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


In {{SharedFileDescriptorFactory}}, it would be nice if 
dfs.datanode.shared.file.descriptor.path could contain multiple entries and 
allow fallback between them.  That way, if a user doesn't have {{/dev/shm}} 
configured (the current default), the user could use {{/tmp}}.  This is mainly 
a concern on BSDs and some other systems where {{/dev/shm}} doesn't exist.
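
A minimal sketch of the fallback idea, assuming the property value becomes a
comma-separated list; the class and helper names below are hypothetical.

{noformat}
import java.io.File;
import org.apache.hadoop.conf.Configuration;

public class SharedFdPathFallbackSketch {
  // Hypothetical fallback: try each configured directory in order and use
  // the first one that exists and is writable.
  static File chooseSharedFdDir(Configuration conf) {
    String[] candidates = conf.getTrimmedStrings(
        "dfs.datanode.shared.file.descriptor.path", "/dev/shm", "/tmp");
    for (String dir : candidates) {
      File f = new File(dir);
      if (f.isDirectory() && f.canWrite()) {
        return f;  // first usable entry wins
      }
    }
    throw new IllegalStateException(
        "No usable shared file descriptor path among the configured entries");
  }
}
{noformat}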





[jira] [Created] (HDFS-6062) TestRetryCacheWithHA#testConcat is flaky

2014-03-05 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-6062:
---

 Summary: TestRetryCacheWithHA#testConcat is flaky
 Key: HDFS-6062
 URL: https://issues.apache.org/jira/browse/HDFS-6062
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor
 Attachments: HDFS-6062.000.patch

After adding the retry cache metrics check, TestRetryCacheWithHA#testConcat can
fail (https://builds.apache.org/job/PreCommit-HDFS-Build/6313//testReport/).

The reason is that the test uses dfs.exists(targetPath) to check whether the
concat has already been done on the original active NN. However, since the
target file is created at the beginning of the test, that check always returns
true, so it is possible that the concat is actually processed for the first
time on the new active NN, in which case the retry cache will not be hit.
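
A minimal sketch of a more robust check, assuming the combined length of the
source files is a suitable signal; this is illustrative only and the attached
patch may take a different approach.

{noformat}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class ConcatCheckSketch {
  // Hypothetical check: rather than dfs.exists(target), which is true from
  // the moment the target file is pre-created, compare the target length,
  // which only reaches the combined size once concat has actually run.
  static boolean concatDone(DistributedFileSystem dfs, Path target,
                            long expectedCombinedLength) throws IOException {
    return dfs.getFileStatus(target).getLen() == expectedCombinedLength;
  }
}
{noformat}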


