[jira] [Created] (HDFS-7634) Lazy persist (memory) file should not support truncate currently

2015-01-16 Thread Yi Liu (JIRA)
Yi Liu created HDFS-7634:


 Summary: Lazy persist (memory) file should not support truncate 
currently
 Key: HDFS-7634
 URL: https://issues.apache.org/jira/browse/HDFS-7634
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Yi Liu
Assignee: Yi Liu


Similar to {{append}}, lazy persist (memory) files should not currently support 
truncate. Quoting the reason from the HDFS-6581 design doc:
{quote}
Appends to files created with the LAZY_PERSIST flag will not be allowed in the 
initial implementation to avoid the complexity of keeping in-memory and on-disk 
replicas in sync on a given DataNode.
{quote}
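
A minimal sketch of the intended guard, assuming a hypothetical helper rather 
than the actual FSNamesystem truncate path (the real patch may hook in 
elsewhere and word the error differently):
{code:java}
// Illustrative only -- not the actual HDFS-7634 change.
final class LazyPersistTruncateGuard {

  /**
   * Rejects truncate for files created with the LAZY_PERSIST flag,
   * mirroring the existing restriction on append.
   */
  static void checkTruncateAllowed(String path, boolean isLazyPersist) {
    if (isLazyPersist) {
      throw new UnsupportedOperationException(
          "Cannot truncate lazy persist file " + path
          + ": in-memory and on-disk replicas could diverge");
    }
  }

  private LazyPersistTruncateGuard() {}
}
{code}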



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk-Java8 - Build # 72 - Still Failing

2015-01-16 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/72/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6726 lines...]
[INFO] --- maven-source-plugin:2.1.2:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.1.2:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ..................................... FAILURE [  03:01 h]
[INFO] Apache Hadoop HttpFS ................................... SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal .................. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ................................. SKIPPED
[INFO] Apache Hadoop HDFS Project ............................. SUCCESS [  2.250 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:01 h
[INFO] Finished at: 2015-01-16T14:36:22+00:00
[INFO] Final Memory: 52M/249M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
  /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter428265392894221.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire5460360386593944377tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_3808551668838236241224tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-7457
Updating HADOOP-11261
Updating HADOOP-11318
Updating HDFS-7189
Updating HADOOP-11350
Updating YARN-3064
Updating YARN-2861
Updating HADOOP-11483
Updating HDFS-7615
Updating HDFS-7591
Updating HADOOP-8757
Updating HDFS-7581
Updating YARN-3005
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
3 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode

Error Message:
expected:<0> but was:<-3>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<-3>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.balancer.TestBalancer.testUnknownDatanode(TestBalancer.java:806)


REGRESSION:  
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager.testNumVersionsReportedCorrect

Error Message:
The map of version counts returned by DatanodeManager was not what it was 
expected to be on iteration 433 expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: The map of version counts returned by DatanodeManager 
was not what it was expected to be on iteration 433 expected:<0> but was:<1>
at 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #72

2015-01-16 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/72/changes

Changes:

[aajisaka] HADOOP-11483. HardLink.java should use the jdk7 createLink method

[aajisaka] YARN-3005. [JDK7] Use switch statement for String instead of if-else 
statement in RegistrySecurity.java (Contributed by Kengo Seki)

[aw] HDFS-7581. HDFS documentation needs updating post-shell rewrite (aw)

[aajisaka] HADOOP-11318. Update the document for hadoop fs -stat

[arp] HDFS-7591. hdfs classpath command should support same options as hadoop 
classpath. (Contributed by Varun Saxena)

[jianhe] YARN-2861. Fixed Timeline DT secret manager to not reuse RM's configs. 
Contributed by Zhijie Shen

[rkanter] HADOOP-8757. Metrics should disallow names with invalid characters 
(rchiang via rkanter)

[kihwal] HDFS-7615. Remove longReadLock. Contributed by Kihwal Lee.

[kihwal] HDFS-7457. DatanodeID generates excessive garbage. Contributed by 
Daryn Sharp.

[wheat9] HADOOP-11350. The size of header buffer of HttpServer is too small 
when HTTPS is enabled. Contributed by Benoy Antony.

[yliu] HDFS-7189. Add trace spans for DFSClient metadata operations. (Colin P. 
McCabe via yliu)

[junping_du] YARN-3064. 
TestRMRestart/TestContainerResourceUsage/TestNodeManagerResync failure with 
allocation timeout. (Contributed by Jian He)

[stevel] HADOOP-11261 Set custom endpoint for S3A. (Thomas Demoor via stevel)

--
[...truncated 6533 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.328 sec - in 
org.apache.hadoop.hdfs.TestLeaseRenewer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 162.921 sec - 
in org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.807 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppend4
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.173 sec - in 
org.apache.hadoop.hdfs.TestFileAppend4
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.694 sec - in 
org.apache.hadoop.hdfs.TestParallelRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClose
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.382 sec - in 
org.apache.hadoop.hdfs.TestClose
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.419 sec - in 
org.apache.hadoop.hdfs.TestDFSAddressConfig
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.806 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.348 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.675 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.322 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestWriteRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.206 sec - in 
org.apache.hadoop.hdfs.TestWriteRead
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.999 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Java HotSpot(TM) 64-Bit Server VM warning: ignoring 

[jira] [Resolved] (HDFS-7591) hdfs classpath command should support same options as hadoop classpath.

2015-01-16 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-7591.
-
   Resolution: Fixed
Fix Version/s: (was: 3.0.0)
   2.7.0

Thanks for the clarification. I verified the patch on branch-2.

Committed to branch-2. Thank you for the contribution [~varun_saxena]!

 hdfs classpath command should support same options as hadoop classpath.
 ---

 Key: HDFS-7591
 URL: https://issues.apache.org/jira/browse/HDFS-7591
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: scripts
Reporter: Chris Nauroth
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7591.001.patch, HDFS-7591.002.patch, 
 HDFS-7591.branch-2.patch


 HADOOP-10903 enhanced the {{hadoop classpath}} command to support optional 
 expansion of the wildcards and bundling the classpath into a jar file 
 containing a manifest with the Class-Path attribute.  The other classpath 
 commands should do the same for consistency.
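
For context, the Class-Path manifest technique HADOOP-10903 describes can be 
sketched in plain Java; the file names below are hypothetical and Hadoop's own 
implementation differs in detail:
{code:java}
// Sketch of bundling a long classpath into a manifest-only jar.
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.jar.Attributes;
import java.util.jar.JarOutputStream;
import java.util.jar.Manifest;

public class ClasspathJarSketch {
  public static void main(String[] args) throws IOException {
    // Class-Path entries are space-separated relative URLs.
    String classPath = "lib/hadoop-common.jar lib/hadoop-hdfs.jar lib/guava.jar";

    Manifest manifest = new Manifest();
    Attributes attrs = manifest.getMainAttributes();
    attrs.put(Attributes.Name.MANIFEST_VERSION, "1.0");
    attrs.put(Attributes.Name.CLASS_PATH, classPath);

    // The jar holds nothing but the manifest; running with
    // -classpath classpath.jar then pulls in every listed entry.
    try (JarOutputStream jar =
        new JarOutputStream(new FileOutputStream("classpath.jar"), manifest)) {
      jar.flush();
    }
  }
}
{code}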



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7635) Remove TestCorruptFilesJsp from branch-2.

2015-01-16 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-7635:
---

 Summary: Remove TestCorruptFilesJsp from branch-2.
 Key: HDFS-7635
 URL: https://issues.apache.org/jira/browse/HDFS-7635
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor


HDFS-6252 removed corrupt_files.jsp, but there is still a test suite named 
{{TestCorruptFilesJsp}} in branch-2.  The tests attempt to call 
corrupt_files.jsp and fail on an HTTP 404 error response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7635) Remove TestCorruptFilesJsp from branch-2.

2015-01-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-7635.
-
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed

Arpit, thank you for the quick review.  I have committed this to branch-2.

 Remove TestCorruptFilesJsp from branch-2.
 -

 Key: HDFS-7635
 URL: https://issues.apache.org/jira/browse/HDFS-7635
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7635-branch-2.001.patch


 HDFS-6252 removed corrupt_files.jsp, but there is still a test suite named 
 {{TestCorruptFilesJsp}} in branch-2.  The tests attempt to call 
 corrupt_files.jsp and fail on an HTTP 404 error response.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7636) Support sorting by columns in the Datanode Information tables of the NameNode web UI.

2015-01-16 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-7636:
---

 Summary: Support sorting by columns in the Datanode Information 
tables of the NameNode web UI.
 Key: HDFS-7636
 URL: https://issues.apache.org/jira/browse/HDFS-7636
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Chris Nauroth
Assignee: Chris Nauroth


During discussion of HDFS-7604, we mentioned that it would be nice to be able 
to sort the Datanode Information tables by count of Failed Volumes.  This issue 
proposes to implement sorting by column in these tables.
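
The Datanode Information tables are rendered client-side in the NameNode web 
UI, so the real change belongs in the JavaScript templates; purely as an 
illustration of per-column sorting by failed-volume count, a server-side 
sketch over a hypothetical row type could look like:
{code:java}
// Illustration only: hypothetical row type, not code from the NameNode web UI.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class DatanodeTableSortSketch {
  static class Row {
    final String host;
    final int failedVolumes;
    Row(String host, int failedVolumes) {
      this.host = host;
      this.failedVolumes = failedVolumes;
    }
  }

  public static void main(String[] args) {
    List<Row> rows = new ArrayList<>();
    rows.add(new Row("dn1.example.com", 0));
    rows.add(new Row("dn2.example.com", 2));
    rows.add(new Row("dn3.example.com", 1));

    // Most failed volumes first, so troubled nodes surface at the top.
    rows.sort(Comparator.comparingInt((Row r) -> r.failedVolumes).reversed());
    for (Row r : rows) {
      System.out.println(r.host + " failedVolumes=" + r.failedVolumes);
    }
  }
}
{code}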



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7636) Support sorting by columns in the Datanode Information tables of the NameNode web UI.

2015-01-16 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HDFS-7636.
-
Resolution: Duplicate

Oops, this is a duplicate of HDFS-6407.  Resolving.

 Support sorting by columns in the Datanode Information tables of the NameNode 
 web UI.
 -

 Key: HDFS-7636
 URL: https://issues.apache.org/jira/browse/HDFS-7636
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Chris Nauroth
Assignee: Chris Nauroth

 During discussion of HDFS-7604, we mentioned that it would be nice to be able 
 to sort the Datanode Information tables by count of Failed Volumes.  This 
 issue proposes to implement sorting by column in these tables.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)