Re: [VOTE] Release Apache Hadoop 2.3.0

2014-02-14 Thread Chris Douglas
+1 (binding)

Verified checksum, signature. Built from src, poked at single-node
cluster, ran some unit tests. -C

On Tue, Feb 11, 2014 at 6:49 AM, Arun C Murthy  wrote:
> Folks,
>
> I've created a release candidate (rc0) for hadoop-2.3.0 that I would like to 
> get released.
>
> The RC is available at: http://people.apache.org/~acmurthy/hadoop-2.3.0-rc0
> The RC tag in svn is here: 
> https://svn.apache.org/repos/asf/hadoop/common/tags/release-2.3.0-rc0
>
> The maven artifacts are available via repository.apache.org.
>
> Please try the release and vote; the vote will run for the usual 7 days.
>
> thanks,
> Arun
>
> PS: Thanks to Andrew, Vinod & Alejandro for all their help in various release 
> activities.


[jira] [Created] (HDFS-5956) A file size is multiplied by the replication factor in 'hdfs oiv -p FileDistribution' option

2014-02-14 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-5956:
---

 Summary: A file size is multiplied by the replication factor in 
'hdfs oiv -p FileDistribution' option
 Key: HDFS-5956
 URL: https://issues.apache.org/jira/browse/HDFS-5956
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: tools
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA


In FileDistributionCalculator.java, 
{code}
// Current code: the replication factor inflates each file's size, so
// both the size-distribution buckets and maxFileSize are wrong.
long fileSize = 0;
for (BlockProto b : f.getBlocksList()) {
  fileSize += b.getNumBytes() * f.getReplication();
}
maxFileSize = Math.max(fileSize, maxFileSize);
totalSpace += fileSize;
{code}
should be
{code}
// Fixed: fileSize is the file's logical size; replication is applied
// only when accounting for the raw space consumed.
long fileSize = 0;
for (BlockProto b : f.getBlocksList()) {
  fileSize += b.getNumBytes();
}
maxFileSize = Math.max(fileSize, maxFileSize);
totalSpace += fileSize * f.getReplication();
{code}
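
To illustrate the difference (hypothetical numbers, not from the patch): a single-block 100 MB file stored at replication 3 diverges as follows under the two versions.
{code}
// Worked example (hypothetical numbers): one 100 MB block, replication 3.
long numBytes = 100L << 20;  // 104,857,600 bytes
int replication = 3;

long buggySize = numBytes * replication;  // 300 MB: wrong bucket, inflated maxFileSize
long fixedSize = numBytes;                // 100 MB: the file's logical size
long rawSpace  = fixedSize * replication; // 300 MB: what totalSpace should count
{code}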





[jira] [Created] (HDFS-5955) branch-2 fails to compile

2014-02-14 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-5955:
---

 Summary: branch-2 fails to compile
 Key: HDFS-5955
 URL: https://issues.apache.org/jira/browse/HDFS-5955
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build
Affects Versions: 2.4.0
Reporter: Arpit Agarwal
Priority: Critical


I get the following error compiling branch-2.
{code}
Picked up _JAVA_OPTIONS: -Djava.awt.headless=true
[ERROR] COMPILATION ERROR :
[ERROR] 
/Users/aagarwal/src/hdp2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java:[223,20]
 cannot find symbol
symbol  : method isSecure()
location: class org.apache.hadoop.http.HttpConfig
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:2.5.1:compile (default-compile) 
on project hadoop-common: Compilation failure
[ERROR] 
/Users/aagarwal/src/hdp2/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer.java:[223,20]
 cannot find symbol
[ERROR] symbol  : method isSecure()
[ERROR] location: class org.apache.hadoop.http.HttpConfig
{code}
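
The compiler output pins the failure to a call of the form below (a sketch reconstructed from the error message, not the actual source): HttpServer.java line 223 references HttpConfig.isSecure(), which branch-2's HttpConfig does not define.
{code}
// Sketch reconstructed from the compiler output: HttpServer.java:223
// invokes HttpConfig.isSecure(), a method branch-2's HttpConfig lacks,
// so javac reports "cannot find symbol".
boolean secure = HttpConfig.isSecure();
{code}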





[jira] [Resolved] (HDFS-5585) Provide admin commands for data node upgrade

2014-02-14 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-5585.
--

   Resolution: Fixed
Fix Version/s: HDFS-5535 (Rolling upgrades)
 Hadoop Flags: Reviewed

Thanks for the review, Vinay and Brandon. I've committed this to the RU branch.

> Provide admin commands for data node upgrade
> 
>
> Key: HDFS-5585
> URL: https://issues.apache.org/jira/browse/HDFS-5585
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ha, hdfs-client, namenode
>Reporter: Kihwal Lee
>Assignee: Kihwal Lee
> Fix For: HDFS-5535 (Rolling upgrades)
>
> Attachments: HDFS-5585.patch, HDFS-5585.patch, HDFS-5585.patch
>
>
> Several new methods may need to be added to ClientDatanodeProtocol to support 
> querying version, initiating upgrade, etc. The admin CLI needs to be added as 
> well. The primary use case is rolling upgrade, but this can also be used to 
> prepare for a graceful restart of a data node for any reason.
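
For illustration, a minimal sketch of the kind of protocol additions described above (method names hypothetical; the committed patch defines the actual API):
{code}
// Hypothetical sketch only; see the committed HDFS-5585 patch for the
// real method names and signatures.
import java.io.IOException;

public interface ClientDatanodeProtocol {
  /** Query the software version the DataNode is currently running. */
  String getSoftwareVersion() throws IOException;

  /** Ask the DataNode to prepare for an upgrade or graceful restart,
   *  typically driven by a new dfsadmin CLI command. */
  void shutdownDatanode(boolean forUpgrade) throws IOException;
}
{code}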





[jira] [Resolved] (HDFS-5954) Merge Protobuf-based-FSImage code from trunk

2014-02-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-5954.
-

  Resolution: Fixed
   Fix Version/s: HDFS-5535 (Rolling upgrades)
Target Version/s: HDFS-5535 (Rolling upgrades)
Hadoop Flags: Reviewed

+1 for the patch. I committed it to branch HDFS-5535. Thanks for taking care of 
this Jing!

> Merge Protobuf-based-FSImage code from trunk
> 
>
> Key: HDFS-5954
> URL: https://issues.apache.org/jira/browse/HDFS-5954
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode, ha, hdfs-client, namenode
>Reporter: Jing Zhao
>Assignee: Jing Zhao
> Fix For: HDFS-5535 (Rolling upgrades)
>
> Attachments: HDFS-5954.patch
>
>
> After merging the protobuf-based-fsimage code from trunk, we need to fix some 
> compilation errors.





[jira] [Created] (HDFS-5954) Merge Protobuf-based-FSImage code from trunk

2014-02-14 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-5954:
---

 Summary: Merge Protobuf-based-FSImage code from trunk
 Key: HDFS-5954
 URL: https://issues.apache.org/jira/browse/HDFS-5954
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao


After merging the protobuf-based-fsimage code from trunk, we need to fix some 
compilation errors.





[jira] [Resolved] (HDFS-5907) BlockPoolSliceStorage trash to handle block deletions during rolling upgrade

2014-02-14 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDFS-5907.
-

  Resolution: Fixed
   Fix Version/s: HDFS-5535 (Rolling upgrades)
Target Version/s: HDFS-5535 (Rolling upgrades)
Hadoop Flags: Reviewed

Thanks Suresh! I committed this to branch HDFS-5535 as r1568346.

> BlockPoolSliceStorage trash to handle block deletions during rolling upgrade
> 
>
> Key: HDFS-5907
> URL: https://issues.apache.org/jira/browse/HDFS-5907
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: datanode
>Affects Versions: HDFS-5535 (Rolling upgrades)
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Fix For: HDFS-5535 (Rolling upgrades)
>
> Attachments: HDFS-5907.01.patch, HDFS-5907.02.patch, 
> HDFS-5907.04.patch, HDFS-5907.05.patch, HDFS-5907.06.patch, patch-v04-v05.diff
>
>
> DN changes when a rolling upgrade is in progress:
> # DataNode should handle block deletions by moving block files to 'trash'.
> # Block files should be restored to their original locations during a 
> rollback.
> # Purge trash when the rolling upgrade is finalized.
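
A minimal sketch of the deletion-time behavior described in item 1 (helper and method names hypothetical, not necessarily the patch's API):
{code}
// Hypothetical sketch of the trash behavior described above.
// getTrashDirectory() stands in for however BlockPoolSliceStorage
// maps a block file to its trash location.
File removeBlockFile(File blockFile, boolean rollingUpgradeInProgress)
    throws IOException {
  if (rollingUpgradeInProgress) {
    // (1) Move to trash instead of deleting, so that (2) a rollback
    // can restore the file to its original location.
    File trashFile = new File(getTrashDirectory(blockFile), blockFile.getName());
    Files.move(blockFile.toPath(), trashFile.toPath());
    return trashFile;
  }
  // No upgrade in progress: delete immediately. (3) The trash itself
  // is purged when the rolling upgrade is finalized.
  Files.delete(blockFile.toPath());
  return null;
}
{code}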





[jira] [Created] (HDFS-5953) TestBlockReaderFactory fails in trunk

2014-02-14 Thread Ted Yu (JIRA)
Ted Yu created HDFS-5953:


 Summary: TestBlockReaderFactory fails in trunk
 Key: HDFS-5953
 URL: https://issues.apache.org/jira/browse/HDFS-5953
 Project: Hadoop HDFS
  Issue Type: Test
Reporter: Ted Yu


From https://builds.apache.org/job/Hadoop-Hdfs-trunk/1673/testReport/junit/org.apache.hadoop.hdfs/TestBlockReaderFactory/testFallbackFromShortCircuitToUnixDomainTraffic/ :
{code}
java.lang.RuntimeException: Although a UNIX domain socket path is configured as 
/tmp/socks.1392383436573.1418778351/testFallbackFromShortCircuitToUnixDomainTraffic._PORT,
 we cannot start a localDataXceiverServer because libhadoop cannot be loaded.
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:601)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:573)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:769)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:315)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1864)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1764)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1243)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:699)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:359)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:340)
at 
org.apache.hadoop.hdfs.TestBlockReaderFactory.testFallbackFromShortCircuitToUnixDomainTraffic(TestBlockReaderFactory.java:99)
{code}
This test failure can be reproduced locally (on Mac).
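
One plausible remedy (a sketch, not necessarily the committed fix) is to skip the test when libhadoop is unavailable, mirroring the check the DataNode performs before starting a localDataXceiverServer:
{code}
// Sketch: skip instead of erroring out when libhadoop is absent.
// DomainSocket.getLoadingFailureReason() returns null only when the
// native library loaded successfully.
import static org.junit.Assume.assumeTrue;

@Before
public void assumeDomainSocketsUsable() {
  assumeTrue("libhadoop could not be loaded",
      org.apache.hadoop.net.unix.DomainSocket.getLoadingFailureReason() == null);
}
{code}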





Build failed in Jenkins: Hadoop-Hdfs-trunk #1673

2014-02-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1673/

Changes:

[jing9] Move Flatten INode hierarchy jiras (HDFS-5531, HDFS-5285, HDFS-5286, 
HDFS-5537, HDFS-5554, HDFS-5647, HDFS-5632, HDFS-5715, HDFS-5726) to 2.4.0 
section in CHANGES.txt

[brandonli] HDFS-5901

[brandonli] HDFS-5934. New Namenode UI back button doesn't work as expected. 
Contributed by Travis Thompson

[suresh] HADOOP-10249. LdapGroupsMapping should trim ldap password read from 
file. Contributed by Dilli Armugam.

[jlowe] MAPREDUCE-5670. CombineFileRecordReader should report progress when 
moving to the next file. Contributed by Chen He

[brandonli] HDFS-5913. Nfs3Utils#getWccAttr() should check attr parameter 
against null. Contributed by Brandon Li

[vinodkv] YARN-1417. Modified RM to generate container-tokens not at creation 
time, but at allocation time so as to prevent RM
from shelling out containers with expired tokens. Contributed by Omkar Vinit 
Joshi and Jian He.

[arpit] HADOOP-10343. Change info to debug log in LossyRetryInvocationHandler. 
Contributed by Arpit Gupta

[vinodkv] YARN-1676. Modified RM HA handling of user-to-group mappings to be 
available across RM failover by making using of a remote 
configuration-provider. Contributed by Xuan Gong.

[cnauroth] HDFS-5941. add dfs.namenode.secondary.https-address and 
dfs.namenode.secondary.https-address in hdfs-default.xml. Contributed by Haohui 
Mai.

[kihwal] HDFS-5904. TestFileStatus fails intermittently. Contributed by Mit 
Desai.

--
[...truncated 11918 lines...]
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.75 sec - in 
org.apache.hadoop.hdfs.TestFileAppendRestart
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.696 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Tests run: 10, Failures: 0, Errors: 0, Skipped: 10, Time elapsed: 0.202 sec - 
in org.apache.hadoop.hdfs.TestShortCircuitLocalRead
Running org.apache.hadoop.hdfs.TestRestartDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.962 sec - in 
org.apache.hadoop.hdfs.TestRestartDFS
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.769 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestDFSRemove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.334 sec - in 
org.apache.hadoop.hdfs.TestDFSRemove
Running org.apache.hadoop.hdfs.TestHDFSTrash
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.278 sec - in 
org.apache.hadoop.hdfs.TestHDFSTrash
Running org.apache.hadoop.hdfs.TestClientReportBadBlock
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 70.264 sec - in 
org.apache.hadoop.hdfs.TestClientReportBadBlock
Running org.apache.hadoop.hdfs.TestHttpPolicy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.263 sec - in 
org.apache.hadoop.hdfs.TestHttpPolicy
Running org.apache.hadoop.hdfs.TestQuota
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.461 sec - in 
org.apache.hadoop.hdfs.TestQuota
Running org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.312 sec - in 
org.apache.hadoop.hdfs.TestFileLengthOnClusterRestart
Running org.apache.hadoop.hdfs.TestDatanodeRegistration
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.434 sec - in 
org.apache.hadoop.hdfs.TestDatanodeRegistration
Running org.apache.hadoop.hdfs.TestAbandonBlock
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.99 sec - in 
org.apache.hadoop.hdfs.TestAbandonBlock
Running org.apache.hadoop.hdfs.TestDFSShell
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.421 sec - 
in org.apache.hadoop.hdfs.TestDFSShell
Running org.apache.hadoop.hdfs.TestListFilesInDFS
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.577 sec - in 
org.apache.hadoop.hdfs.TestListFilesInDFS
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 0.174 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Running org.apache.hadoop.hdfs.TestPeerCache
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.33 sec - in 
org.apache.hadoop.hdfs.TestPeerCache
Running org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Tests run: 3, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 8.592 sec - in 
org.apache.hadoop.hdfs.TestAppendDifferentChecksum
Running org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.502 sec - in 
org.apache.hadoop.hdfs.TestDFSClientExcludedNodes
Running org.apache.hadoop.hdfs.TestDatanodeBlockScanner
Tests run: 7, Fa

Hadoop-Hdfs-trunk - Build # 1673 - Still Failing

2014-02-14 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/1673/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 12111 lines...]
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.6:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:2.3.2:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] ** FindBugsMojo execute ***
[INFO] canGenerate is false
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS ................................ FAILURE [1:47:41.183s]
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [2.120s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 1:47:44.684s
[INFO] Finished at: Fri Feb 14 13:22:01 UTC 2014
[INFO] Final Memory: 33M/338M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.16:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HDFS-5554
Updating HDFS-5285
Updating HDFS-5286
Updating HDFS-5531
Updating YARN-1417
Updating YARN-1676
Updating HDFS-5632
Updating HDFS-5647
Updating HDFS-5715
Updating HDFS-5726
Updating HADOOP-10343
Updating HADOOP-10249
Updating HDFS-5537
Updating HDFS-5904
Updating HDFS-5941
Updating HDFS-5913
Updating MAPREDUCE-5670
Updating HDFS-5901
Updating HDFS-5934
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestBlockReaderFactory.testFallbackFromShortCircuitToUnixDomainTraffic

Error Message:
Although a UNIX domain socket path is configured as 
/tmp/socks.1392383436573.1418778351/testFallbackFromShortCircuitToUnixDomainTraffic._PORT,
 we cannot start a localDataXceiverServer because libhadoop cannot be loaded.

Stack Trace:
java.lang.RuntimeException: Although a UNIX domain socket path is configured as 
/tmp/socks.1392383436573.1418778351/testFallbackFromShortCircuitToUnixDomainTraffic._PORT,
 we cannot start a localDataXceiverServer because libhadoop cannot be loaded.
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.getDomainPeerServer(DataNode.java:601)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:573)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:769)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:315)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1864)
at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1764)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1243)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:699)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:359)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:340)
at 
org.apache.hadoop.hdfs.TestBlockReaderFactory.testFallbackFromShortCircuitToUnixDomainTraffic(TestBlockReaderFactory.java:99)


FAILED:  
org.apache.hadoop.hdfs.TestBlockReaderFactory.testMultipleWaitersOnShortCircuitCache

Error Message:
Although a UNIX domain socket path is configured as 
/tmp/socks.1392383438848.-530819043/testMultipleWaitersOnShortCircuitCache._PORT,
 we cannot start a localDataXceiverServer because lib