[jira] [Created] (HDFS-9565) TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky

2015-12-16 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-9565:
-

 Summary: 
TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes is flaky
 Key: HDFS-9565
 URL: https://issues.apache.org/jira/browse/HDFS-9565
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: fs, test
Affects Versions: 3.0.0
 Environment: Jenkins
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang
Priority: Minor


TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes occasionally 
fails with the following error:
https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/699/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testLocatedFileStatusStorageIdsTypes/
{noformat}
FAILED:  
org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes

Error Message:
Unexpected num storage ids expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: Unexpected num storage ids expected:<2> but was:<1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at 
org.apache.hadoop.hdfs.TestDistributedFileSystem.testLocatedFileStatusStorageIdsTypes(TestDistributedFileSystem.java:855)

{noformat}

It appears that this test fails due to a race condition: it does not wait for 
file replication to finish before checking the file's status.

This flaky test can be fixed by using DFSTestUtil.waitForReplication().
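As a minimal sketch of the proposed fix (the file name, length, and timeout below 
are illustrative, and this assumes the 
{{DFSTestUtil.waitForReplication(DistributedFileSystem, Path, short, int)}} 
overload), the key change is to block until the NameNode reports the expected 
replication before asserting on storage IDs:
{noformat}
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSTestUtil;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class FlakyFixSketch {
  static void writeAndCheck(DistributedFileSystem fs) throws Exception {
    final Path file = new Path("/testFile");   // illustrative path
    final short repl = 2;                      // expected replication factor
    DFSTestUtil.createFile(fs, file, 1024L, repl, 0xBEEFL);
    // Block until replication actually reaches `repl`, instead of racing
    // the replication pipeline as the flaky test currently does.
    DFSTestUtil.waitForReplication(fs, file, repl, 30000);
    // Now it is safe to list the file's located status and assert that
    // each block reports `repl` storage IDs and storage types.
  }
}
{noformat}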



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk - Build # 2633 - Still Failing

2015-12-16 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2633/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7622 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [07:18 min]
[INFO] Apache Hadoop HDFS  FAILURE [  05:32 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.186 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 05:40 h
[INFO] Finished at: 2015-12-16T20:32:19+00:00
[INFO] Final Memory: 59M/839M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
6 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement.testBlockReplacement

Error Message:
Did not achieve expected replication to expected nodes after more than 2 
msec.  See logs for details.

Stack Trace:
java.util.concurrent.TimeoutException: Did not achieve expected replication to 
expected nodes after more than 2 msec.  See logs for details.
at 
org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement.checkBlocks(TestBlockReplacement.java:297)
at 
org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement.testBlockReplacement(TestBlockReplacement.java:203)


FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling

Error Message:
test timed out after 30 milliseconds

Stack Trace:
java.lang.Exception: test timed out after 30 milliseconds
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at 
org.apache.hadoop.hdfs.DataStreamer.waitAndQueuePacket(DataStreamer.java:805)
at 
org.apache.hadoop.hdfs.DFSOutputStream.enqueueCurrentPacket(DFSOutputStream.java:423)
at 
org.apache.hadoop.hdfs.DFSOutputStream.enqueueCurrentPacketFull(DFSOutputStream.java:432)
at 
org.apache.hadoop.hdfs.DFSOutputStream.writeChunk(DFSOutputStream.java:418)
at 
org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217)
at 

Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Vinod Kumar Vavilapalli
So, the original voting mail mentions we are voting on the release-2.6.3-RC0 tag.

Are we still doing that? What are the RC0.1 and RC1 tags doing then?

+Vinod

> On Dec 16, 2015, at 2:13 AM, Junping Du  wrote:
> 
> Thanks Akira for noticing this. I don't think we can remove these tags, as they 
> should be immutable like branches. I created these duplicated tags because, after 
> I cut RC0, some commits landed on 2.6.3 unexpectedly, and I didn't realize I 
> could still force-push to the original tag. The best thing I could do then was 
> to make them point to the same commit as it is now.
> 
> Thanks,
> 
> Junping
> 
> From: Akira AJISAKA 
> Sent: Wednesday, December 16, 2015 6:41 AM
> To: common-...@hadoop.apache.org; yarn-...@hadoop.apache.org; 
> mapreduce-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 2.6.3 RC0
> 
> Thanks Junping for starting release process.
> I noticed there are duplicated tags:
> 
> * release-2.6.3-RC0
> * release-2.6.3-RC0.1
> * release-2.6.3-RC1
> 
> Could you remove RC0.1 and RC1?
> 
> Regards,
> Akira
> 
> On 12/16/15 10:17, yliu wrote:
>> Thanks Junping, +1.
>> Downloaded the tarball, deployed a small HDFS/YARN cluster, and verified a few
>> basic functionalities.
>> 
>> Regards,
>> Yi Liu
>> 
>> On Wed, Dec 16, 2015 at 6:42 AM, Chang Li  wrote:
>> 
>>> Thanks Junping, + 1(non binding). Downloaded the tarball, compiled and
>>> built locally. Ran some MR jobs successfully.
>>> 
>>> Best,
>>> Chang
>>> 
>>> On Tue, Dec 15, 2015 at 3:17 PM, Wangda Tan  wrote:
>>> 
 Thanks Junping,
 
 +1 (binding). Deploy a cluster locally, run distributed shell and MR job,
 both successfully finished.
 
 Regards,
 Wangda
 
 
 On Tue, Dec 15, 2015 at 12:43 PM, Naganarasimha Garla <
 naganarasimha...@gmail.com> wrote:
 
> Hi Junping,
> 
> +0 (non binding)
> 
> Though everything else is working fine (downloaded the tarball, installed a
> single-node cluster setup, and verified a few MR jobs), submission of an
> Unmanaged AM is bringing the RM down. YARN-4452 has already been raised and
> I am working on it. Will provide the patch for trunk and the 2.6.3 version ASAP.
> 
> Regards,
> 
> + Naga
> 
> 
> 
> Thanks for the work Junping! Downloaded the src tarball. Built locally
> and successfully ran
> in single node mode with a few map reduce jobs. LGTM.
> 
> Li Lu
> 
> On Dec 14, 2015, at 04:23, Junping Du wrote:
> 
> Thanks Sarjeet and Tsuyoshi for reporting this. I just fixed the permission
> issue and the download should work now. Please try to download it again. Thanks!
> 
> 
> Thanks,
> 
> 
> Junping
> 
> 
> 
> From: sarjeet singh
> Sent: Sunday, December 13, 2015 6:44 PM
> To: common-...@hadoop.apache.org
> Cc: mapreduce-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org;
> yarn-...@hadoop.apache.org; junping...@apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 2.6.3 RC0
> 
> I am also getting the same error when downloading tar.gz:
> 
> "You don't have permission to access
> /~junping_du/hadoop-2.6.3-RC0/hadoop-2.6.3-RC0-src.tar.gz
> on this server."
> 
> - Sarjeet Singh
> 
> On Sat, Dec 12, 2015 at 4:17 PM, Tsuyoshi Ozawa wrote:
> Hi Junping,
> 
> Thank you for starting the voting.
> I cannot access the tar.gz file because of permission error. Could you
> check the permission to access the files?
> 
> Forbidden
> You don't have permission to access
> /~junping_du/hadoop-2.6.3-RC0/hadoop-2.6.3-RC0-src.tar.gz
> on this server.
> 
> Thanks,
> - Tsuyoshi
> 
> On Sat, Dec 12, 2015 at 9:16 AM, Junping Du wrote:
> 
> Hi all developers in hadoop community,
>   I've created a release candidate RC0 for Apache Hadoop 2.6.3 (the
> next maintenance release to follow up 2.6.2.) according to the email thread of
> release plan 2.6.3 [1]. Sorry for this RC coming a bit late, as several blocker
> issues were getting committed until yesterday. Below are the details:
> 
> The RC is available for validation at:
> http://people.apache.org/~junping_du/hadoop-2.6.3-RC0/

[jira] [Resolved] (HDFS-3356) When dfs.block.size is configured to 0 the block which is created in rbw is never deleted

2015-12-16 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-3356.
--
Resolution: Won't Fix

This is not an issue since we now have minimum block size enforcement.

> When dfs.block.size is configured to 0 the block which is created in rbw is 
> never deleted
> -
>
> Key: HDFS-3356
> URL: https://issues.apache.org/jira/browse/HDFS-3356
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.0.0-alpha
>Reporter: J.Andreina
>Priority: Minor
>
> dfs.block.size=0
> step 1: start NN and DN
> step 2: write a file "a.txt"
> The block is created in rbw, and since the block size is 0 the write fails and the 
> file is not closed. The DN then sends a block report with the number of blocks as 1.
> Even after the DN has sent the block report and the directory scan has been done, 
> the block is never invalidated.
> In earlier versions, when dfs.block.size was configured to 0, the default value 
> was taken and the write was successful.
> NN logs:
> 
> {noformat}
> 2012-04-24 19:54:27,089 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> processReport: from DatanodeRegistration(.18.40.117, 
> storageID=DS-452047493-xx.xx.xx.xx-50076-1335277451277, infoPort=50075, 
> ipcPort=50077, 
> storageInfo=lv=-40;cid=CID-742fda5f-68f7-40a5-9d52-a2a15facc6af;nsid=797082741;c=0),
>  blocks: 0, processing time: 0 msecs
> 2012-04-24 19:54:29,689 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> NameSystem.allocateBlock: /1._COPYING_. 
> BP-1612285678-xx.xx.xx.xx-1335277427136 
> blk_-262107679534121671_1002{blockUCState=UNDER_CONSTRUCTION, 
> primaryNodeIndex=-1, 
> replicas=[ReplicaUnderConstruction[xx.xx.xx.xx:50076|RBW]]}
> 2012-04-24 19:54:30,113 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* 
> processReport: from DatanodeRegistration(xx.xx.xx.xx, 
> storageID=DS-452047493-xx.xx.xx.xx-50076-1335277451277, infoPort=50075, 
> ipcPort=50077, 
> storageInfo=lv=-40;cid=CID-742fda5f-68f7-40a5-9d52-a2a15facc6af;nsid=797082741;c=0),
>  blocks: 1, processing time: 0 msecs{noformat}
> Exception message while writing a file:
> ===
> {noformat}
> ./hdfs dfs -put hadoop /1
> 12/04/24 19:54:30 WARN hdfs.DFSClient: DataStreamer Exception
> java.io.IOException: BlockSize 0 is smaller than data size.  Offset of packet 
> in block 4745 Aborting file /1._COPYING_
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:467)
> put: BlockSize 0 is smaller than data size.  Offset of packet in block 4745 
> Aborting file /1._COPYING_
> 12/04/24 19:54:30 ERROR hdfs.DFSClient: Failed to close file /1._COPYING_
> java.io.IOException: BlockSize 0 is smaller than data size.  Offset of packet 
> in block 4745 Aborting file /1._COPYING_
>   at 
> org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:467){noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9563) DiskBalancer: Refactor Plan Command

2015-12-16 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-9563:
---

 Summary: DiskBalancer: Refactor Plan Command
 Key: HDFS-9563
 URL: https://issues.apache.org/jira/browse/HDFS-9563
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: 2.8.0
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou


It's quite helpful to:
1) report node information for the top X DataNodes that would benefit most from 
running the disk balancer;
2) report volume-level information for any specific DataNode.

This is done by:
1) reading the cluster info, sorting the DiskBalancerNodes by their 
NodeDataDensity, and printing out the corresponding information;
2) reading the cluster info and printing out volume-level information for the 
requested DataNode.

A rough sketch of step 1 follows.
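For illustration only: {{DiskBalancerNode}} below is a hypothetical stand-in for 
the DiskBalancer data model, with method names guessed from this description 
rather than taken from the actual code. Step 1 then reduces to a sort-and-print:
{noformat}
import java.util.Comparator;
import java.util.List;

/** Hypothetical stand-in for the DiskBalancer node model named above. */
class DiskBalancerNode {
  private final String dataNodeName;
  private final double nodeDataDensity;

  DiskBalancerNode(String dataNodeName, double nodeDataDensity) {
    this.dataNodeName = dataNodeName;
    this.nodeDataDensity = nodeDataDensity;
  }

  String getDataNodeName() { return dataNodeName; }
  double getNodeDataDensity() { return nodeDataDensity; }
}

class PlanReportSketch {
  /** Report the top X nodes that would benefit most from running the balancer. */
  static void reportTopNodes(List<DiskBalancerNode> nodes, int topX) {
    // Higher data density = more imbalanced, so sort descending and take the head.
    nodes.sort(Comparator.comparingDouble(DiskBalancerNode::getNodeDataDensity)
        .reversed());
    nodes.stream().limit(topX).forEach(n ->
        System.out.printf("%s density=%.4f%n",
            n.getDataNodeName(), n.getNodeDataDensity()));
  }
}
{noformat}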



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9564) DiskBalancer: Refactor Execute Command

2015-12-16 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-9564:
---

 Summary: DiskBalancer: Refactor Execute Command
 Key: HDFS-9564
 URL: https://issues.apache.org/jira/browse/HDFS-9564
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: 2.8.0
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou


This is used to track refactoring the Execute command.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk-Java8 - Build # 699 - Failure

2015-12-16 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/699/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7001 lines...]
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [07:01 min]
[INFO] Apache Hadoop HDFS  FAILURE [  04:11 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.060 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:18 h
[INFO] Finished at: 2015-12-16T19:08:08+00:00
[INFO] Final Memory: 61M/766M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-12192
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
3 tests failed.
FAILED:  
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.testUpgradeFromRel1BBWImage

Error Message:
Cannot obtain block length for 
LocatedBlock{BP-1470228770-67.195.81.148-1450280818905:blk_7162739548153522810_1020;
 getBlockSize()=1024; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:55833,DS-aa19dc40-3f02-45cf-91ec-d7bd1aa16a22,DISK]]}

Stack Trace:
java.io.IOException: Cannot obtain block length for 
LocatedBlock{BP-1470228770-67.195.81.148-1450280818905:blk_7162739548153522810_1020;
 getBlockSize()=1024; corrupt=false; offset=0; 
locs=[DatanodeInfoWithStorage[127.0.0.1:55833,DS-aa19dc40-3f02-45cf-91ec-d7bd1aa16a22,DISK]]}
at 
org.apache.hadoop.hdfs.DFSInputStream.readBlockLength(DFSInputStream.java:399)
at 
org.apache.hadoop.hdfs.DFSInputStream.fetchLocatedBlocksAndGetLastBlockLength(DFSInputStream.java:343)
at 
org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:276)
at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:265)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1046)
at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1011)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.dfsOpenFileWithRetries(TestDFSUpgradeFromImage.java:177)
at 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage.verifyDir(TestDFSUpgradeFromImage.java:213)
at 

Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Tsuyoshi Ozawa
+1 (binding)

- verified checksums
- built Spark and Tez with -Dhadoop.version=2.6.3. For Tez, ran the
tests and all of them passed.

Thanks,
- Tsuyoshi

On Thu, Dec 17, 2015 at 1:30 AM, Sangjin Lee  wrote:
> +1 (non-binding)
>
> - downloaded source and binary and verified the signatures (although I
> didn't connect with Junping via web of trust)
> - started a pseudo-distributed cluster and ran test jobs
> - browsed the RM and NN UI
> - looked through the daemon logs
>
> Thanks Junping.
>
> Sangjin
>
> On Wed, Dec 16, 2015 at 5:11 AM, Brahma Reddy Battula <
> brahmareddy.batt...@huawei.com> wrote:
>
>> +1 (non-binding)
>>
>> -- Downloaded both source and binary tarballs successfully.
>> --Set up a pseudo-distributed cluster and Distributed HA Cluster
>> --Ran Several jobs Slive,Terasort and pi.
>> --All are working fine.
>>
>>
>> Thanks & Regards
>>  Brahma Reddy Battula
>> 
>> From: Steve Loughran [ste...@hortonworks.com]
>> Sent: Wednesday, December 16, 2015 5:58 PM
>> To: mapreduce-...@hadoop.apache.org
>> Cc: Hadoop Common; hdfs-dev@hadoop.apache.org; yarn-...@hadoop.apache.org;
>> junping...@apache.org
>> Subject: Re: [VOTE] Release Apache Hadoop 2.6.3 RC0
>>
>> +1, binding
>>
>>
>> Did a build and test with slider with the build set to
>> -Dhadoop.version=2.6.3
>>
>> this does a D/L and test from the staging repo. All artifacts were
>> located, and the tests completed
>>
>> > On 12 Dec 2015, at 00:16, Junping Du  wrote:
>> >
>> >
>> > Hi all developers in hadoop community,
>> >   I've created a release candidate RC0 for Apache Hadoop 2.6.3 (the next
>> maintenance release to follow up 2.6.2.) according to email thread of
>> release plan 2.6.3 [1]. Sorry for this RC coming a bit late as several
>> blocker issues were getting committed until yesterday. Below are the details:
>> >
>> > The RC is available for validation at:
>> > http://people.apache.org/~junping_du/hadoop-2.6.3-RC0/
>> >
>> > The RC tag in git is: release-2.6.3-RC0
>> >
>> > The maven artifacts are staged via repository.apache.org at:
>> > https://repository.apache.org/content/repositories/orgapachehadoop-1025/
>> >
>> > You can find my public key at:
>> > http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
>> >
>> > Please try the release and vote. The vote will run for the usual 5 days.
>> >
>> > Thanks and happy weekend!
>> >
>> >
>> > Cheers,
>> >
>> > Junping
>> >
>> >
>> > [1]: 2.6.3 release plan: http://markmail.org/thread/nc2jogbgni37vu6y
>> >
>>
>>


[jira] [Resolved] (HDFS-9568) Support NFSv4 interface to HDFS

2015-12-16 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge resolved HDFS-9568.
--
Resolution: Duplicate

> Support NFSv4 interface to HDFS
> ---
>
> Key: HDFS-9568
> URL: https://issues.apache.org/jira/browse/HDFS-9568
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: nfs
>Reporter: John Zhuge
>Assignee: John Zhuge
>
> [HDFS-4750|https://issues.apache.org/jira/browse/HDFS-4750] added an NFSv3 
> interface to HDFS. As NFSv4 client support in many OSes has matured, we can 
> add an NFSv4 interface to HDFS. There are some NFSv4 features quite suitable to 
> Hadoop's distributed environment, in addition to simplified configuration and 
> added security.
> This JIRA is to track NFSv4 support for accessing HDFS.
> We will upload the design doc and then the initial implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9567) LlapServiceDriver can fail if only the packaged logger config is present

2015-12-16 Thread Sergey Shelukhin (JIRA)
Sergey Shelukhin created HDFS-9567:
--

 Summary: LlapServiceDriver can fail if only the packaged logger 
config is present
 Key: HDFS-9567
 URL: https://issues.apache.org/jira/browse/HDFS-9567
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Sergey Shelukhin


I was incrementally updating my setup on some VM and didn't have the logger 
config file, so the packaged one was apparently picked up, which caused this:
{noformat}
java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative path 
in absolute URI: 
jar:file:/home/vagrant/llap/apache-hive-2.0.0-SNAPSHOT-bin/lib/hive-llap-server-2.0.0-SNAPSHOT.jar!/llap-daemon-log4j2.properties
at org.apache.hadoop.fs.Path.initialize(Path.java:205)
at org.apache.hadoop.fs.Path.<init>(Path.java:171)
at 
org.apache.hadoop.hive.llap.cli.LlapServiceDriver.run(LlapServiceDriver.java:234)
at 
org.apache.hadoop.hive.llap.cli.LlapServiceDriver.main(LlapServiceDriver.java:58)
Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
jar:file:/home/vagrant/llap/apache-hive-2.0.0-SNAPSHOT-bin/lib/hive-llap-server-2.0.0-SNAPSHOT.jar!/llap-daemon-log4j2.properties
at java.net.URI.checkPath(URI.java:1823)
at java.net.URI.<init>(URI.java:745)
at org.apache.hadoop.fs.Path.initialize(Path.java:202)
... 3 more
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-9567) LlapServiceDriver can fail if only the packaged logger config is present

2015-12-16 Thread Sergey Shelukhin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin resolved HDFS-9567.

Resolution: Invalid

Wrong project

> LlapServiceDriver can fail if only the packaged logger config is present
> 
>
> Key: HDFS-9567
> URL: https://issues.apache.org/jira/browse/HDFS-9567
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Sergey Shelukhin
>
> I was incrementally updating my setup on some VM and didn't have the logger 
> config file, so the packaged one was apparently picked up, which caused this:
> {noformat}
> java.lang.IllegalArgumentException: java.net.URISyntaxException: Relative 
> path in absolute URI: 
> jar:file:/home/vagrant/llap/apache-hive-2.0.0-SNAPSHOT-bin/lib/hive-llap-server-2.0.0-SNAPSHOT.jar!/llap-daemon-log4j2.properties
>   at org.apache.hadoop.fs.Path.initialize(Path.java:205)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:171)
>   at 
> org.apache.hadoop.hive.llap.cli.LlapServiceDriver.run(LlapServiceDriver.java:234)
>   at 
> org.apache.hadoop.hive.llap.cli.LlapServiceDriver.main(LlapServiceDriver.java:58)
> Caused by: java.net.URISyntaxException: Relative path in absolute URI: 
> jar:file:/home/vagrant/llap/apache-hive-2.0.0-SNAPSHOT-bin/lib/hive-llap-server-2.0.0-SNAPSHOT.jar!/llap-daemon-log4j2.properties
>   at java.net.URI.checkPath(URI.java:1823)
>   at java.net.URI.<init>(URI.java:745)
>   at org.apache.hadoop.fs.Path.initialize(Path.java:202)
>   ... 3 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-9568) Support NFSv4 interface to HDFS

2015-12-16 Thread John Zhuge (JIRA)
John Zhuge created HDFS-9568:


 Summary: Support NFSv4 interface to HDFS
 Key: HDFS-9568
 URL: https://issues.apache.org/jira/browse/HDFS-9568
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Reporter: John Zhuge
Assignee: John Zhuge


[HDFS-4750|https://issues.apache.org/jira/browse/HDFS-4750] added NFSv3 
interface to HDFS. As NFSv4 client support in many OSes has matured, we can 
addd NFSv4 interface to HDFS. There are some NFSv4 features quite suitable in 
Hadoop's distributed environment in addition to simplified configuration and 
added security.
This JIRA is to track NFSv4 support to access HDFS.
We will upload the design doc and then the initial implementation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Vinod Kumar Vavilapalli
+1 (binding) for the RC except for my question below about the tag.

I checked 2.6.3-RC0, based on my check-list:

- Signatures and message digests all are good in general.
- The top level full LICENSE, NOTICE and README for the source artifacts are 
good.
- CHANGES.txt for common, hdfs and mapred are correctly located.
- Able to build the tars out of the source tarball using JDK 7. (Don’t have a 
JDK 6 at hand.)

Testing: All testing on single node, unsecured, default mode.

- Started HDFS daemons successfully , created directories.
- Successfully started YARN daemons - ResourceManager, NodeManager and Timeline 
Service
- Successfully started MapReduce history server.
- Ran DistributedShell as a native YARN app.
- Ran wordcount, pi, random writer, sort, grep and they all pass just fine.
- Navigated through the RM, NM and Timeline UIs to make sure the views are 
working well.
- Navigated through the MapReduce UI to make sure the views are working well.

Thanks,
+Vinod


> On Dec 16, 2015, at 11:32 AM, Vinod Kumar Vavilapalli  
> wrote:
> 
> So, the original voting mail mentions we are voting on release-2.6.3-RC0 tag.
> 
> Are we still doing that? What are the RC0.1 and RC1 tags doing then?
> 
> +Vinod
> 
>> On Dec 16, 2015, at 2:13 AM, Junping Du  wrote:
>> 
>> Thanks Akira for noticing this. I don't think we can remove these tags, as they 
>> should be immutable like branches. I created these duplicated tags because, after 
>> I cut RC0, some commits landed on 2.6.3 unexpectedly, and I didn't realize I 
>> could still force-push to the original tag. The best thing I could do then was 
>> to make them point to the same commit as it is now.
>> 
>> Thanks,
>> 
>> Junping
>> 
>> From: Akira AJISAKA 
>> Sent: Wednesday, December 16, 2015 6:41 AM
>> To: common-...@hadoop.apache.org; yarn-...@hadoop.apache.org; 
>> mapreduce-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org
>> Subject: Re: [VOTE] Release Apache Hadoop 2.6.3 RC0
>> 
>> Thanks Junping for starting release process.
>> I noticed there are duplicated tags:
>> 
>> * release-2.6.3-RC0
>> * release-2.6.3-RC0.1
>> * release-2.6.3-RC1
>> 
>> Could you remove RC0.1 and RC1?
>> 
>> Regards,
>> Akira
>> 
>> On 12/16/15 10:17, yliu wrote:
>>> Thanks Junping, +1.
>>> Downloaded the tarball, deployed a small HDFS/YARN cluster, and verified a few
>>> basic functionalities.
>>> 
>>> Regards,
>>> Yi Liu
>>> 
>>> On Wed, Dec 16, 2015 at 6:42 AM, Chang Li  wrote:
>>> 
 Thanks Junping, + 1(non binding). Downloaded the tarball, compiled and
 built locally. Ran some MR jobs successfully.
 
 Best,
 Chang
 
 On Tue, Dec 15, 2015 at 3:17 PM, Wangda Tan  wrote:
 
> Thanks Junping,
> 
> +1 (binding). Deploy a cluster locally, run distributed shell and MR job,
> both successfully finished.
> 
> Regards,
> Wangda
> 
> 
> On Tue, Dec 15, 2015 at 12:43 PM, Naganarasimha Garla <
> naganarasimha...@gmail.com> wrote:
> 
>> Hi Junping,
>> 
>> +0 (non binding)
>> 
>> Though everything else is working fine (downloaded the tarball, installed a
>> single-node cluster setup, and verified a few MR jobs), submission of an
>> Unmanaged AM is bringing the RM down. YARN-4452 has already been raised and
>> I am working on it. Will provide the patch for trunk and the 2.6.3 version ASAP.
>> 
>> Regards,
>> 
>> + Naga
>> 
>> 
>> 
>> Thanks for the work Junping! Downloaded the src tarball. Built locally
>> and successfully ran
>> in single node mode with a few map reduce jobs. LGTM.
>> 
>> Li Lu
>> 
>> On Dec 14, 2015, at 04:23, Junping Du wrote:
>> 
>> Thanks Sarjeet and Tsuyoshi for reporting this. I just fixed the permission
>> issue and the download should work now. Please try to download it again. Thanks!
>> 
>> 
>> Thanks,
>> 
>> 
>> Junping
>> 
>> 
>> 
>> From: sarjeet singh
>> Sent: Sunday, December 13, 2015 6:44 PM
>> To: common-...@hadoop.apache.org
>> Cc: mapreduce-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org;
>> yarn-...@hadoop.apache.org; junping...@apache.org
>> Subject: Re: [VOTE] Release Apache Hadoop 2.6.3 RC0
>> 
>> I am also getting the same error when downloading tar.gz:
>> 
>> "You don't have permission to access
>> /~junping_du/hadoop-2.6.3-RC0/hadoop-2.6.3-RC0-src.tar.gz
>> on this server."
>> 

Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Jason Lowe
+1 (binding)
- Verified signatures and digests
- Successfully built from source with native code support
- Deployed to a single-node cluster and ran some test jobs
Jason

From: Junping Du
To: Hadoop Common; hdfs-dev@hadoop.apache.org; mapreduce-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org
Cc: junping...@apache.org
Sent: Friday, December 11, 2015 6:16 PM
Subject: [VOTE] Release Apache Hadoop 2.6.3 RC0

Hi all developers in hadoop community,
  I've created a release candidate RC0 for Apache Hadoop 2.6.3 (the next 
maintenance release to follow up 2.6.2.) according to email thread of release 
plan 2.6.3 [1]. Sorry for this RC coming a bit late as several blocker issues 
were getting committed until yesterday. Below are the details:

The RC is available for validation at:
http://people.apache.org/~junping_du/hadoop-2.6.3-RC0/

The RC tag in git is: release-2.6.3-RC0

The maven artifacts are staged via repository.apache.org at:
https://repository.apache.org/content/repositories/orgapachehadoop-1025/

You can find my public key at:
http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS

Please try the release and vote. The vote will run for the usual 5 days.

Thanks and happy weekend!


Cheers,

Junping


[1]: 2.6.3 release plan: http://markmail.org/thread/nc2jogbgni37vu6y


 

Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Mingliang Liu
+1 (non-binding)

1. Download the pre-built tar and check the validity
2. Configure and start a pseudo-distributed cluster
3. Run example grep MapReduce job locally
4. Operate the HDFS copying files from/to local directory
5. Check execution logs

All good. Thanks.

L

> On Dec 11, 2015, at 4:16 PM, Junping Du  wrote:
> 
> 
> Hi all developers in hadoop community,
>   I've created a release candidate RC0 for Apache Hadoop 2.6.3 (the next 
> maintenance release to follow up 2.6.2.) according to email thread of release 
> plan 2.6.3 [1]. Sorry for this RC coming a bit late as several blocker issues 
> were getting committed until yesterday. Below are the details:
> 
> The RC is available for validation at:
> http://people.apache.org/~junping_du/hadoop-2.6.3-RC0/
> 
> The RC tag in git is: release-2.6.3-RC0
> 
> The maven artifacts are staged via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1025/
> 
> You can find my public key at:
> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
> 
> Please try the release and vote. The vote will run for the usual 5 days.
> 
> Thanks and happy weekend!
> 
> 
> Cheers,
> 
> Junping
> 
> 
> [1]: 2.6.3 release plan: http://markmail.org/thread/nc2jogbgni37vu6y
> 



Re: [VOTE] Release Apache Hadoop 2.7.2 RC0

2015-12-16 Thread Vinod Kumar Vavilapalli
The last of the blockers went in late last week.

Re-spinning the RC now.

Thanks
+Vinod

> On Nov 13, 2015, at 10:26 AM, Vinod Kumar Vavilapalli  
> wrote:
> 
> Thanks for reporting this Jason!
> 
> Everyone, I am canceling this RC given the feedback, we will go again after 
> addressing the open issues.
> 
> Thanks
> +Vinod
> 
>> On Nov 13, 2015, at 7:57 AM, Jason Lowe wrote:
>> 
>> -1 (binding)
>> 
>> Ran into public localization issues and filed YARN-4354. We need that resolved 
>> before the release is ready.  We will either need a timely fix or may have 
>> to revert YARN-2902 to unblock the release if my root-cause analysis is 
>> correct.  I'll dig into this more today.
>> 
>> Jason
>> 
>> From: Vinod Kumar Vavilapalli
>> To: common-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org; 
>> yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org
>> Cc: vino...@apache.org
>> Subject: [VOTE] Release Apache Hadoop 2.7.2 RC0
>> 
>> Hi all,
>> 
>> 
>> I've created a release candidate RC0 for Apache Hadoop 2.7.2.
>> 
>> 
>> As discussed before, this is the next maintenance release to follow up
>> 2.7.1.
>> 
>> 
>> The RC is available for validation at:
>> 
>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC0/
>> 
>> 
>> The RC tag in git is: release-2.7.2-RC0
>> 
>> 
>> The maven artifacts are available via repository.apache.org at
>> 
>> https://repository.apache.org/content/repositories/orgapachehadoop-1023/
>> 
>> 
>> As you may have noted, an unusually long 2.6.3 release caused 2.7.2 to slip
>> by quite a bit. This release's related discussion threads are linked below:
>> [1] and [2].
>> 
>> 
>> Please try the release and vote; the vote will run for the usual 5 days.
>> 
>> 
>> Thanks,
>> 
>> Vinod
>> 
>> 
>> [1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes
>> 
>> [2]: Planning Apache Hadoop 2.7.2
>> http://markmail.org/message/iktqss2qdeykgpqk
>> 
> 



Hadoop-Hdfs-trunk-Java8 - Build # 700 - Still Failing

2015-12-16 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/700/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 10481 lines...]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [10:30 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:11 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.153 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:22 h
[INFO] Finished at: 2015-12-16T23:41:16+00:00
[INFO] Final Memory: 69M/1169M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: 
org.apache.maven.surefire.booter.SurefireBooterForkException: Error occurred in 
starting fork, check output in log -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
No tests ran.

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #700

2015-12-16 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/700/

Changes:

[junping_du] YARN-4452. NPE when submit Unmanaged application. Contributed by

[cnauroth] HDFS-9557. Reduce object allocation in PB conversion. Contributed by

[sseth] YARN-4207. Add a non-judgemental YARN app completion status. Contributed

--
[...truncated 10289 lines...]
at 
org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:291)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:126)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:822)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:675)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:884)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:863)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1565)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
at 
org.apache.hadoop.hdfs.server.namenode.TestAuditLogs.setupCluster(TestAuditLogs.java:121)

testAuditDenied[1](org.apache.hadoop.hdfs.server.namenode.TestAuditLogs)  Time 
elapsed: 2.124 sec  <<< ERROR!
java.lang.NullPointerException: null
at org.apache.hadoop.hdfs.DFSTestUtil.cleanup(DFSTestUtil.java:776)
at 
org.apache.hadoop.hdfs.server.namenode.TestAuditLogs.teardownCluster(TestAuditLogs.java:139)

testAuditWebHdfsOpen[1](org.apache.hadoop.hdfs.server.namenode.TestAuditLogs)  
Time elapsed: 0.145 sec  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1116)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1080)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:159)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1072)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:370)
at 
org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:228)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1005)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
at 
org.apache.hadoop.hdfs.server.namenode.TestAuditLogs.setupCluster(TestAuditLogs.java:121)

testAuditWebHdfsOpen[1](org.apache.hadoop.hdfs.server.namenode.TestAuditLogs)  
Time elapsed: 0.145 sec  <<< ERROR!
java.lang.NullPointerException: null
at org.apache.hadoop.hdfs.DFSTestUtil.cleanup(DFSTestUtil.java:776)
at 
org.apache.hadoop.hdfs.server.namenode.TestAuditLogs.teardownCluster(TestAuditLogs.java:139)

testAuditWebHdfsStat[1](org.apache.hadoop.hdfs.server.namenode.TestAuditLogs)  
Time elapsed: 0.122 sec  <<< ERROR!
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:713)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1116)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1080)
at 
org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:159)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1072)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:370)
at 
org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:228)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1005)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:891)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:823)
at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:482)
at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:441)
 

[jira] [Created] (HDFS-9566) Remove expensive getStorages method

2015-12-16 Thread Daryn Sharp (JIRA)
Daryn Sharp created HDFS-9566:
-

 Summary: Remove expensive getStorages method
 Key: HDFS-9566
 URL: https://issues.apache.org/jira/browse/HDFS-9566
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0, 2.8.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp


HDFS-5318 added a {{BlocksMap#getStorages(Block, State)}} method that is based on 
iterables and predicates.  The method is very expensive compared to a simple 
comparison/continue loop.
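For illustration only (types simplified; this is not the actual BlocksMap code), 
the contrast between the two styles looks like:
{noformat}
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

class GetStoragesSketch {
  enum State { NORMAL, READ_ONLY, FAILED }

  static class Storage {
    final State state;
    Storage(State state) { this.state = state; }
  }

  // Iterable/predicate style: each call allocates a predicate object (and, in
  // the real code, wrapper iterables and iterators) just to filter by state.
  static List<Storage> getStoragesWithPredicate(List<Storage> all, State wanted) {
    Predicate<Storage> matches = s -> s.state == wanted;
    List<Storage> out = new ArrayList<>();
    for (Storage s : all) {
      if (matches.test(s)) {
        out.add(s);
      }
    }
    return out;
  }

  // Simple comparison/continue style: no per-call helper objects.
  static List<Storage> getStoragesDirect(List<Storage> all, State wanted) {
    List<Storage> out = new ArrayList<>();
    for (Storage s : all) {
      if (s.state != wanted) {
        continue;
      }
      out.add(s);
    }
    return out;
  }
}
{noformat}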



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.7.2 RC0

2015-12-16 Thread Vinod Kumar Vavilapalli
Seeing as this came up very late, I am leaning towards dropping this from 
2.7.2.

That said, I don’t see any reason why this shouldn’t be in 2.8.0 and 2.7.3. Set 
the target-versions accordingly on JIRA. If you agree, appreciate backport help 
to those branches.

Thanks
+Vinod

> On Dec 15, 2015, at 12:59 AM, Konstantin Shvachko  
> wrote:
> 
> Sorry for bringing this up late.
> I think we should pick up HDFS-9516 for this release.
> Rather critical bug fix, but up to you, Vinod.
> 
> Thanks,
> --Konst
> 
> On Wed, Nov 11, 2015 at 8:31 PM, Vinod Kumar Vavilapalli wrote:
> 
>> Hi all,
>> 
>> 
>> I've created a release candidate RC0 for Apache Hadoop 2.7.2.
>> 
>> 
>> As discussed before, this is the next maintenance release to follow up
>> 2.7.1.
>> 
>> 
>> The RC is available for validation at:
>> 
>> http://people.apache.org/~vinodkv/hadoop-2.7.2-RC0/
>> 
>> 
>> The RC tag in git is: release-2.7.2-RC0
>> 
>> 
>> The maven artifacts are available via repository.apache.org at
>> 
>> https://repository.apache.org/content/repositories/orgapachehadoop-1023/
>> 
>> 
>> As you may have noted, an unusually long 2.6.3 release caused 2.7.2 to slip
>> by quite a bit. This release's related discussion threads are linked below:
>> [1] and [2].
>> 
>> 
>> Please try the release and vote; the vote will run for the usual 5 days.
>> 
>> 
>> Thanks,
>> 
>> Vinod
>> 
>> 
>> [1]: 2.7.2 release plan: http://markmail.org/message/oozq3gvd4nhzsaes
>> 
>> [2]: Planning Apache Hadoop 2.7.2
>> http://markmail.org/message/iktqss2qdeykgpqk
>> 



[jira] [Resolved] (HDFS-9551) Random VolumeChoosingPolicy

2015-12-16 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-9551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-9551.
---
Resolution: Won't Fix

Based on discussion here, I don't want to add the maintenance burden of a new 
volume policy unless there are some demonstrated benefits. Thanks for the 
interest though!

> Random VolumeChoosingPolicy
> ---
>
> Key: HDFS-9551
> URL: https://issues.apache.org/jira/browse/HDFS-9551
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Affects Versions: 2.7.1
>Reporter: BELUGA BEHR
>Priority: Minor
> Attachments: RandomVolumeChoosingPolicy.java, 
> TestRandomVolumeChoosingPolicy.java
>
>
> Please find attached a new implementation of VolumeChoosingPolicy.  This 
> implementation chooses volumes at random to place blocks.  It is thread-safe 
> and unsynchronized, so there is less thread contention.
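As a minimal sketch of the idea (the signature below is simplified; the real 
{{VolumeChoosingPolicy}} works on {{FsVolumeSpi}} instances, and a production 
policy would also have to skip volumes without enough free space for the replica):
{noformat}
import java.io.IOException;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

class RandomVolumeChoosingPolicySketch<V> {
  // ThreadLocalRandom provides thread safety without synchronization,
  // matching the "thread-safe and unsynchronized" claim above.
  V chooseVolume(List<V> volumes, long blockSize) throws IOException {
    if (volumes == null || volumes.isEmpty()) {
      throw new IOException("No volumes available");
    }
    return volumes.get(ThreadLocalRandom.current().nextInt(volumes.size()));
  }
}
{noformat}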



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Xuan Gong
+1 (binding),

Built and deployed the cluster from source code.
Ran a few example jobs and they passed successfully.

Xuan Gong

> On Dec 16, 2015, at 4:07 PM, Arpit Agarwal  wrote:
> 
> +1 (binding)
> 
> - Verified signatures for source and binary distributions
> - Built jars from source with java 1.7.0_79
> - Deployed single-node pseudo-cluster
> - Ran example map reduce jobs
> - Ran hdfs admin commands, verified NN web UI shows expected usages
> 
> 
> 
> On 12/11/15, 4:16 PM, "Junping Du"  wrote:
> 
>> 
>> Hi all developers in hadoop community,
>>  I've created a release candidate RC0 for Apache Hadoop 2.6.3 (the next 
>> maintenance release to follow up 2.6.2.) according to email thread of 
>> release plan 2.6.3 [1]. Sorry for this RC coming a bit late as several 
>> blocker issues were getting committed until yesterday. Below are the details:
>> 
>> The RC is available for validation at:
>> http://people.apache.org/~junping_du/hadoop-2.6.3-RC0/
>> 
>> The RC tag in git is: release-2.6.3-RC0
>> 
>> The maven artifacts are staged via repository.apache.org at:
>> https://repository.apache.org/content/repositories/orgapachehadoop-1025/
>> 
>> You can find my public key at:
>> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
>> 
>> Please try the release and vote. The vote will run for the usual 5 days.
>> 
>> Thanks and happy weekend!
>> 
>> 
>> Cheers,
>> 
>> Junping
>> 
>> 
>> [1]: 2.6.3 release plan: http://markmail.org/thread/nc2jogbgni37vu6y
>> 



Build failed in Jenkins: Hadoop-Hdfs-trunk #2635

2015-12-16 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2635/

Changes:

[wangda] YARN-4293. ResourceUtilization should be a part of yarn node CLI. 
(Sunil

[wangda] YARN-4225. Add preemption status to yarn queue -status for capacity

[wangda] YARN-4416. Deadlock due to synchronised get Methods in AbstractCSQueue.

[vinodkv] HADOOP-12415. Fixed pom files to correctly include compile-time

[vinodkv] Revert "MAPREDUCE-6566. Add retry support to mapreduce CLI tool.

[jlowe] YARN-4461. Redundant nodeLocalityDelay log in LeafQueue. Contributed by

[xgong] MAPREDUCE-6566. Add retry support to mapreduce CLI tool. Contributed by

[wang] HDFS-9300. TestDirectoryScanner.testThrottle() is still a little flakey.

--
[...truncated 6174 lines...]
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.929 sec - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestDFSOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.497 sec - in 
org.apache.hadoop.hdfs.TestDFSOutputStream
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.647 sec - in 
org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestSafeModeWithStripedFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.306 sec - in 
org.apache.hadoop.hdfs.TestSafeModeWithStripedFile
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.971 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090
Running org.apache.hadoop.hdfs.TestSeekBug
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.401 sec - in 
org.apache.hadoop.hdfs.TestSeekBug
Running org.apache.hadoop.hdfs.TestDatanodeReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.727 sec - in 
org.apache.hadoop.hdfs.TestDatanodeReport
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.649 sec - 
in org.apache.hadoop.hdfs.TestDistributedFileSystem
Running org.apache.hadoop.hdfs.security.TestDelegationToken
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.128 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationToken
Running org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.986 sec - in 
org.apache.hadoop.hdfs.security.token.block.TestBlockToken
Running org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.241 sec - in 
org.apache.hadoop.hdfs.security.TestDelegationTokenForProxyUser
Running org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.831 sec - in 
org.apache.hadoop.hdfs.security.TestClientProtocolWithDelegationToken
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.306 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110
Running org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.409 sec - 
in org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.536 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestErasureCodingPolicyWithSnapshot
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.603 sec - in 
org.apache.hadoop.hdfs.TestErasureCodingPolicyWithSnapshot
Running org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.091 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitRead
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.762 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070
Running org.apache.hadoop.hdfs.TestAclsEndToEnd
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.844 sec - in 
org.apache.hadoop.hdfs.TestAclsEndToEnd
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.914 sec - 
in org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020
Running org.apache.hadoop.hdfs.TestFileStatus
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.83 sec - in 
org.apache.hadoop.hdfs.TestFileStatus
Running org.apache.hadoop.hdfs.TestDFSAddressConfig
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.981 sec - in 
org.apache.hadoop.hdfs.TestDFSAddressConfig
Running 

Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Arpit Agarwal
+1 (binding)

- Verified signatures for source and binary distributions
- Built jars from source with java 1.7.0_79
- Deployed single-node pseudo-cluster
- Ran example map reduce jobs
- Ran hdfs admin commands, verified NN web UI shows expected usages



On 12/11/15, 4:16 PM, "Junping Du"  wrote:

>
>Hi all developers in hadoop community,
>   I've created a release candidate RC0 for Apache Hadoop 2.6.3 (the next 
> maintenance release to follow up 2.6.2.) according to email thread of release 
> plan 2.6.3 [1]. Sorry for this RC coming a bit late as several blocker issues 
> were getting committed until yesterday. Below are the details:
>
>The RC is available for validation at:
>http://people.apache.org/~junping_du/hadoop-2.6.3-RC0/
>
>The RC tag in git is: release-2.6.3-RC0
>
>The maven artifacts are staged via repository.apache.org at:
>https://repository.apache.org/content/repositories/orgapachehadoop-1025/
>
>You can find my public key at:
>http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
>
>Please try the release and vote. The vote will run for the usual 5 days.
>
>Thanks and happy weekend!
>
>
>Cheers,
>
>Junping
>
>
>[1]: 2.6.3 release plan: http://markmail.org/thread/nc2jogbgni37vu6y
>


TestDirectoryScanner.testThrottle() Failures

2015-12-16 Thread Daniel Templeton
Would someone please review and commit HDFS-9300 so that the 
testThrottle() test will stop failing? It's a 2-line patch.


Thanks,
Daniel


Hadoop-Hdfs-trunk-Java8 - Build # 701 - Still Failing

2015-12-16 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/701/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7029 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [05:07 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:33 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.065 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:38 h
[INFO] Finished at: 2015-12-17T03:52:02+00:00
[INFO] Final Memory: 56M/674M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
5 tests failed.
FAILED:  org.apache.hadoop.hdfs.TestDecommission.testDecommissionOnStandby

Error Message:
org/apache/hadoop/ha/HAServiceProtocol$StateChangeRequestInfo

Stack Trace:
java.lang.NoClassDefFoundError: 
org/apache/hadoop/ha/HAServiceProtocol$StateChangeRequestInfo
at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at 
org.apache.hadoop.hdfs.MiniDFSCluster.transitionToActive(MiniDFSCluster.java:2398)
at 
org.apache.hadoop.hdfs.TestDecommission.testDecommissionOnStandby(TestDecommission.java:468)


FAILED:  
org.apache.hadoop.hdfs.TestDecommission.testDecommissionWithNamenodeRestart

Error Message:
org/apache/hadoop/io/retry/Idempotent

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/io/retry/Idempotent
at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at 

Hadoop-Hdfs-trunk - Build # 2634 - Still Failing

2015-12-16 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2634/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6357 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:54 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:16 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.090 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:20 h
[INFO] Finished at: 2015-12-17T00:11:33+00:00
[INFO] Final Memory: 57M/742M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling

Error Message:
Throttle is too permissive

Stack Trace:
java.lang.AssertionError: Throttle is too permissive
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner.testThrottling(TestDirectoryScanner.java:613)




Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Junping Du
Hey Vinod,
Yes. We are voting on the RC0 tag, and all related build bits are for 
RC0 only. 
Do we need to do anything special about the RC0.1 or RC1 tags? They are just 
duplicates of RC0, pointing to the same commit as I mentioned below. I assume 
not, since we will create the final tag for this release after the vote ends.

Thanks,

Junping

From: Vinod Kumar Vavilapalli 
Sent: Wednesday, December 16, 2015 7:32 PM
To: common-...@hadoop.apache.org
Cc: yarn-...@hadoop.apache.org; mapreduce-...@hadoop.apache.org; 
hdfs-dev@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

So, the original voting mail mentions we are voting on the release-2.6.3-RC0 tag.

Are we still doing that? What are the RC0.1 and RC1 tags doing then?

+Vinod

> On Dec 16, 2015, at 2:13 AM, Junping Du  wrote:
>
> Thanks Akira for noticing this. I don't think we can remove these tags, as they 
> should be as immutable as branches. I created these duplicated tags because, after I 
> cut RC0, some commits landed on 2.6.3 unexpectedly, and I didn't realize I 
> could still force-push to the original tag. The best thing I could do then 
> was to make them all point to the same commit, as they do now.
>
> Thanks,
>
> Junping
> 
> From: Akira AJISAKA 
> Sent: Wednesday, December 16, 2015 6:41 AM
> To: common-...@hadoop.apache.org; yarn-...@hadoop.apache.org; 
> mapreduce-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 2.6.3 RC0
>
> Thanks Junping for starting release process.
> I noticed there are duplicated tags:
>
> * release-2.6.3-RC0
> * release-2.6.3-RC0.1
> * release-2.6.3-RC1
>
> Could you remove RC0.1 and RC1?
>
> Regards,
> Akira
>
> On 12/16/15 10:17, yliu wrote:
>> Thanks Junping, +1.
>> Downloaded the tarball, deployed a small HDFS/YARN cluster, and verified a few
>> basic functionalities.
>>
>> Regards,
>> Yi Liu
>>
>> On Wed, Dec 16, 2015 at 6:42 AM, Chang Li  wrote:
>>
>> Thanks Junping, +1 (non-binding). Downloaded the tarball, compiled and
>>> built locally. Ran some MR jobs successfully.
>>>
>>> Best,
>>> Chang
>>>
>>> On Tue, Dec 15, 2015 at 3:17 PM, Wangda Tan  wrote:
>>>
 Thanks Junping,

 +1 (binding). Deployed a cluster locally, ran distributed shell and an MR job;
 both finished successfully.

 Regards,
 Wangda


 On Tue, Dec 15, 2015 at 12:43 PM, Naganarasimha Garla <
 naganarasimha...@gmail.com> wrote:

> Hi Junping,
>
> +0 (non binding)
>
> Though everything else is working fine (downloaded the tarball and
> installed a single-node cluster setup and verified a few MR jobs),
> submission of an Unmanaged AM brings the RM down. YARN-4452 has
> already been raised and I am working on it. Will provide the patch for
> trunk and the 2.6.3 version asap.
>
> Regards,
>
> + Naga
>
> 
>
> Thanks for the work Junping! Downloaded the src tarball. Built locally
> and successfully ran
> in single node mode with a few map reduce jobs. LGTM.
>
> Li Lu
>
> On Dec 14, 2015, at 04:23, Junping Du wrote:
>
> Thanks Sarjeet and Tsuyoshi for reporting this. I just fixed the permission
> issue and the download
> should work now. Please try to download it again. Thanks!
>
>
> Thanks,
>
>
> Junping
>
>
> 
> From: sarjeet singh
> Sent: Sunday, December 13, 2015 6:44 PM
> To: common-...@hadoop.apache.org
> Cc: mapreduce-...@hadoop.apache.org;
> hdfs-dev@hadoop.apache.org;
> yarn-...@hadoop.apache.org;
> junping...@apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 2.6.3 RC0
>
> I am also getting the same error when downloading tar.gz:
>
> "You don't have permission to access
> /~junping_du/hadoop-2.6.3-RC0/hadoop-2.6.3-RC0-src.tar.gz
> on this server."
>
> - Sarjeet Singh
>
> On Sat, Dec 12, 2015 at 4:17 PM, Tsuyoshi Ozawa wrote:
> Hi Junping,
>
> Thank you for starting the voting.
> I cannot access the tar.gz file because of permission error. Could you
> check the permission to access the files?
>
> Forbidden
> You don't have permission to access
> /~junping_du/hadoop-2.6.3-RC0/hadoop-2.6.3-RC0-src.tar.gz
> on this server.
>
> Thanks,
> - Tsuyoshi
>
> On Sat, Dec 12, 

Build failed in Jenkins: Hadoop-Hdfs-trunk #2634

2015-12-16 Thread Apache Jenkins Server
See 

Changes:

[junping_du] YARN-4452. NPE when submit Unmanaged application. Contributed by

[cnauroth] HDFS-9557. Reduce object allocation in PB conversion. Contributed by

[sseth] YARN-4207. Add a non-judgemental YARN app completion status. Contributed

--
[...truncated 6164 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.444 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Running org.apache.hadoop.hdfs.tools.TestGetGroups
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.359 sec - in 
org.apache.hadoop.hdfs.tools.TestGetGroups
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.204 sec - in 
org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.762 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.754 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.096 sec - in 
org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.516 sec - 
in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.436 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestDFSRename
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.667 sec - in 
org.apache.hadoop.hdfs.TestDFSRename
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.505 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.309 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.366 sec - in 
org.apache.hadoop.hdfs.TestDatanodeConfig
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.467 sec - in 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.676 sec - in 
org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.696 sec - in 
org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.392 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 82.31 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.304 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 135.021 sec - 
in org.apache.hadoop.hdfs.TestDFSClientRetries
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.927 sec - 
in org.apache.hadoop.hdfs.TestBlockReaderLocal
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.347 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.111 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.302 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.991 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 103.806 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running 

Re: TestDirectoryScanner.testThrottle() Failures

2015-12-16 Thread Andrew Wang
Done

On Wed, Dec 16, 2015 at 4:17 PM, Daniel Templeton 
wrote:

> Would someone please review and commit HDFS-9300 so that the
> testThrottle() test will stop failing?  It's a 2-line patch.
>
> Thanks,
> Daniel
>


Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Steve Loughran

+1, binding


Did a build and test with Slider, with the build set to -Dhadoop.version=2.6.3.

This does a download and test from the staging repo. All artifacts were located, and 
the tests completed.

> On 12 Dec 2015, at 00:16, Junping Du  wrote:
> 
> 
> Hi all developers in hadoop community,
>   I've created a release candidate RC0 for Apache Hadoop 2.6.3 (the next 
> maintenance release following 2.6.2) according to the email thread on the 
> 2.6.3 release plan [1]. Sorry this RC is coming a bit late, as several blocker 
> issues were being committed until yesterday. Below are the details:
> 
> The RC is available for validation at:
> http://people.apache.org/~junping_du/hadoop-2.6.3-RC0/
> 
> The RC tag in git is: release-2.6.3-RC0
> 
> The maven artifacts are staged via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1025/
> 
> You can find my public key at:
> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
> 
> Please try the release and vote. The vote will run for the usual 5 days.
> 
> Thanks and happy weekend!
> 
> 
> Cheers,
> 
> Junping
> 
> 
> [1]: 2.6.3 release plan: http://markmail.org/thread/nc2jogbgni37vu6y
> 



RE: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Brahma Reddy Battula
+1 (non-binding)

-- Downloaded both source and binary tarballs successfully.
-- Set up a pseudo-distributed cluster and a distributed HA cluster.
-- Ran several jobs: Slive, Terasort, and pi.
-- All are working fine.


Thanks & Regards
 Brahma Reddy Battula

From: Steve Loughran [ste...@hortonworks.com]
Sent: Wednesday, December 16, 2015 5:58 PM
To: mapreduce-...@hadoop.apache.org
Cc: Hadoop Common; hdfs-dev@hadoop.apache.org; yarn-...@hadoop.apache.org; 
junping...@apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

+1, binding


Did a build and test with Slider, with the build set to -Dhadoop.version=2.6.3.

This does a download and test from the staging repo. All artifacts were located, and 
the tests completed.

> On 12 Dec 2015, at 00:16, Junping Du  wrote:
>
>
> Hi all developers in hadoop community,
>   I've created a release candidate RC0 for Apache Hadoop 2.6.3 (the next 
> maintenance release following 2.6.2) according to the email thread on the 
> 2.6.3 release plan [1]. Sorry this RC is coming a bit late, as several blocker 
> issues were being committed until yesterday. Below are the details:
>
> The RC is available for validation at:
> http://people.apache.org/~junping_du/hadoop-2.6.3-RC0/
>
> The RC tag in git is: release-2.6.3-RC0
>
> The maven artifacts are staged via repository.apache.org at:
> https://repository.apache.org/content/repositories/orgapachehadoop-1025/
>
> You can find my public key at:
> http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
>
> Please try the release and vote. The vote will run for the usual 5 days.
>
> Thanks and happy weekend!
>
>
> Cheers,
>
> Junping
>
>
> [1]: 2.6.3 release plan: http://markmail.org/thread/nc2jogbgni37vu6y
>



[jira] [Created] (HDFS-9559) Add haadmin command to get HA state of all the namenodes

2015-12-16 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-9559:


 Summary: Add haadmin command to get HA state of all the namenodes
 Key: HDFS-9559
 URL: https://issues.apache.org/jira/browse/HDFS-9559
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: tools
Affects Versions: 2.7.1
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore


Currently we have one command to get the state of a namenode.

{code}
./hdfs haadmin -getServiceState <serviceId>
{code}

It will be good to have a command which gives the state of all the namenodes.
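
For illustration only, here is a minimal sketch of what such a command could do: 
iterate the NameNode ids of one HA nameservice and print each node's HA state. 
It assumes a nameservice id of "mycluster" and the standard 
dfs.ha.namenodes.<nsId> configuration key; this is a hedged sketch, not a 
proposed patch.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.tools.NNHAServiceTarget;

public class GetAllServiceStates {
  public static void main(String[] args) throws IOException {
    Configuration conf = new HdfsConfiguration();
    String nsId = "mycluster";  // assumed nameservice id
    // dfs.ha.namenodes.<nsId> lists the NameNode ids of an HA nameservice.
    for (String nnId : conf.getTrimmedStrings("dfs.ha.namenodes." + nsId)) {
      NNHAServiceTarget target = new NNHAServiceTarget(conf, nsId, nnId);
      // getServiceStatus() is the same RPC used by -getServiceState.
      System.out.println(nnId + ": "
          + target.getProxy(conf, 15000).getServiceStatus().getState());
    }
  }
}
{code}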



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Steve Loughran

> On 16 Dec 2015, at 06:41, Akira AJISAKA  wrote:
> 
> Thanks Junping for starting release process.
> I noticed there are duplicated tags:
> 
> * release-2.6.3-RC0
> * release-2.6.3-RC0.1
> * release-2.6.3-RC1
> 
> Could you remove RC0.1 and RC1?
> 
> Regards,
> Akira


The ASF has locked down tag & branch movement for now, so that you are 
guaranteed that the tag included in a vote is the tag you get when you download 
things.

The numbering suggestion was mine, I'm afraid.


[jira] [Created] (HDFS-9560) Fair AvailableSpaceVolumeChoosingPolicy

2015-12-16 Thread BELUGA BEHR (JIRA)
BELUGA BEHR created HDFS-9560:
-

 Summary: Fair AvailableSpaceVolumeChoosingPolicy
 Key: HDFS-9560
 URL: https://issues.apache.org/jira/browse/HDFS-9560
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: BELUGA BEHR
Priority: Minor


I took a look at AvailableSpaceVolumeChoosingPolicy.  It seems a bit overkill 
and includes some configuration items that seem arbitrary, with no clear 
guidance on how to use them effectively:

_dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction_
_dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold_

I have created an alternative implementation that does not require any external 
configuration, is thread-safe, and requires no synchronization.

"Weighted Randomized Ordering"

http://stackoverflow.com/questions/23971365/weighted-randomized-ordering

Conceptually, a dart board is constructed of several wedges, and each wedge 
represents a disk volume.  The more available space a volume has relative to 
the other volumes, the larger its wedge.  A dart is then thrown at the board, 
and whichever wedge (volume) the dart lands on is assigned the incoming block.

Over time, the wedges balance and all have an equal chance of being "hit."
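
To make the dart board concrete, here is a minimal, self-contained sketch of 
the weighted random pick. The names (Volume, getAvailable) are illustrative 
stand-ins, not the real FsVolumeSpi/VolumeChoosingPolicy API, and this is a 
sketch of the idea rather than the attached implementation.

{code}
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class WeightedVolumeChooser {
  /** Illustrative stand-in for a DataNode volume; not the real API. */
  public interface Volume {
    long getAvailable();  // free bytes on this volume
  }

  /** Pick a volume with probability proportional to its free space. */
  public static Volume choose(List<? extends Volume> volumes) {
    long board = 0;
    for (Volume v : volumes) {
      board += v.getAvailable();       // total area of the dart board
    }
    if (board <= 0) {
      return volumes.get(0);           // degenerate case: no free space anywhere
    }
    // ThreadLocalRandom keeps this thread-safe with no synchronization.
    long dart = ThreadLocalRandom.current().nextLong(board);
    for (Volume v : volumes) {
      dart -= v.getAvailable();        // walk the board wedge by wedge
      if (dart < 0) {
        return v;                      // the dart landed in this wedge
      }
    }
    return volumes.get(volumes.size() - 1);  // unreachable when board > 0
  }
}
{code}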



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Sangjin Lee
+1 (non-binding)

- downloaded source and binary and verified the signatures (although I
didn't connect with Junping via web of trust)
- started a pseudo-distributed cluster and ran test jobs
- browsed the RM and NN UI
- looked through the daemon logs

Thanks Junping.

Sangjin

On Wed, Dec 16, 2015 at 5:11 AM, Brahma Reddy Battula <
brahmareddy.batt...@huawei.com> wrote:

> +1 (non-binding)
>
> -- Downloaded both source and binary tarballs successfully.
> -- Set up a pseudo-distributed cluster and a distributed HA cluster.
> -- Ran several jobs: Slive, Terasort, and pi.
> -- All are working fine.
>
>
> Thanks & Regards
>  Brahma Reddy Battula
> 
> From: Steve Loughran [ste...@hortonworks.com]
> Sent: Wednesday, December 16, 2015 5:58 PM
> To: mapreduce-...@hadoop.apache.org
> Cc: Hadoop Common; hdfs-dev@hadoop.apache.org; yarn-...@hadoop.apache.org;
> junping...@apache.org
> Subject: Re: [VOTE] Release Apache Hadoop 2.6.3 RC0
>
> +1, binding
>
>
> Did a build and test with Slider, with the build set to
> -Dhadoop.version=2.6.3.
>
> This does a download and test from the staging repo. All artifacts were
> located, and the tests completed.
>
> > On 12 Dec 2015, at 00:16, Junping Du  wrote:
> >
> >
> > Hi all developers in hadoop community,
> >   I've created a release candidate RC0 for Apache Hadoop 2.6.3 (the next
> maintenance release following 2.6.2) according to the email thread on the
> 2.6.3 release plan [1]. Sorry this RC is coming a bit late, as several
> blocker issues were being committed until yesterday. Below are the details:
> >
> > The RC is available for validation at:
> > http://people.apache.org/~junping_du/hadoop-2.6.3-RC0/
> >
> > The RC tag in git is: release-2.6.3-RC0
> >
> > The maven artifacts are staged via repository.apache.org at:
> > https://repository.apache.org/content/repositories/orgapachehadoop-1025/
> >
> > You can find my public key at:
> > http://svn.apache.org/repos/asf/hadoop/common/dist/KEYS
> >
> > Please try the release and vote. The vote will run for the usual 5 days.
> >
> > Thanks and happy weekend!
> >
> >
> > Cheers,
> >
> > Junping
> >
> >
> > [1]: 2.6.3 release plan: http://markmail.org/thread/nc2jogbgni37vu6y
> >
>
>


[jira] [Created] (HDFS-9561) Pipeline recovery near the end of a block may fail

2015-12-16 Thread Kihwal Lee (JIRA)
Kihwal Lee created HDFS-9561:


 Summary: Pipeline recovery near the end of a block may fail
 Key: HDFS-9561
 URL: https://issues.apache.org/jira/browse/HDFS-9561
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee


When the client wants to add additional nodes to the pipeline during a 
recovery, it will fail if all existing replicas are already finalized.  This is 
because the partial block copy only works while the replica is in the RBW 
(replica being written) state.  Clients cannot reliably tell whether a node has 
finalized the replica during a recovery.
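
Schematically (a simplification, not the actual DataNode code; the real replica 
states live in HdfsServerConstants.ReplicaState), the constraint looks like this:

{code}
class PartialBlockCopySketch {
  enum ReplicaState { RBW, FINALIZED }

  static void copyToNewPipelineNode(ReplicaState state) throws java.io.IOException {
    if (state != ReplicaState.RBW) {
      // Once every surviving replica is FINALIZED, no node can serve the
      // partial copy, so adding a datanode during recovery fails here.
      throw new java.io.IOException(
          "cannot copy a partial block from a " + state + " replica");
    }
    // ... otherwise, stream the bytes written so far to the new datanode ...
  }
}
{code}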



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-1595) DFSClient may incorrectly detect datanode failure

2015-12-16 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-1595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-1595.
--
   Resolution: Duplicate
Fix Version/s: HDFS-9178

> DFSClient may incorrectly detect datanode failure
> -
>
> Key: HDFS-1595
> URL: https://issues.apache.org/jira/browse/HDFS-1595
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode, hdfs-client
>Reporter: Tsz Wo Nicholas Sze
>Priority: Critical
> Fix For: HDFS-9178
>
> Attachments: hdfs-1595-idea.txt
>
>
> Suppose a source datanode S is writing to a destination datanode D in a write 
> pipeline.  We have an implicit assumption that _if S catches an exception 
> when it is writing to D, then D is faulty and S is fine._  As a result, 
> DFSClient will take D out of the pipeline, reconstruct the write pipeline 
> with the remaining datanodes, and then continue writing.
> However, we found a case where the faulty machine F is in fact S, not D.  In 
> the case we found, F has a faulty network interface (or a faulty switch port) 
> in such a way that the network interface works fine when transferring a small 
> amount of data, say 1MB, but often fails when transferring a large amount of 
> data, say 100MB.
> It is even worse if F is the first datanode in the pipeline.  Consider the 
> following:
> # DFSClient creates a pipeline with three datanodes.  The first datanode is F.
> # F catches an IOException when writing to the second datanode. Then, F 
> reports the second datanode has error.
> # DFSClient removes the second datanode from the pipeline and continue 
> writing with the remaining datanode(s).
> # The pipeline now has two datanodes but (2) and (3) repeat.
> # Now, only F remains in the pipeline.  DFSClient continues writing with one 
> replica in F.
> # The write succeeds and DFSClient is able to *close the file successfully*.
> # The block is under replicated.  The NameNode schedules replication from F 
> to some other datanode D.
> # The replication fails for the same reason.  D reports to the NameNode that 
> the replica in F is corrupted.
> # The NameNode marks the replica in F is corrupted.
> # The block is corrupted since no replica is available.
> We were able to manually divide the replicas into small files and copy them 
> out of F without fixing the hardware.  The replicas seem uncorrupted.  
> This is a *data availability problem*.
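
To make the implicit assumption concrete, here is a schematic sketch (with 
illustrative names, not DataNode code) of the error attribution that lets a 
faulty S evict healthy downstream nodes one by one:

{code}
class PipelineErrorAttributionSketch {
  /**
   * Upstream node `self` forwards a packet to its downstream mirror.
   * Returns the node it blames on failure, or null on success.
   */
  static String forwardPacket(String self, String downstream) {
    try {
      sendOverNetwork(downstream);  // may fail because self's own NIC is bad
      return null;
    } catch (java.io.IOException e) {
      // The implicit assumption: the catcher is healthy, so blame the
      // downstream node. With a flaky NIC on `self`, this repeatedly
      // evicts healthy nodes until only `self` remains in the pipeline.
      return downstream;
    }
  }

  static void sendOverNetwork(String node) throws java.io.IOException {
    // placeholder for the actual block data transfer
  }
}
{code}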



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8011) standby nn can't started

2015-12-16 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee resolved HDFS-8011.
--
Resolution: Cannot Reproduce

It was very likely fixed in later releases, when we fixed similar issues.

> standby nn can't started
> 
>
> Key: HDFS-8011
> URL: https://issues.apache.org/jira/browse/HDFS-8011
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.3.0
> Environment: CentOS 6.2 64-bit 
>Reporter: fujie
>
> We have seen a crash when starting the standby namenode, with fatal errors. Any 
> solutions, workarounds, or ideas would be helpful for us.
> 1. Here is the context: 
>   At the beginning we had 2 namenodes, A as active and B as standby. For 
> some reason, namenode A died, so namenode B took over as active.
>   When we tried to restart A after a minute, it wouldn't work. During this 
> time a lot of files were put into HDFS, and a lot of files were renamed. 
>   Namenode A crashed while "awaiting reported blocks in safemode" each 
> time.
>  
> 2. We can see the error log below:
>   1)2015-03-30  ERROR 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: Encountered exception 
> on operation CloseOp [length=0, inodeId=0, 
> path=/xxx/_temporary/xxx/part-r-00074.bz2, replication=3, 
> mtime=1427699913947, atime=1427699081161, blockSize=268435456, 
> blocks=[blk_2103131025_1100889495739], permissions=dm:dm:rw-r--r--, 
> clientName=, clientMachine=, opCode=OP_CLOSE, txid=7632753612]
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction.setGenerationStampAndVerifyReplicas(BlockInfoUnderConstruction.java:247)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockInfoUnderConstruction.commitBlock(BlockInfoUnderConstruction.java:267)
> at 
> org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.forceCompleteBlock(BlockManager.java:639)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.updateBlocks(FSEditLogLoader.java:813)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.applyEditLogOp(FSEditLogLoader.java:383)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:209)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:122)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:737)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:227)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:321)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$0(EditLogTailer.java:302)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:296)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:356)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
> at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:413)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:292)
> 
>2)2015-03-30  FATAL 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Unknown error 
> encountered while tailing edits. Shutting down standby NN.
> java.io.IOException: Failed to apply edit log operation AddBlockOp 
> [path=/xxx/_temporary/xxx/part-m-00121, 
> penultimateBlock=blk_2102331803_1100888911441, 
> lastBlock=blk_2102661068_1100889009168, RpcClientId=, RpcCallId=-2]: error
> null
> at 
> org.apache.hadoop.hdfs.server.namenode.MetaRecoveryContext.editLogLoaderPrompt(MetaRecoveryContext.java:94)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:215)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:122)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:737)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:227)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:321)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$0(EditLogTailer.java:302)
> at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:296)
> at java.security.AccessController.doPrivileged(Native Method)
> 

Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

2015-12-16 Thread Junping Du
Thanks Akira for noticing this. I don't think we can remove these tags, as they 
should be as immutable as branches. I created these duplicated tags because, after I 
cut RC0, some commits landed on 2.6.3 unexpectedly, and I didn't realize I could 
still force-push to the original tag. The best thing I could do then was to 
make them all point to the same commit, as they do now.

Thanks,

Junping

From: Akira AJISAKA 
Sent: Wednesday, December 16, 2015 6:41 AM
To: common-...@hadoop.apache.org; yarn-...@hadoop.apache.org; 
mapreduce-...@hadoop.apache.org; hdfs-dev@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

Thanks Junping for starting release process.
I noticed there are duplicated tags:

* release-2.6.3-RC0
* release-2.6.3-RC0.1
* release-2.6.3-RC1

Could you remove RC0.1 and RC1?

Regards,
Akira

On 12/16/15 10:17, yliu wrote:
> Thanks Junping, +1.
> Downloaded the tarball, deployed a small HDFS/YARN cluster, and verified a few
> basic functionalities.
>
> Regards,
> Yi Liu
>
> On Wed, Dec 16, 2015 at 6:42 AM, Chang Li  wrote:
>
>> Thanks Junping, +1 (non-binding). Downloaded the tarball, compiled and
>> built locally. Ran some MR jobs successfully.
>>
>> Best,
>> Chang
>>
>> On Tue, Dec 15, 2015 at 3:17 PM, Wangda Tan  wrote:
>>
>>> Thanks Junping,
>>>
>>> +1 (binding). Deployed a cluster locally, ran distributed shell and an MR job;
>>> both finished successfully.
>>>
>>> Regards,
>>> Wangda
>>>
>>>
>>> On Tue, Dec 15, 2015 at 12:43 PM, Naganarasimha Garla <
>>> naganarasimha...@gmail.com> wrote:
>>>
 Hi Junping,

 +0 (non binding)

 Though everything else is working fine (downloaded the tarball and
 installed a single-node cluster setup and verified a few MR jobs),
 submission of an Unmanaged AM brings the RM down. YARN-4452 has
 already been raised and I am working on it. Will provide the patch for
 trunk and the 2.6.3 version asap.

 Regards,

 + Naga

 

 Thanks for the work Junping! Downloaded the src tarball. Built locally
 and successfully ran
 in single node mode with a few map reduce jobs. LGTM.

 Li Lu

 On Dec 14, 2015, at 04:23, Junping Du wrote:

 Thanks Sarjeet and Tsuyoshi for reporting this. I just fixed the permission
 issue and the download
 should work now. Please try to download it again. Thanks!


 Thanks,


 Junping


 
 From: sarjeet singh
 Sent: Sunday, December 13, 2015 6:44 PM
 To: common-...@hadoop.apache.org
 Cc: mapreduce-...@hadoop.apache.org;
 hdfs-dev@hadoop.apache.org;
 yarn-...@hadoop.apache.org;
 junping...@apache.org
 Subject: Re: [VOTE] Release Apache Hadoop 2.6.3 RC0

 I am also getting the same error when downloading tar.gz:

 "You don't have permission to access
 /~junping_du/hadoop-2.6.3-RC0/hadoop-2.6.3-RC0-src.tar.gz
 on this server."

 - Sarjeet Singh

 On Sat, Dec 12, 2015 at 4:17 PM, Tsuyoshi Ozawa wrote:
 Hi Junping,

 Thank you for starting the voting.
 I cannot access the tar.gz file because of permission error. Could you
 check the permission to access the files?

 Forbidden
 You don't have permission to access
 /~junping_du/hadoop-2.6.3-RC0/hadoop-2.6.3-RC0-src.tar.gz
 on this server.

 Thanks,
 - Tsuyoshi

 On Sat, Dec 12, 2015 at 9:16 AM, Junping Du wrote:

 Hi all developers in hadoop community,
    I've created a release candidate RC0 for Apache Hadoop 2.6.3 (the next
 maintenance release following 2.6.2) according to the email thread on the
 2.6.3 release plan [1]. Sorry this RC is coming a bit late, as several
 blocker issues were being committed until yesterday. Below are the details:

 The RC is available for validation at:
 http://people.apache.org/~junping_du/hadoop-2.6.3-RC0/

 The RC tag in git is: release-2.6.3-RC0

 The maven artifacts are staged via repository.apache.org at:
 https://repository.apache.org/content/repositories/orgapachehadoop-1025/


Build failed in Jenkins: Hadoop-Hdfs-trunk #2632

2015-12-16 Thread Apache Jenkins Server
See 

Changes:

[zxu] Update CHANGES.txt to move MAPREDUCE-6436 from YARN to MAPREDUCE

--
[...truncated 6179 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.739 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Running org.apache.hadoop.hdfs.tools.TestGetGroups
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.18 sec - in 
org.apache.hadoop.hdfs.tools.TestGetGroups
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.086 sec - in 
org.apache.hadoop.hdfs.tools.TestDebugAdmin
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.675 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.801 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.047 sec - in 
org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.507 sec - 
in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.917 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Running org.apache.hadoop.hdfs.TestDFSRename
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.685 sec - in 
org.apache.hadoop.hdfs.TestDFSRename
Running org.apache.hadoop.hdfs.TestLargeBlock
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.304 sec - in 
org.apache.hadoop.hdfs.TestLargeBlock
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.309 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure030
Running org.apache.hadoop.hdfs.TestDatanodeConfig
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.373 sec - in 
org.apache.hadoop.hdfs.TestDatanodeConfig
Running org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.6 sec - in 
org.apache.hadoop.hdfs.TestWriteConfigurationToDFS
Running org.apache.hadoop.hdfs.TestFileAppend2
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.085 sec - in 
org.apache.hadoop.hdfs.TestFileAppend2
Running org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.702 sec - in 
org.apache.hadoop.hdfs.protocolPB.TestPBHelper
Running org.apache.hadoop.hdfs.TestSetrepIncreasing
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.376 sec - in 
org.apache.hadoop.hdfs.TestSetrepIncreasing
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.307 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure200
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.308 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210
Running org.apache.hadoop.hdfs.TestDFSClientRetries
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 144.117 sec - 
in org.apache.hadoop.hdfs.TestDFSClientRetries
Running org.apache.hadoop.hdfs.TestBlockReaderLocal
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.077 sec - 
in org.apache.hadoop.hdfs.TestBlockReaderLocal
Running org.apache.hadoop.hdfs.TestHdfsAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.199 sec - in 
org.apache.hadoop.hdfs.TestHdfsAdmin
Running org.apache.hadoop.hdfs.TestDataTransferKeepalive
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.399 sec - in 
org.apache.hadoop.hdfs.TestDataTransferKeepalive
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.31 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000
Running org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.974 sec - in 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure
Running org.apache.hadoop.hdfs.TestEncryptedTransfer
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 106.458 sec - 
in org.apache.hadoop.hdfs.TestEncryptedTransfer
Running org.apache.hadoop.hdfs.TestDatanodeDeath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.804 sec - in 
org.apache.hadoop.hdfs.TestDatanodeDeath
Running 

Hadoop-Hdfs-trunk - Build # 2632 - Still Failing

2015-12-16 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2632/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6372 lines...]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-hdfs-project 
---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO] 
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:53 min]
[INFO] Apache Hadoop HDFS  FAILURE [  03:08 h]
[INFO] Apache Hadoop HDFS Native Client .. SKIPPED
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.059 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 03:12 h
[INFO] Finished at: 2015-12-16T10:20:03+00:00
[INFO] Final Memory: 56M/733M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



###
## FAILED TESTS (if any) 
##
1 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestBackupNode.startBackupNodeWithIncorrectAuthentication

Error Message:
Port in use: 0.0.0.0:50070

Stack Trace:
java.net.BindException: Port in use: 0.0.0.0:50070
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:444)
at sun.nio.ch.Net.bind(Net.java:436)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:905)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:847)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:822)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:675)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:884)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:863)
at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1565)
at