[jira] [Created] (HDFS-8685) Hadoop tools jars should be added in "dfs" class path.

2015-06-29 Thread Surendra Singh Lilhore (JIRA)
Surendra Singh Lilhore created HDFS-8685:


 Summary: Hadoop tools jars should be added in "dfs" class path.
 Key: HDFS-8685
 URL: https://issues.apache.org/jira/browse/HDFS-8685
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.7.0
Reporter: Surendra Singh Lilhore
Assignee: Surendra Singh Lilhore


{code}
./hdfs dfs -ls s3a://xyz:xyz/
-ls: Fatal internal error
java.lang.RuntimeException: java.lang.ClassNotFoundException: Class 
org.apache.hadoop.fs.s3a.S3AFileSystem not found
at 
org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2224)
at 
org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2638)
{code}
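The stack trace shows the s3a filesystem class is simply not loadable on the `hdfs dfs` client classpath, i.e. the Hadoop tools jars (hadoop-aws and friends) are missing. A minimal way to distinguish a missing jar from a misconfigured `fs.s3a.impl` setting is to probe whether the class is loadable at all; the helper below is an illustrative sketch, not part of Hadoop:

```java
// Hypothetical helper: returns true iff the named class is loadable
// on the current classpath. A "false" here means the jar providing
// the class is absent, not that the configuration key is wrong.
public class ClasspathProbe {
    public static boolean isLoadable(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}
```

On a client missing the tools jars, `isLoadable("org.apache.hadoop.fs.s3a.S3AFileSystem")` would return false.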



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[VOTE] Release Apache Hadoop 2.7.1 RC0

2015-06-29 Thread Vinod Kumar Vavilapalli
Hi all,

I've created a release candidate RC0 for Apache Hadoop 2.7.1.

As discussed before, this is the next stable release to follow up 2.6.0,
and the first stable one in the 2.7.x line.

The RC is available for validation at:
http://people.apache.org/~vinodkv/hadoop-2.7.1-RC0/

The RC tag in git is: release-2.7.1-RC0

The maven artifacts are available via repository.apache.org at
https://repository.apache.org/content/repositories/orgapachehadoop-1019/

Please try the release and vote; the vote will run for the usual 5 days.

Thanks,
Vinod

PS: It took 2 months instead of the planned [1] 2 weeks in getting this
release out: post-mortem in a separate thread.

[1]: A 2.7.1 release to follow up 2.7.0
http://markmail.org/thread/zwzze6cqqgwq4rmw


[jira] [Created] (HDFS-8686) WebHdfsFileSystem#getXAttr(Path p, final String name) doesn't work if the xattr name is in upper case

2015-06-29 Thread Jagadesh Kiran N (JIRA)
Jagadesh Kiran N created HDFS-8686:
--

 Summary: WebHdfsFileSystem#getXAttr(Path p, final String name) 
doesn't work if the xattr name is in upper case
 Key: HDFS-8686
 URL: https://issues.apache.org/jira/browse/HDFS-8686
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jagadesh Kiran N


{code}  hadoop fs -getfattr -n USER.attr1 /dir1 {code} 
==> returns value


{code} webhdfs.getXAttr(new Path("/dir1"),"USER.attr1")) {code} 
==> returns null
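The shell accepts the upper-case `USER.` namespace prefix while the WebHDFS client apparently does not. One plausible fix is to match the namespace prefix case-insensitively by normalizing it before lookup, leaving the attribute name itself untouched; this is an illustrative sketch, not the actual Hadoop code:

```java
// Illustrative sketch: lower-case only the xattr namespace prefix
// (the part before the first dot), so "USER.attr1" and "user.attr1"
// resolve to the same attribute while "user.Attr1" keeps its case.
public class XAttrNames {
    public static String normalize(String name) {
        int dot = name.indexOf('.');
        if (dot < 0) {
            throw new IllegalArgumentException(
                "xattr name has no namespace prefix: " + name);
        }
        String ns = name.substring(0, dot).toLowerCase();
        return ns + name.substring(dot);
    }
}
```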





[jira] [Created] (HDFS-8687) Remove the duplicate usage message from Dfsck.java

2015-06-29 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-8687:
--

 Summary: Remove the duplicate usage message from Dfsck.java
 Key: HDFS-8687
 URL: https://issues.apache.org/jira/browse/HDFS-8687
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula


ToolRunner also prints the same usage message, so I think we can remove

{{printUsage(System.err);}}
{code}
if ((args.length == 0) || ("-files".equals(args[0]))) {
  printUsage(System.err);
  ToolRunner.printGenericCommandUsage(System.err);
} 
{code}





[jira] [Created] (HDFS-8688) replace shouldCheckForEnoughRacks with hasClusterEverBeenMultiRack

2015-06-29 Thread Walter Su (JIRA)
Walter Su created HDFS-8688:
---

 Summary: replace shouldCheckForEnoughRacks with 
hasClusterEverBeenMultiRack
 Key: HDFS-8688
 URL: https://issues.apache.org/jira/browse/HDFS-8688
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su








[jira] [Created] (HDFS-8689) move hasClusterEverBeenMultiRack to NetworkTopology

2015-06-29 Thread Walter Su (JIRA)
Walter Su created HDFS-8689:
---

 Summary: move hasClusterEverBeenMultiRack to NetworkTopology
 Key: HDFS-8689
 URL: https://issues.apache.org/jira/browse/HDFS-8689
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su








Re: [VOTE] Release Apache Hadoop 2.7.1 RC0

2015-06-29 Thread Steve Loughran

+1 binding from me.

Tests: 

Rebuild slider with Hadoop.version=2.7.1; ran all the tests including against a 
secure cluster. 
Repeated for windows running Java 8.

All tests passed


> On 29 Jun 2015, at 09:45, Vinod Kumar Vavilapalli  wrote:
> 
> Hi all,
> 
> I've created a release candidate RC0 for Apache Hadoop 2.7.1.
> 
> As discussed before, this is the next stable release to follow up 2.6.0,
> and the first stable one in the 2.7.x line.
> 
> The RC is available for validation at:
> *http://people.apache.org/~vinodkv/hadoop-2.7.1-RC0/
> *
> 
> The RC tag in git is: release-2.7.1-RC0
> 
> The maven artifacts are available via repository.apache.org at
> *https://repository.apache.org/content/repositories/orgapachehadoop-1019/
> *
> 
> Please try the release and vote; the vote will run for the usual 5 days.
> 
> Thanks,
> Vinod
> 
> PS: It took 2 months instead of the planned [1] 2 weeks in getting this
> release out: post-mortem in a separate thread.
> 
> [1]: A 2.7.1 release to follow up 2.7.0
> http://markmail.org/thread/zwzze6cqqgwq4rmw



Hadoop-Hdfs-trunk - Build # 2171 - Failure

2015-06-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2171/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6911 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [ 46.503 s]
[INFO] Apache Hadoop HDFS  FAILURE [  02:41 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.065 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:42 h
[INFO] Finished at: 2015-06-29T14:25:58+00:00
[INFO] Final Memory: 63M/665M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2170
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 381367 bytes
Compression is 0.0%
Took 8.9 sec
Recording test results
Updating HDFS-8628
Updating HADOOP-12009
Updating YARN-3860
Updating HDFS-8586
Updating HDFS-8681
Updating HADOOP-12119
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
5 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.testAST

Error Message:
dir has ERROR

Stack Trace:
java.lang.IllegalStateException: dir has ERROR
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.checkErrorState(TestAppendSnapshotTruncate.java:430)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.stop(TestAppendSnapshotTruncate.java:483)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.testAST(TestAppendSnapshotTruncate.java:128)
Caused by: java.lang.IllegalStateException: null
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:129)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.pause(TestAppendSnapshotTruncate.java:479)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.pauseAllFiles(TestAppendSnapshotTruncate.java:247)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.call(TestAppendSnapshotTruncate.java:220)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.call(TestAppendSnapshotTruncate.java:140)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker$1.run(TestAppendSnapshotTruncate.java:454)
at java.lang.Thread.run(Thread.java:745)


REGRESSION:  org.apache.hadoop.hdfs.TestHDFSFileSystemContract.testListStatus

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
at junit.framework.Assert.fail(Assert.java:55)
at junit.framework.Assert.assertTrue(Assert.java:22)
at junit.framework.Assert.assertTrue(Assert.java:31)
at junit.framework.TestCase.assertTrue(TestCase.java:201)
at 
org

Build failed in Jenkins: Hadoop-Hdfs-trunk #2171

2015-06-29 Thread Apache Jenkins Server
See 

Changes:

[stevel] HADOOP-12009 Clarify FileSystem.listStatus() sorting order & fix  
FileSystemContractBaseTest:testListStatus. (J.Andreina via stevel)

[arp] HDFS-8681. BlockScanner is incorrectly disabled by default. (Contributed 
by Arpit Agarwal)

[vinodkv] Adding release 2.7.2 to CHANGES.txt.

[junping_du] YARN-3860. rmadmin -transitionToActive should check the state of 
non-target node. (Contributed by Masatake Iwasaki)

[vinayakumarb] HDFS-8586. Dead Datanode is allocated for write when client is 
from deadnode (Contributed by Brahma Reddy Battula)

[vinayakumarb] HADOOP-12119. hadoop fs -expunge does not work for federated 
namespace (Contributed by J.Andreina)

[vinayakumarb] HDFS-8628. Update missing command option for fetchdt 
(Contributed by J.Andreina)

--
[...truncated 6718 lines...]
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

Running org.apache.hadoop.hdfs.TestEncryptionZones
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.118 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZones
Running org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.416 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.886 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.249 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.007 sec - in 
org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.218 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.471 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.387 sec - in 
org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDataTransferProtocol
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.133 sec - in 
org.apache.hadoop.hdfs.TestDataTransferProtocol
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.766 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.701 sec - in 
org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.249 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.913 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 14, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 124.956 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.18 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.559 sec - in 
org.apache.hadoop.hdfs.TestGetBlocks
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.253 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, T

Hadoop-Hdfs-trunk-Java8 - Build # 232 - Still Failing

2015-06-29 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/232/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7442 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [ 52.344 s]
[INFO] Apache Hadoop HDFS  FAILURE [  02:50 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.054 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:51 h
[INFO] Finished at: 2015-06-29T14:39:26+00:00
[INFO] Final Memory: 52M/265M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 829576 bytes
Compression is 0.0%
Took 17 sec
Recording test results
Updating HDFS-8628
Updating HADOOP-12009
Updating YARN-3860
Updating HDFS-8586
Updating HDFS-8681
Updating HADOOP-12119
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
4 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.testAST

Error Message:
dir has ERROR

Stack Trace:
java.lang.IllegalStateException: dir has ERROR
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.checkErrorState(TestAppendSnapshotTruncate.java:430)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.stop(TestAppendSnapshotTruncate.java:483)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate.testAST(TestAppendSnapshotTruncate.java:128)
Caused by: java.lang.IllegalStateException: null
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:129)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker.pause(TestAppendSnapshotTruncate.java:479)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.pauseAllFiles(TestAppendSnapshotTruncate.java:247)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.call(TestAppendSnapshotTruncate.java:220)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$DirWorker.call(TestAppendSnapshotTruncate.java:140)
at 
org.apache.hadoop.hdfs.TestAppendSnapshotTruncate$Worker$1.run(TestAppendSnapshotTruncate.java:454)
at java.lang.Thread.run(Thread.java:744)


REGRESSION:  org.apache.hadoop.hdfs.TestHDFSFileSystemContract.testListStatus

Error Message:
null

Stack Trace:
junit.framework.AssertionFailedError: null
at junit.framework.Assert.fail(Assert.java:55)
at junit.framework.Assert.assertTrue(Assert.java:22)
at junit.framework.Assert.assertTrue(Assert.java:31)
at junit

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #232

2015-06-29 Thread Apache Jenkins Server
See 

Changes:

[stevel] HADOOP-12009 Clarify FileSystem.listStatus() sorting order & fix  
FileSystemContractBaseTest:testListStatus. (J.Andreina via stevel)

[arp] HDFS-8681. BlockScanner is incorrectly disabled by default. (Contributed 
by Arpit Agarwal)

[vinodkv] Adding release 2.7.2 to CHANGES.txt.

[junping_du] YARN-3860. rmadmin -transitionToActive should check the state of 
non-target node. (Contributed by Masatake Iwasaki)

[vinayakumarb] HDFS-8586. Dead Datanode is allocated for write when client is 
from deadnode (Contributed by Brahma Reddy Battula)

[vinayakumarb] HADOOP-12119. hadoop fs -expunge does not work for federated 
namespace (Contributed by J.Andreina)

[vinayakumarb] HDFS-8628. Update missing command option for fetchdt 
(Contributed by J.Andreina)

--
[...truncated 7249 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.199 sec - 
in org.apache.hadoop.hdfs.crypto.TestHdfsCryptoStreams
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileAppendRestart
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.984 sec - in 
org.apache.hadoop.hdfs.TestFileAppendRestart
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFetchImage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.223 sec - in 
org.apache.hadoop.hdfs.TestFetchImage
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestFileCreationDelete
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.583 sec - in 
org.apache.hadoop.hdfs.TestFileCreationDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestCrcCorruption
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.23 sec - in 
org.apache.hadoop.hdfs.TestCrcCorruption
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 67.545 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
testListStatus(org.apache.hadoop.hdfs.TestHDFSFileSystemContract)  Time 
elapsed: 0.959 sec  <<< FAILURE!
junit.framework.AssertionFailedError: null
at junit.framework.Assert.fail(Assert.java:55)
at junit.framework.Assert.assertTrue(Assert.java:22)
at junit.framework.Assert.assertTrue(Assert.java:31)
at junit.framework.TestCase.assertTrue(TestCase.java:201)
at 
org.apache.hadoop.fs.FileSystemContractBaseTest.testListStatus(FileSystemContractBaseTest.java:232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestBlockStoragePolicy
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.595 sec - 
in org.apache.hadoop.hdfs.TestBlockStoragePolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring opt

[jira] [Created] (HDFS-8690) LeaseRenewer should not abort DFSClient when renew fails

2015-06-29 Thread Chang Li (JIRA)
Chang Li created HDFS-8690:
--

 Summary: LeaseRenewer should not abort DFSClient when renew fails
 Key: HDFS-8690
 URL: https://issues.apache.org/jira/browse/HDFS-8690
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chang Li
Assignee: Chang Li


The lease renewer special cases SocketTimeoutExceptions to abort the DFSClient. 
 Aborting causes the client to be permanently unusable, which causes filesystem 
instances to stop working.  All other IOExceptions do not abort.  The special 
case should be removed and/or abort should not completely shutdown the proxy.
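The proposal is essentially to stop special-casing SocketTimeoutException in the renewal failure path. Schematically (the names below are illustrative, not DFSClient's real API):

```java
import java.io.IOException;
import java.net.SocketTimeoutException;

// Schematic of the behavior being questioned: today a
// SocketTimeoutException permanently aborts the client, while every
// other IOException leaves it usable. The proposal is to treat both
// cases the same and never hard-abort on a renewal failure.
public class RenewPolicy {
    public enum Action { ABORT, RETRY }

    // Current (special-cased) behavior.
    public static Action current(IOException e) {
        return (e instanceof SocketTimeoutException)
            ? Action.ABORT : Action.RETRY;
    }

    // Proposed behavior: no special case.
    public static Action proposed(IOException e) {
        return Action.RETRY;
    }
}
```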





[jira] [Created] (HDFS-8691) Cleanup BlockScanner initialization and add test for configuration contract

2015-06-29 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-8691:
---

 Summary: Cleanup BlockScanner initialization and add test for 
configuration contract
 Key: HDFS-8691
 URL: https://issues.apache.org/jira/browse/HDFS-8691
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, test
Reporter: Arpit Agarwal


The initialization of the BlockScanner can be simplified by moving out test 
hooks. Tests can be modified to use configuration only.

Also we need an additional test case to verify the behavior with positive, 
negative and zero values of {{dfs.datanode.scan.period.hours}} for 
compatibility.
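Assuming the intended contract is that a positive `dfs.datanode.scan.period.hours` enables the scanner while zero or negative disables it (my reading of the HDFS-8681 discussion, not verified against the patch), the compatibility check reduces to a small predicate that a test can sweep over positive, zero, and negative values:

```java
// Sketch of the configuration contract under test: scanner enabled
// iff the configured period is strictly positive. Class and method
// names are illustrative, not Hadoop's actual BlockScanner API.
public class ScanPeriodContract {
    public static boolean isScannerEnabled(long scanPeriodHours) {
        return scanPeriodHours > 0;
    }
}
```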





[jira] [Created] (HDFS-8692) Fix test case failures o.a.h.h.TestHDFSFileSystemContract and TestWebHdfsFileSystemContract.testListStatus

2015-06-29 Thread Brahma Reddy Battula (JIRA)
Brahma Reddy Battula created HDFS-8692:
--

 Summary: Fix test case failures o.a.h.h.TestHDFSFileSystemContract 
and TestWebHdfsFileSystemContract.testListStatus
 Key: HDFS-8692
 URL: https://issues.apache.org/jira/browse/HDFS-8692
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula



 *Jenkins Report* 
https://builds.apache.org/job/PreCommit-HDFS-Build/11529/testReport/

 *Error Log* 

{noformat}
junit.framework.AssertionFailedError: null
at junit.framework.Assert.fail(Assert.java:55)
at junit.framework.Assert.assertTrue(Assert.java:22)
at junit.framework.Assert.assertTrue(Assert.java:31)
at junit.framework.TestCase.assertTrue(TestCase.java:201)
at 
org.apache.hadoop.fs.FileSystemContractBaseTest.testListStatus(FileSystemContractBaseTest.java:232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at junit.framework.TestCase.runTest(TestCase.java:176)
at junit.framework.TestCase.runBare(TestCase.java:141)
at junit.framework.TestResult$1.protect(TestResult.java:122)
at junit.framework.TestResult.runProtected(TestResult.java:142)
at junit.framework.TestResult.run(TestResult.java:125)
at junit.framework.TestCase.run(TestCase.java:129)
at junit.framework.TestSuite.runTest(TestSuite.java:255)
at junit.framework.TestSuite.run(TestSuite.java:250)
at 
org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:84)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
{noformat}





Re: [VOTE] Release Apache Hadoop 2.7.1 RC0

2015-06-29 Thread Ted Yu
+1 (non-binding)

Compiled hbase branch-1 with Java 1.8.0_45
Ran unit test suite which passed.

On Mon, Jun 29, 2015 at 7:22 AM, Steve Loughran 
wrote:

>
> +1 binding from me.
>
> Tests:
>
> Rebuild slider with Hadoop.version=2.7.1; ran all the tests including
> against a secure cluster.
> Repeated for windows running Java 8.
>
> All tests passed
>
>
> > On 29 Jun 2015, at 09:45, Vinod Kumar Vavilapalli 
> wrote:
> >
> > Hi all,
> >
> > I've created a release candidate RC0 for Apache Hadoop 2.7.1.
> >
> > As discussed before, this is the next stable release to follow up 2.6.0,
> > and the first stable one in the 2.7.x line.
> >
> > The RC is available for validation at:
> > *http://people.apache.org/~vinodkv/hadoop-2.7.1-RC0/
> > *
> >
> > The RC tag in git is: release-2.7.1-RC0
> >
> > The maven artifacts are available via repository.apache.org at
> > *
> https://repository.apache.org/content/repositories/orgapachehadoop-1019/
> > <
> https://repository.apache.org/content/repositories/orgapachehadoop-1019/>*
> >
> > Please try the release and vote; the vote will run for the usual 5 days.
> >
> > Thanks,
> > Vinod
> >
> > PS: It took 2 months instead of the planned [1] 2 weeks in getting this
> > release out: post-mortem in a separate thread.
> >
> > [1]: A 2.7.1 release to follow up 2.7.0
> > http://markmail.org/thread/zwzze6cqqgwq4rmw
>
>


[jira] [Created] (HDFS-8693) refreshNamenodes does not support adding a new standby to a running DN

2015-06-29 Thread Jian Fang (JIRA)
Jian Fang created HDFS-8693:
---

 Summary: refreshNamenodes does not support adding a new standby to 
a running DN
 Key: HDFS-8693
 URL: https://issues.apache.org/jira/browse/HDFS-8693
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Jian Fang
Priority: Critical


I tried to run the following command on a Hadoop 2.6.0 cluster with HA support 

$ hdfs dfsadmin -refreshNamenodes datanode-host:port

to refresh name nodes on data nodes after I replaced one name node with a new 
one so that I don't need to restart the data nodes. However, I got the 
following error:

refreshNamenodes: HA does not currently support adding a new standby to a 
running DN. Please do a rolling restart of DNs to reconfigure the list of NNs.

I checked the 2.6.0 code and the error was thrown by the following code 
snippet, which led me to this JIRA.

void refreshNNList(ArrayList<InetSocketAddress> addrs) throws IOException {
  Set<InetSocketAddress> oldAddrs = Sets.newHashSet();
  for (BPServiceActor actor : bpServices) {
    oldAddrs.add(actor.getNNSocketAddress());
  }
  Set<InetSocketAddress> newAddrs = Sets.newHashSet(addrs);
  if (!Sets.symmetricDifference(oldAddrs, newAddrs).isEmpty()) {
    // Keep things simple for now -- we can implement this at a later date.
    throw new IOException(
        "HA does not currently support adding a new standby to a running DN. "
        + "Please do a rolling restart of DNs to reconfigure the list of NNs.");
  }
}
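The gate is whether the symmetric difference of the old and new NN address sets is empty — any added or removed namenode triggers the exception. A plain `java.util` illustration of that set logic (equivalent to Guava's `Sets.symmetricDifference(old, new).isEmpty()`):

```java
import java.util.HashSet;
import java.util.Set;

// True only when the old and new namenode address sets have exactly
// the same members — the only case refreshNNList currently accepts.
public class NNListCheck {
    public static <T> boolean sameMembers(Set<T> oldAddrs, Set<T> newAddrs) {
        Set<T> removed = new HashSet<>(oldAddrs);
        removed.removeAll(newAddrs);       // in old but not in new
        Set<T> added = new HashSet<>(newAddrs);
        added.removeAll(oldAddrs);         // in new but not in old
        return removed.isEmpty() && added.isEmpty();
    }
}
```

Adding a replacement standby NN changes the set, so the check fails and the DN refuses the refresh.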

It looks like the refreshNamenodes command is an incomplete feature.

Unfortunately, being able to register a new name node on a replacement instance is critical 
for auto-provisioning a Hadoop cluster with HDFS HA support; without it, the HA feature 
cannot really be used. I also observed that the new standby name node on the replacement 
instance can get stuck in safe mode because no data nodes check in with it. Even with a 
rolling restart, it may take quite some time to restart all data nodes on a big cluster, 
for example one with 4000 data nodes; moreover, restarting DNs is intrusive and not a 
preferable operation in production. It also increases the chance of a double failure, 
because the standby name node is not really ready for a failover if the current active 
name node fails.






Re: [VOTE] Release Apache Hadoop 2.7.1 RC0

2015-06-29 Thread Wangda Tan
+1 (non-binding)

Compiled and deployed a single node cluster, tried to change node labels
and run distributed_shell with node label specified.

On Mon, Jun 29, 2015 at 10:30 AM, Ted Yu  wrote:

> +1 (non-binding)
>
> Compiled hbase branch-1 with Java 1.8.0_45
> Ran unit test suite which passed.
>
> On Mon, Jun 29, 2015 at 7:22 AM, Steve Loughran 
> wrote:
>
> >
> > +1 binding from me.
> >
> > Tests:
> >
> > Rebuild slider with Hadoop.version=2.7.1; ran all the tests including
> > against a secure cluster.
> > Repeated for windows running Java 8.
> >
> > All tests passed
> >
> >
> > > On 29 Jun 2015, at 09:45, Vinod Kumar Vavilapalli 
> > wrote:
> > >
> > > Hi all,
> > >
> > > I've created a release candidate RC0 for Apache Hadoop 2.7.1.
> > >
> > > As discussed before, this is the next stable release to follow up
> 2.6.0,
> > > and the first stable one in the 2.7.x line.
> > >
> > > The RC is available for validation at:
> > > *http://people.apache.org/~vinodkv/hadoop-2.7.1-RC0/
> > > *
> > >
> > > The RC tag in git is: release-2.7.1-RC0
> > >
> > > The maven artifacts are available via repository.apache.org at
> > > *
> > https://repository.apache.org/content/repositories/orgapachehadoop-1019/
> > > <
> > https://repository.apache.org/content/repositories/orgapachehadoop-1019/
> >*
> > >
> > > Please try the release and vote; the vote will run for the usual 5
> days.
> > >
> > > Thanks,
> > > Vinod
> > >
> > > PS: It took 2 months instead of the planned [1] 2 weeks in getting this
> > > release out: post-mortem in a separate thread.
> > >
> > > [1]: A 2.7.1 release to follow up 2.7.0
> > > http://markmail.org/thread/zwzze6cqqgwq4rmw
> >
> >
>


Re: [VOTE] Release Apache Hadoop 2.7.1 RC0

2015-06-29 Thread Lei Xu
+1 binding

Downloaded src and bin distribution, verified md5, sha1 and sha256
checksums of both tar files.
Built src using mvn package.
Ran a pseudo HDFS cluster
Ran dfs -put some files, and checked files on NN's web interface.






-- 
Lei (Eddy) Xu
Software Engineer, Cloudera


Re: [VOTE] Release Apache Hadoop 2.7.1 RC0

2015-06-29 Thread Arpit Gupta
+1 (non binding)

We have been testing rolling upgrades and downgrades from 2.6 to this release 
and have had successful runs. 

--
Arpit Gupta
Hortonworks Inc.
http://hortonworks.com/




Re: [VOTE] Release Apache Hadoop 2.7.1 RC0

2015-06-29 Thread Xuan Gong
+1 (non-binding)

Compiled and deployed a single node cluster, ran all the tests.


Xuan Gong




[jira] [Created] (HDFS-8695) OzoneHandler : Add Bucket REST Interface

2015-06-29 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-8695:
--

 Summary: OzoneHandler : Add Bucket REST Interface
 Key: HDFS-8695
 URL: https://issues.apache.org/jira/browse/HDFS-8695
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Reporter: Anu Engineer
Assignee: Anu Engineer


Add Bucket REST interface into Ozone server.





[jira] [Created] (HDFS-8696) Small reads are blocked by large long running reads

2015-06-29 Thread Xiaobing Zhou (JIRA)
Xiaobing Zhou created HDFS-8696:
---

 Summary: Small reads are blocked by large long running reads
 Key: HDFS-8696
 URL: https://issues.apache.org/jira/browse/HDFS-8696
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.6.0
Reporter: Xiaobing Zhou
Assignee: Xiaobing Zhou
Priority: Blocker


There is an issue that appears related to the webhdfs server. When making two 
concurrent requests, the DN will sometimes pause for extended periods (I've 
seen 1-300 seconds), killing performance and dropping connections. 

To reproduce: 
1. set up a HDFS cluster
2. Upload a large file (I was using 10GB). Perform 1-byte reads, writing
the time out to /tmp/times.txt
{noformat}
i=1
while true; do
  echo $i
  let i++
  /usr/bin/time -f %e -o /tmp/times.txt -a curl -s -L -o /dev/null \
    "http://:50070/webhdfs/v1/tmp/bigfile?op=OPEN&user.name=root&length=1"
done
{noformat}

3. Watch for 1-byte requests that take more than one second:
tail -F /tmp/times.txt | grep -E "^[^0]"

4. After it has had a chance to warm up, start doing large transfers from
another shell:
{noformat}
i=1
while true; do
  echo $i
  let i++
  /usr/bin/time -f %e curl -s -L -o /dev/null \
    "http://:50070/webhdfs/v1/tmp/bigfile?op=OPEN&user.name=root"
done
{noformat}

It's easy to find after a minute or two that small reads will sometimes
pause for 1-300 seconds. In some extreme cases, it appears that the
transfers timeout and the DN drops the connection.
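The symptom described — long transfers occupying all available workers while tiny requests queue behind them — can be reproduced in miniature with a fixed-size thread pool. This is a generic illustration of the starvation pattern, not the DN's actual threading code:

```java
import java.util.concurrent.*;

public class StarvationDemo {
    // Returns roughly how long (ms) a tiny task waits to start when a pool
    // of `workers` threads is already occupied by `workers` long tasks.
    static long smallTaskWaitMillis(int workers, long largeTaskMillis) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> sleep(largeTaskMillis));      // "large transfers"
        }
        long queuedAt = System.nanoTime();
        Future<Long> small = pool.submit(System::nanoTime); // "1-byte read"
        long startedAt = small.get();                       // blocks until a worker frees up
        pool.shutdown();
        return (startedAt - queuedAt) / 1_000_000;
    }

    static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws Exception {
        // With both workers busy for ~300 ms, the small task waits ~300 ms.
        System.out.println("small task waited ~" + smallTaskWaitMillis(2, 300) + " ms");
    }
}
```

Scaled up to multi-gigabyte transfers, the same queuing effect would produce exactly the multi-second stalls seen in the repro above.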





[jira] [Created] (HDFS-8697) Refactor DecommissionManager: more generic method names and misc cleanup

2015-06-29 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8697:
---

 Summary: Refactor DecommissionManager: more generic method names 
and misc cleanup
 Key: HDFS-8697
 URL: https://issues.apache.org/jira/browse/HDFS-8697
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: namenode
Affects Versions: 2.7.0
Reporter: Zhe Zhang
Assignee: Zhe Zhang


This JIRA merges the changes in {{DecommissionManager}} from the HDFS-7285 
branch, including changing a few method names to be more generic 
({{replicated}} -> {{stored}}), and some cleanups.





[jira] [Created] (HDFS-8698) Add "-direct" flag option for fs copy so that user can choose not to create "._COPYING_" file

2015-06-29 Thread Chen He (JIRA)
Chen He created HDFS-8698:
-

 Summary: Add "-direct" flag option for fs copy so that user can 
choose not to create "._COPYING_" file
 Key: HDFS-8698
 URL: https://issues.apache.org/jira/browse/HDFS-8698
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: fs
Affects Versions: 2.7.0
Reporter: Chen He


The CLI uses CommandWithDestination.java, which appends "._COPYING_" to the file 
name while it performs the copy. For blobstores like S3 and Swift, creating the 
"._COPYING_" file and then renaming it is expensive. A "-direct" flag would let 
users avoid the "._COPYING_" file.
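The two-step behavior being discussed can be sketched with local files (hypothetical helper names; the real shell logic lives in CommandWithDestination):

```java
import java.io.IOException;
import java.nio.file.*;

public class CopyDemo {
    // Current shell behavior (conceptually): write to "<name>._COPYING_",
    // then rename into place on success, so readers never see a partial file.
    static void copyViaTemp(Path src, Path dst) throws IOException {
        Path tmp = dst.resolveSibling(dst.getFileName() + "._COPYING_");
        Files.copy(src, tmp, StandardCopyOption.REPLACE_EXISTING);
        Files.move(tmp, dst, StandardCopyOption.REPLACE_EXISTING);
    }

    // What a hypothetical -direct flag would do: a single write, no rename.
    // Rename is cheap on HDFS, but on object stores like S3 and Swift it is
    // typically a server-side copy plus delete, which this sketch avoids.
    static void copyDirect(Path src, Path dst) throws IOException {
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("copydemo");
        Path src = Files.writeString(dir.resolve("src.txt"), "hello");
        copyViaTemp(src, dir.resolve("a.txt"));
        copyDirect(src, dir.resolve("b.txt"));
        System.out.println(Files.readString(dir.resolve("a.txt")));
    }
}
```

The trade-off the flag would expose: -direct saves the rename cost on blobstores but gives up the atomic-publish guarantee the temp file provides.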





Re: [VOTE] Release Apache Hadoop 2.7.1 RC0

2015-06-29 Thread Devaraj K
+1 (non-binding)

Deployed in a 3 node cluster and ran some Yarn Apps and MR examples, works
fine.




-- 


Thanks
Devaraj K


Re: [VOTE] Release Apache Hadoop 2.7.1 RC0

2015-06-29 Thread Zhijie Shen
+1 (binding)

Built from source, deployed a single node cluster and tried some MR jobs.

- Zhijie
