[jira] [Commented] (HDFS-7302) namenode -rollingUpgrade downgrade may finalize a rolling upgrade

2015-02-19 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327090#comment-14327090
 ] 

Tsz Wo Nicholas Sze commented on HDFS-7302:
---

Yes.  We should print a warning message when -rollingUpgrade downgrade is 
used.  The message could simply point to the documentation.
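As an editorial aside, a minimal sketch of what such a warning might look like follows; the class, method, and message wording here are hypothetical illustrations, not the committed patch, which would live in the NameNode startup path.

```java
// Hypothetical helper illustrating the proposed warning. The real change
// would be wired into NameNode startup option parsing; names are made up.
public class RollingUpgradeDowngradeWarning {

  // Returns the warning to print when "-rollingUpgrade downgrade" is used,
  // or null for any other startup option.
  static String warningFor(String startupOption) {
    if ("-rollingUpgrade downgrade".equalsIgnoreCase(startupOption)) {
      return "WARNING: \"-rollingUpgrade downgrade\" run with the new software "
          + "may finalize the ongoing rolling upgrade. "
          + "See the rolling upgrade documentation before proceeding.";
    }
    return null;
  }

  public static void main(String[] args) {
    System.out.println(warningFor("-rollingUpgrade downgrade"));
  }
}
```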

 namenode -rollingUpgrade downgrade may finalize a rolling upgrade
 -

 Key: HDFS-7302
 URL: https://issues.apache.org/jira/browse/HDFS-7302
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Kai Sasaki

 The namenode startup option -rollingUpgrade downgrade is originally 
 designed for downgrading a cluster.  However, running namenode -rollingUpgrade 
 downgrade with the new software could result in finalizing the ongoing 
 rolling upgrade.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7656) Expose truncate API for HDFS httpfs

2015-02-19 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327116#comment-14327116
 ] 

Konstantin Shvachko commented on HDFS-7656:
---

Looks good to me, including the {{concat}} corrections. +1
[~tucu00] do you want to take a look here?

 Expose truncate API for HDFS httpfs
 ---

 Key: HDFS-7656
 URL: https://issues.apache.org/jira/browse/HDFS-7656
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 3.0.0

 Attachments: HDFS-7656.001.patch


 This JIRA is to expose truncate API for Web HDFS.
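As an editorial note, a truncate request against an httpfs server would follow the WebHDFS REST convention; the host, port, path, and user below are placeholders, and the exact parameter handling should be checked against the committed patch.

```
POST http://httpfs-host:14000/webhdfs/v1/user/alice/data.txt?op=TRUNCATE&newlength=1024&user.name=alice
```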





[jira] [Updated] (HDFS-5688) Wire-encription in QJM

2015-02-19 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated HDFS-5688:
--
Priority: Major  (was: Blocker)

 Wire-encription in QJM
 --

 Key: HDFS-5688
 URL: https://issues.apache.org/jira/browse/HDFS-5688
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, journal-node, security
Affects Versions: 2.2.0
Reporter: Juan Carlos Fernandez
  Labels: security
 Attachments: core-site.xml, hdfs-site.xml, jaas.conf, journal.xml, 
 namenode.xml, ssl-client.xml, ssl-server.xml


 When HA is implemented with QJM and Kerberos is used, it is not possible to 
 enable wire encryption.
 If the property hadoop.rpc.protection is set to anything other than 
 authentication, it does not work properly, and the following error occurs:
 ERROR security.UserGroupInformation: PriviledgedActionException 
 as:principal@REALM (auth:KERBEROS) cause:javax.security.sasl.SaslException: 
 No common protection layer between client and server
 With NFS as shared storage, everything works like a charm.
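For context, RPC wire encryption (including QJM edit-log traffic) is driven by a core-site.xml property; the fragment below is an editorial example, with "privacy" shown as one of the standard values (authentication, integrity, privacy).

```xml
<!-- Example core-site.xml fragment: "privacy" requests authentication,
     integrity checking, and encryption for Hadoop RPC. All NameNodes and
     JournalNodes must use a compatible value, or SASL negotiation fails
     with "No common protection layer between client and server". -->
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>
```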





[jira] [Commented] (HDFS-5688) Wire-encription in QJM

2015-02-19 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327150#comment-14327150
 ] 

Harsh J commented on HDFS-5688:
---

This seems to be working just fine on my 2.5.0 cluster. Both the JNs and NNs have 
the same hadoop.rpc.protection configs, which avoids the error.

Unless you're still facing this [~jucaf], I'd propose we close this as 'Cannot 
Reproduce'.

 Wire-encription in QJM
 --

 Key: HDFS-5688
 URL: https://issues.apache.org/jira/browse/HDFS-5688
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, journal-node, security
Affects Versions: 2.2.0
Reporter: Juan Carlos Fernandez
Priority: Blocker
  Labels: security
 Attachments: core-site.xml, hdfs-site.xml, jaas.conf, journal.xml, 
 namenode.xml, ssl-client.xml, ssl-server.xml


 When HA is implemented with QJM and Kerberos is used, it is not possible to 
 enable wire encryption.
 If the property hadoop.rpc.protection is set to anything other than 
 authentication, it does not work properly, and the following error occurs:
 ERROR security.UserGroupInformation: PriviledgedActionException 
 as:principal@REALM (auth:KERBEROS) cause:javax.security.sasl.SaslException: 
 No common protection layer between client and server
 With NFS as shared storage, everything works like a charm.





[jira] [Commented] (HDFS-7813) TestDFSHAAdminMiniCluster#testFencer testcase is failing ferquently

2015-02-19 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327102#comment-14327102
 ] 

Rakesh R commented on HDFS-7813:


Thanks [~brahmareddy]

 TestDFSHAAdminMiniCluster#testFencer testcase is failing ferquently
 ---

 Key: HDFS-7813
 URL: https://issues.apache.org/jira/browse/HDFS-7813
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, test
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7813-001.patch


 Following is the failure trace.
 {code}
 java.lang.AssertionError: expected:<0> but was:<-1>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testFencer(TestDFSHAAdminMiniCluster.java:163)
 {code}





[jira] [Updated] (HDFS-7788) Post-2.6 namenode may not start up with an image containing inodes created with an old release.

2015-02-19 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HDFS-7788:
-
Attachment: rushabh.patch

I turned Rushabh's patch into a binary patch using regular diff. Let's see 
if the precommit can handle it.

 Post-2.6 namenode may not start up with an image containing inodes created 
 with an old release.
 ---

 Key: HDFS-7788
 URL: https://issues.apache.org/jira/browse/HDFS-7788
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Rushabh S Shah
Priority: Blocker
 Attachments: HDFS-7788-binary.patch, rushabh.patch


 Before HDFS-4305, which was fixed in 2.1.0-beta, clients could specify 
 arbitrarily small preferred block size for a file including 0. This was 
 normally done by faulty clients or failed creates, but it was possible.
 Until 2.5, reading a fsimage containing inodes with 0 byte preferred block 
 size was allowed. So if a fsimage contained such an inode, the namenode would 
 come up fine.  In 2.6, the preferred block size is required to be > 0. Because 
 of this change, an image that worked with 2.5 may not work with 2.6.
 If a cluster ever ran a version of Hadoop earlier than 2.1.0-beta, it is at 
 risk even if it worked fine with 2.5.
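The 2.6 invariant described above can be illustrated with a toy check; this is an editorial sketch only, with made-up class and method names, since the real validation happens while the fsimage is loaded.

```java
// Toy illustration of the 2.6-era invariant: a loaded inode's preferred
// block size must be strictly positive. Names are hypothetical and do not
// correspond to the actual fsimage-loading code.
public class PreferredBlockSizeInvariant {

  static boolean acceptedBy26(long preferredBlockSize) {
    // 2.5 and earlier tolerated 0 here; 2.6 requires > 0.
    return preferredBlockSize > 0;
  }

  public static void main(String[] args) {
    // An inode created by a faulty pre-2.1.0-beta client:
    System.out.println(acceptedBy26(0L));          // false: the NN rejects it
    // A normal file with the common 128 MB default block size:
    System.out.println(acceptedBy26(134217728L));  // true
  }
}
```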





[jira] [Updated] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7806:
-
Attachment: HDFS-7806.01.patch

Thanks [~arpitagarwal] for the review. I've updated the patch based on the 
feedback.


 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7806.00.patch, HDFS-7806.01.patch


 To report per-storage-type quota and usage information from hadoop fs -count 
 -q or hdfs dfs -count -q, we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common (org.apache.hadoop.fs), because ContentSummary and 
 FileSystem#getContentSummary() are in the org.apache.hadoop.fs package.





[jira] [Commented] (HDFS-7788) Post-2.6 namenode may not start up with an image containing inodes created with an old release.

2015-02-19 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14328103#comment-14328103
 ] 

Kihwal Lee commented on HDFS-7788:
--

The patch was applied fine by the precommit.
https://builds.apache.org/job/PreCommit-HDFS-Build/9621

 Post-2.6 namenode may not start up with an image containing inodes created 
 with an old release.
 ---

 Key: HDFS-7788
 URL: https://issues.apache.org/jira/browse/HDFS-7788
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Rushabh S Shah
Priority: Blocker
 Attachments: HDFS-7788-binary.patch, rushabh.patch


 Before HDFS-4305, which was fixed in 2.1.0-beta, clients could specify 
 arbitrarily small preferred block size for a file including 0. This was 
 normally done by faulty clients or failed creates, but it was possible.
 Until 2.5, reading a fsimage containing inodes with 0 byte preferred block 
 size was allowed. So if a fsimage contained such an inode, the namenode would 
 come up fine.  In 2.6, the preferred block size is required to be > 0. Because 
 of this change, an image that worked with 2.5 may not work with 2.6.
 If a cluster ever ran a version of Hadoop earlier than 2.1.0-beta, it is at 
 risk even if it worked fine with 2.5.





[jira] [Commented] (HDFS-7788) Post-2.6 namenode may not start up with an image containing inodes created with an old release.

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14328124#comment-14328124
 ] 

Hadoop QA commented on HDFS-7788:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699735/rushabh.patch
  against trunk revision d49ae72.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9621//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9621//console

This message is automatically generated.

 Post-2.6 namenode may not start up with an image containing inodes created 
 with an old release.
 ---

 Key: HDFS-7788
 URL: https://issues.apache.org/jira/browse/HDFS-7788
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Kihwal Lee
Assignee: Rushabh S Shah
Priority: Blocker
 Attachments: HDFS-7788-binary.patch, rushabh.patch


 Before HDFS-4305, which was fixed in 2.1.0-beta, clients could specify 
 arbitrarily small preferred block size for a file including 0. This was 
 normally done by faulty clients or failed creates, but it was possible.
 Until 2.5, reading a fsimage containing inodes with 0 byte preferred block 
 size was allowed. So if a fsimage contained such an inode, the namenode would 
 come up fine.  In 2.6, the preferred block size is required to be > 0. Because 
 of this change, an image that worked with 2.5 may not work with 2.6.
 If a cluster ever ran a version of Hadoop earlier than 2.1.0-beta, it is at 
 risk even if it worked fine with 2.5.





[jira] [Commented] (HDFS-7656) Expose truncate API for HDFS httpfs

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327273#comment-14327273
 ] 

Hudson commented on HDFS-7656:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #843 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/843/])
HDFS-7656. Expose truncate API for HDFS httpfs. (yliu) (yliu: rev 
2fd02afeca3710f487b6a039a65c1a666322b229)
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java


 Expose truncate API for HDFS httpfs
 ---

 Key: HDFS-7656
 URL: https://issues.apache.org/jira/browse/HDFS-7656
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7656.001.patch


 This JIRA is to expose truncate API for Web HDFS.





[jira] [Commented] (HDFS-7808) Remove obsolete -ns options in in DFSHAAdmin.java

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327278#comment-14327278
 ] 

Hudson commented on HDFS-7808:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #843 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/843/])
HDFS-7808. Remove obsolete -ns options in in DFSHAAdmin.java. Contributed by 
Arshad Mohammad. (wheat9: rev 9a3e29208740da94d0cca5bb1c8163bea60d1387)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java


 Remove obsolete -ns options in in DFSHAAdmin.java
 -

 Key: HDFS-7808
 URL: https://issues.apache.org/jira/browse/HDFS-7808
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7808-1.patch


 After the HDFS-7324 fix, the following piece of code became unused. It should 
 be removed.
 {code}
 int i = 0;
 String cmd = argv[i++];
 if ("-ns".equals(cmd)) {
   if (i == argv.length) {
     errOut.println("Missing nameservice ID");
     printUsage(errOut);
     return -1;
   }
   nameserviceId = argv[i++];
   if (i >= argv.length) {
     errOut.println("Missing command");
     printUsage(errOut);
     return -1;
   }
   argv = Arrays.copyOfRange(argv, i, argv.length);
 }
 {code}





[jira] [Commented] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327282#comment-14327282
 ] 

Hudson commented on HDFS-7804:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #843 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/843/])
HDFS-7804. correct the haadmin command usage in 
#HDFSHighAvailabilityWithQJM.html (Brahma Reddy Battula via umamahesh) 
(umamahesh: rev 2ecea5ab741f62e8fd0449251f2ea4a5759f4e77)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804-002.patch, HDFS-7804-003.patch, 
 HDFS-7804-branch-2-002.patch, HDFS-7804.patch


  *Currently it is given as follows:* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs haadmin{color}* 





[jira] [Commented] (HDFS-7772) Document hdfs balancer -exclude/-include option in HDFSCommands.html

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327276#comment-14327276
 ] 

Hudson commented on HDFS-7772:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #843 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/843/])
HDFS-7772. Document hdfs balancer -exclude/-include option in 
HDFSCommands.html. Contributed by Xiaoyu Yao. (cnauroth: rev 
2aa9979a713ab79853885264ad7739c48226aaa4)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md


 Document hdfs balancer -exclude/-include option in HDFSCommands.html
 

 Key: HDFS-7772
 URL: https://issues.apache.org/jira/browse/HDFS-7772
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7772.0.patch, HDFS-7772.1.patch, 
 HDFS-7772.1.screen.png, HDFS-7772.2.patch, HDFS-7772.2.screen.png, 
 HDFS-7772.3.patch, HDFS-7772.branch2.0.patch


 The hdfs balancer -exclude/-include options are displayed in the command-line 
 help but not on the HTML documentation page. This JIRA is opened to add them.
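As an editorial illustration of the options being documented, typical invocations might look like the following; the hostnames and file path are placeholders, and the exact flag syntax should be checked against the balancer's usage string.

```
hdfs balancer -threshold 10 -exclude datanode1.example.com,datanode2.example.com
hdfs balancer -include -f /tmp/include-hosts.txt
```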





[jira] [Updated] (HDFS-7656) Expose truncate API for HDFS httpfs

2015-02-19 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HDFS-7656:
-
  Resolution: Fixed
   Fix Version/s: (was: 3.0.0)
  2.7.0
Target Version/s: 2.7.0  (was: 3.0.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks [~shv] for the review, I committed it to trunk and branch-2.
I can address [~tucu00]'s comments in a follow-up if he has any :)

 Expose truncate API for HDFS httpfs
 ---

 Key: HDFS-7656
 URL: https://issues.apache.org/jira/browse/HDFS-7656
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7656.001.patch


 This JIRA is to expose truncate API for Web HDFS.





[jira] [Commented] (HDFS-7813) TestDFSHAAdminMiniCluster#testFencer testcase is failing ferquently

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327204#comment-14327204
 ] 

Hadoop QA commented on HDFS-7813:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699618/HDFS-7813-001.patch
  against trunk revision 946456c.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9618//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9618//console

This message is automatically generated.

 TestDFSHAAdminMiniCluster#testFencer testcase is failing ferquently
 ---

 Key: HDFS-7813
 URL: https://issues.apache.org/jira/browse/HDFS-7813
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, test
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7813-001.patch


 Following is the failure trace.
 {code}
 java.lang.AssertionError: expected:<0> but was:<-1>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testFencer(TestDFSHAAdminMiniCluster.java:163)
 {code}





[jira] [Commented] (HDFS-7772) Document hdfs balancer -exclude/-include option in HDFSCommands.html

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327292#comment-14327292
 ] 

Hudson commented on HDFS-7772:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #109 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/109/])
HDFS-7772. Document hdfs balancer -exclude/-include option in 
HDFSCommands.html. Contributed by Xiaoyu Yao. (cnauroth: rev 
2aa9979a713ab79853885264ad7739c48226aaa4)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Document hdfs balancer -exclude/-include option in HDFSCommands.html
 

 Key: HDFS-7772
 URL: https://issues.apache.org/jira/browse/HDFS-7772
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7772.0.patch, HDFS-7772.1.patch, 
 HDFS-7772.1.screen.png, HDFS-7772.2.patch, HDFS-7772.2.screen.png, 
 HDFS-7772.3.patch, HDFS-7772.branch2.0.patch


 The hdfs balancer -exclude/-include options are displayed in the command-line 
 help but not on the HTML documentation page. This JIRA is opened to add them.





[jira] [Commented] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327298#comment-14327298
 ] 

Hudson commented on HDFS-7804:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #109 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/109/])
HDFS-7804. correct the haadmin command usage in 
#HDFSHighAvailabilityWithQJM.html (Brahma Reddy Battula via umamahesh) 
(umamahesh: rev 2ecea5ab741f62e8fd0449251f2ea4a5759f4e77)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md


 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804-002.patch, HDFS-7804-003.patch, 
 HDFS-7804-branch-2-002.patch, HDFS-7804.patch


  *Currently it is given as follows:* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs haadmin{color}* 





[jira] [Commented] (HDFS-7808) Remove obsolete -ns options in in DFSHAAdmin.java

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327294#comment-14327294
 ] 

Hudson commented on HDFS-7808:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #109 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/109/])
HDFS-7808. Remove obsolete -ns options in in DFSHAAdmin.java. Contributed by 
Arshad Mohammad. (wheat9: rev 9a3e29208740da94d0cca5bb1c8163bea60d1387)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java


 Remove obsolete -ns options in in DFSHAAdmin.java
 -

 Key: HDFS-7808
 URL: https://issues.apache.org/jira/browse/HDFS-7808
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7808-1.patch


 After the HDFS-7324 fix, the following piece of code became unused. It should 
 be removed.
 {code}
 int i = 0;
 String cmd = argv[i++];
 if ("-ns".equals(cmd)) {
   if (i == argv.length) {
     errOut.println("Missing nameservice ID");
     printUsage(errOut);
     return -1;
   }
   nameserviceId = argv[i++];
   if (i >= argv.length) {
     errOut.println("Missing command");
     printUsage(errOut);
     return -1;
   }
   argv = Arrays.copyOfRange(argv, i, argv.length);
 }
 {code}





[jira] [Commented] (HDFS-7656) Expose truncate API for HDFS httpfs

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327289#comment-14327289
 ] 

Hudson commented on HDFS-7656:
--

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #109 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/109/])
HDFS-7656. Expose truncate API for HDFS httpfs. (yliu) (yliu: rev 
2fd02afeca3710f487b6a039a65c1a666322b229)
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java


 Expose truncate API for HDFS httpfs
 ---

 Key: HDFS-7656
 URL: https://issues.apache.org/jira/browse/HDFS-7656
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7656.001.patch


 This JIRA is to expose truncate API for Web HDFS.





[jira] [Commented] (HDFS-7656) Expose truncate API for HDFS httpfs

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327166#comment-14327166
 ] 

Hudson commented on HDFS-7656:
--

FAILURE: Integrated in Hadoop-trunk-Commit #7155 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7155/])
HDFS-7656. Expose truncate API for HDFS httpfs. (yliu) (yliu: rev 
2fd02afeca3710f487b6a039a65c1a666322b229)
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java


 Expose truncate API for HDFS httpfs
 ---

 Key: HDFS-7656
 URL: https://issues.apache.org/jira/browse/HDFS-7656
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7656.001.patch


 This JIRA is to expose truncate API for Web HDFS.





[jira] [Commented] (HDFS-7731) Can not start HA namenode with security enabled

2015-02-19 Thread donhoff_h (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14327395#comment-14327395
 ] 

donhoff_h commented on HDFS-7731:
-

Hi, Zheng Kai and surendra

The problem was indeed caused by the HTTP SPNEGO principal. I did not know that 
it must be capitalized.

Thanks very much 

 Can not start HA namenode with security enabled
 ---

 Key: HDFS-7731
 URL: https://issues.apache.org/jira/browse/HDFS-7731
 Project: Hadoop HDFS
  Issue Type: Task
  Components: ha, journal-node, namenode, security
Affects Versions: 2.5.2
 Environment: Redhat6.2 Hadoop2.5.2
Reporter: donhoff_h
  Labels: hadoop, security

 I am converting a secure non-HA cluster into a secure HA cluster. After the 
 configuration was completed and all the journalnodes were started, I executed 
 the following commands on the original NameNode:
 1. hdfs namenode -initializeSharedEdits   #this step succeeded
 2. hadoop-daemon.sh start namenode  # this step failed.
 So the namenode cannot be started. I verified that my principals are correct, 
 and if I change back to the secure non-HA mode, the namenode can be started.
 The namenode log only reported the following errors, and I could not find the 
 cause from this log:
 2015-02-03 17:42:06,020 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: 
 Start loading edits file 
 http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3,
  
 http://bgdt01.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3
 2015-02-03 17:42:06,024 INFO 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
 stream 
 'http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3,
  
 http://bgdt01.dev.hrb:8480/getJournal?jid=bgdt-dev-hrb&segmentTxId=68994&storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3'
  to transaction ID 68994
 2015-02-03 17:42:06,024 INFO 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: Fast-forwarding 
 stream 
 'http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrbsegmentTxId=68994storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3'
  to transaction ID 68994
 2015-02-03 17:42:06,154 ERROR 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream: caught exception 
 initializing 
 http://bgdt04.dev.hrb:8480/getJournal?jid=bgdt-dev-hrbsegmentTxId=68994storageInfo=-57%3A876630880%3A0%3ACID-ea4c77aa-882d-4adf-a347-42f1344421f3
 java.io.IOException: 
 org.apache.hadoop.security.authentication.client.AuthenticationException: 
 GSSException: No valid credentials provided (Mechanism level: Server not 
 found in Kerberos database (7) - UNKNOWN_SERVER)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:464)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog$1.run(EditLogFileInputStream.java:456)
   at java.security.AccessController.doPrivileged(Native Method)
   at javax.security.auth.Subject.doAs(Subject.java:415)
   at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
   at 
 org.apache.hadoop.security.SecurityUtil.doAsUser(SecurityUtil.java:444)
   at 
 org.apache.hadoop.security.SecurityUtil.doAsCurrentUser(SecurityUtil.java:438)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream$URLLog.getInputStream(EditLogFileInputStream.java:455)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.init(EditLogFileInputStream.java:141)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOpImpl(EditLogFileInputStream.java:192)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogFileInputStream.nextOp(EditLogFileInputStream.java:250)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
   at 
 org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
   at 
 org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
   at 
 org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
   at 
 

[jira] [Updated] (HDFS-7360) Test libhdfs3 against MiniDFSCluster

2015-02-19 Thread Zhanwei Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhanwei Wang updated HDFS-7360:
---
Attachment: HDFS-7360-pnative.002.patch

 Test libhdfs3 against MiniDFSCluster
 

 Key: HDFS-7360
 URL: https://issues.apache.org/jira/browse/HDFS-7360
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Zhanwei Wang
Priority: Critical
 Attachments: HDFS-7360-pnative.002.patch, HDFS-7360.patch


 Currently the branch has enough code to interact with HDFS servers. We should 
 test the code against MiniDFSCluster to ensure the correctness of the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7731) Can not start HA namenode with security enabled

2015-02-19 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327398#comment-14327398
 ] 

Kai Zheng commented on HDFS-7731:
-

Glad you got it working.

 Can not start HA namenode with security enabled
 ---

 Key: HDFS-7731
 URL: https://issues.apache.org/jira/browse/HDFS-7731
 Project: Hadoop HDFS
  Issue Type: Task
  Components: ha, journal-node, namenode, security
Affects Versions: 2.5.2
 Environment: Redhat6.2 Hadoop2.5.2
Reporter: donhoff_h
Assignee: donhoff_h
  Labels: hadoop, security


[jira] [Resolved] (HDFS-7731) Can not start HA namenode with security enabled

2015-02-19 Thread donhoff_h (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

donhoff_h resolved HDFS-7731.
-
Resolution: Done

 Can not start HA namenode with security enabled
 ---

 Key: HDFS-7731
 URL: https://issues.apache.org/jira/browse/HDFS-7731
 Project: Hadoop HDFS
  Issue Type: Task
  Components: ha, journal-node, namenode, security
Affects Versions: 2.5.2
 Environment: Redhat6.2 Hadoop2.5.2
Reporter: donhoff_h
Assignee: donhoff_h
  Labels: hadoop, security


[jira] [Assigned] (HDFS-7731) Can not start HA namenode with security enabled

2015-02-19 Thread donhoff_h (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

donhoff_h reassigned HDFS-7731:
---

Assignee: donhoff_h

 Can not start HA namenode with security enabled
 ---

 Key: HDFS-7731
 URL: https://issues.apache.org/jira/browse/HDFS-7731
 Project: Hadoop HDFS
  Issue Type: Task
  Components: ha, journal-node, namenode, security
Affects Versions: 2.5.2
 Environment: Redhat6.2 Hadoop2.5.2
Reporter: donhoff_h
Assignee: donhoff_h
  Labels: hadoop, security


[jira] [Commented] (HDFS-7019) Add unit test for libhdfs3

2015-02-19 Thread Zhanwei Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327417#comment-14327417
 ] 

Zhanwei Wang commented on HDFS-7019:


Hi [~thanhdo]

We should reuse the existing unit tests; this JIRA is used to review those 
tests.

 Add unit test for libhdfs3
 --

 Key: HDFS-7019
 URL: https://issues.apache.org/jira/browse/HDFS-7019
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7019.patch


 Add unit test for libhdfs3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7360) Test libhdfs3 against MiniDFSCluster

2015-02-19 Thread Zhanwei Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327414#comment-14327414
 ] 

Zhanwei Wang commented on HDFS-7360:


In the new patch, I integrate libhdfs3 into the Hadoop Maven build and enable 
the function tests.

To build libhdfs3 and run the libhdfs3 function tests, run {{mvn install 
-Pnative -Drequire.libhdfs3=true}}.
To build libhdfs3 only, run {{mvn install -Pnative -Drequire.libhdfs3=true 
-DskipTests}}.
The {{-Dlibhdfs3.dependencies=/path/to/dep1:/path/to/dep2}} option can be used 
to specify the directories of the dependencies.

The libhdfs3 function tests now run against MiniDFSCluster, so after this 
commit we can set up CI for libhdfs3 and run the tests on each commit.
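Collected as a script, the invocations above look like this (a sketch: it assumes a Hadoop source checkout with the libhdfs3 branch and its native profile applied, and the dependency paths are illustrative):

```shell
# Build libhdfs3 and run its function tests against MiniDFSCluster:
mvn install -Pnative -Drequire.libhdfs3=true

# Build libhdfs3 only, skipping the tests:
mvn install -Pnative -Drequire.libhdfs3=true -DskipTests

# Point the build at locally installed dependencies
# (the /opt/deps/... paths are illustrative):
mvn install -Pnative -Drequire.libhdfs3=true \
    -Dlibhdfs3.dependencies=/opt/deps/protobuf:/opt/deps/libxml2
```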

 Test libhdfs3 against MiniDFSCluster
 

 Key: HDFS-7360
 URL: https://issues.apache.org/jira/browse/HDFS-7360
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Haohui Mai
Assignee: Zhanwei Wang
Priority: Critical
 Attachments: HDFS-7360-pnative.002.patch, HDFS-7360.patch


 Currently the branch has enough code to interact with HDFS servers. We should 
 test the code against MiniDFSCluster to ensure the correctness of the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7772) Document hdfs balancer -exclude/-include option in HDFSCommands.html

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327617#comment-14327617
 ] 

Hudson commented on HDFS-7772:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2060 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2060/])
HDFS-7772. Document hdfs balancer -exclude/-include option in 
HDFSCommands.html. Contributed by Xiaoyu Yao. (cnauroth: rev 
2aa9979a713ab79853885264ad7739c48226aaa4)
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Document hdfs balancer -exclude/-include option in HDFSCommands.html
 

 Key: HDFS-7772
 URL: https://issues.apache.org/jira/browse/HDFS-7772
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7772.0.patch, HDFS-7772.1.patch, 
 HDFS-7772.1.screen.png, HDFS-7772.2.patch, HDFS-7772.2.screen.png, 
 HDFS-7772.3.patch, HDFS-7772.branch2.0.patch


 The hdfs balancer -exclude/-include options are displayed in the command-line 
 help but not in the HTML documentation page. This JIRA is opened to add them.
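For illustration, the options being documented are invoked roughly like this (hostnames and the hosts-file path are made up; the flag shapes follow the balancer's command-line help):

```shell
# Run the balancer but skip the listed datanodes
# (hostnames are illustrative):
hdfs balancer -exclude dn1.example.com,dn2.example.com

# Balance only the datanodes named in a hosts file
# (file path is illustrative):
hdfs balancer -include -f /etc/hadoop/balancer-hosts.txt
```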



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7656) Expose truncate API for HDFS httpfs

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327614#comment-14327614
 ] 

Hudson commented on HDFS-7656:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2060 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2060/])
HDFS-7656. Expose truncate API for HDFS httpfs. (yliu) (yliu: rev 
2fd02afeca3710f487b6a039a65c1a666322b229)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java


 Expose truncate API for HDFS httpfs
 ---

 Key: HDFS-7656
 URL: https://issues.apache.org/jira/browse/HDFS-7656
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7656.001.patch


 This JIRA is to expose truncate API for Web HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327623#comment-14327623
 ] 

Hudson commented on HDFS-7804:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2060 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2060/])
HDFS-7804. correct the haadmin command usage in 
#HDFSHighAvailabilityWithQJM.html (Brahma Reddy Battula via umamahesh) 
(umamahesh: rev 2ecea5ab741f62e8fd0449251f2ea4a5759f4e77)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804-002.patch, HDFS-7804-003.patch, 
 HDFS-7804-branch-2-002.patch, HDFS-7804.patch


  *Currently it is given as the following:* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs haadmin{color}* 
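For illustration, the corrected entry point is invoked like this (the service IDs nn1/nn2 are illustrative):

```shell
# Query the HA state of a NameNode:
hdfs haadmin -getServiceState nn1

# Initiate a failover between two NameNodes:
hdfs haadmin -failover nn1 nn2
```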



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7808) Remove obsolete -ns options in in DFSHAAdmin.java

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327619#comment-14327619
 ] 

Hudson commented on HDFS-7808:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2060 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2060/])
HDFS-7808. Remove obsolete -ns options in in DFSHAAdmin.java. Contributed by 
Arshad Mohammad. (wheat9: rev 9a3e29208740da94d0cca5bb1c8163bea60d1387)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java


 Remove obsolete -ns options in in DFSHAAdmin.java
 -

 Key: HDFS-7808
 URL: https://issues.apache.org/jira/browse/HDFS-7808
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7808-1.patch


 After the HDFS-7324 fix, the following piece of code became unused. It should 
 be removed.
 {code}
 int i = 0;
 String cmd = argv[i++];
 if ("-ns".equals(cmd)) {
   if (i == argv.length) {
     errOut.println("Missing nameservice ID");
     printUsage(errOut);
     return -1;
   }
   nameserviceId = argv[i++];
   if (i >= argv.length) {
     errOut.println("Missing command");
     printUsage(errOut);
     return -1;
   }
   argv = Arrays.copyOfRange(argv, i, argv.length);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7019) Add unit test for libhdfs3

2015-02-19 Thread Thanh Do (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327928#comment-14327928
 ] 

Thanh Do commented on HDFS-7019:


Oh, my bad. I didn't realize that there was a patch for this.

Anyway, some of the existing tests will fail with the Windows build support, so 
they need to be changed accordingly. I'll open another JIRA for this.

 Add unit test for libhdfs3
 --

 Key: HDFS-7019
 URL: https://issues.apache.org/jira/browse/HDFS-7019
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Zhanwei Wang
 Attachments: HDFS-7019.patch


 Add unit test for libhdfs3



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327949#comment-14327949
 ] 

Arpit Agarwal commented on HDFS-7806:
-

Thanks for taking care of this, Xiaoyu. I suggest simplifying the comment in 
hdfs.StorageType as follows:

{code}
 * This class has been deprecated. Applications must use
 * org.apache.hadoop.fs.StorageType instead.
{code}

+1 otherwise, pending Jenkins.


 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7806.00.patch


 To report per storage type quota and usage information from hadoop fs -count 
 -q or hdfs dfs -count -q, we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
 FileSystem#getContentSummary() are in org.apache.hadoop.fs package.
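For context, the quota-reporting command mentioned above is invoked like this today (the path is illustrative; the per-storage-type columns are what this refactoring helps enable):

```shell
# Report name and space quotas plus usage for a directory.
# Columns are roughly: QUOTA, REM_QUOTA, SPACE_QUOTA, REM_SPACE_QUOTA,
# DIR_COUNT, FILE_COUNT, CONTENT_SIZE, PATHNAME.
hadoop fs -count -q /user/alice

# Equivalent form via the hdfs entry point:
hdfs dfs -count -q /user/alice
```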



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7773) Additional metrics in HDFS to be accessed via jmx.

2015-02-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327959#comment-14327959
 ] 

Chris Nauroth commented on HDFS-7773:
-

The failure of {{TestDFSHAAdminMiniCluster}} is unrelated, and it's tracked 
separately in HDFS-7813.

The other failures appear to be the flaky Jenkins behavior we've seen lately, 
resulting in {{NoClassDefFoundError}}.  They don't repro locally for me.

Anu, this is all ready to commit to trunk, but I just remembered we'll need a 
separate patch file for branch-2.  The only difference is that instead of 
modifying Metrics.md, you'll need to make the equivalent changes in 
Metrics.apt.vm.  The documentation was only recently converted to markdown.  
That conversion was only done on trunk, hence the need for a separate patch.  
Thanks!

 Additional metrics in HDFS to be accessed via jmx.
 --

 Key: HDFS-7773
 URL: https://issues.apache.org/jira/browse/HDFS-7773
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Reporter: Anu Engineer
Assignee: Anu Engineer
 Attachments: hdfs-7773.001.patch, hdfs-7773.002.patch, 
 hdfs-7773.003.patch


 We would like the following metrics added to the DataNode and NameNode to 
 improve the Ambari dashboard:
 1) DN disk i/o utilization
 2) DN network i/o utilization
 3) Namenode read operations 
 4) Namenode write operations



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7806:
-
Status: Patch Available  (was: Open)

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7806.00.patch


 To report per storage type quota and usage information from hadoop fs -count 
 -q or hdfs dfs -count -q, we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
 FileSystem#getContentSummary() are in org.apache.hadoop.fs package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7806:
-
Attachment: HDFS-7806.00.patch

Attaching an initial patch that moves StorageType from o.a.h.hdfs to o.a.h.fs. 
A deprecated StorageType enum stub is kept in o.a.h.hdfs to direct applications 
to the StorageType under o.a.h.fs and to update their imports.

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7806.00.patch


 To report per storage type quota and usage information from "hadoop fs -count 
 -q" or "hdfs dfs -count -q", we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
 FileSystem#getContentSummary() are in org.apache.hadoop.fs package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7768) Separate platform-specific functions

2015-02-19 Thread Thanh Do (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thanh Do resolved HDFS-7768.

Resolution: Invalid

Overlapped with HDFS-7188

 Separate platform-specific functions
 ---

 Key: HDFS-7768
 URL: https://issues.apache.org/jira/browse/HDFS-7768
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: hdfs-client
Reporter: Thanh Do
Assignee: Thanh Do

 Current code has several platform-specific parts (e.g., getting environment 
 variables, getting local addresses, printing stack traces). We should separate 
 these parts into platform folders.
 This issue will do just that. Posix systems will be able to compile 
 successfully. Windows will fail to compile due to unimplemented parts; the 
 implementation of the Windows parts will be handled in HDFS-7188. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328200#comment-14328200
 ] 

Hadoop QA commented on HDFS-7806:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699716/HDFS-7806.00.patch
  against trunk revision f0f2992.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 34 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.security.token.delegation.web.TestWebDelegationToken
  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster

  The following test timeouts occurred in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.http.TestHttpCookieFlag

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9619//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9619//console

This message is automatically generated.

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7806.00.patch, HDFS-7806.01.patch


 To report per storage type quota and usage information from "hadoop fs -count 
 -q" or "hdfs dfs -count -q", we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
 FileSystem#getContentSummary() are in org.apache.hadoop.fs package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7435) PB encoding of block reports is very inefficient

2015-02-19 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7435:

Attachment: HDFS-7435.002.patch

Rebase the patch.

bq. I'm trying to performance test a patch that internally segments the 
BlockListAsLongs and correctly outputs the byte buffer.

[~daryn], do you still plan to post your updated patch and performance test 
results? I do not know the details of your new proposal here, but it sounds like 
you will also segment the BlockListAsLongs? 

 PB encoding of block reports is very inefficient
 

 Key: HDFS-7435
 URL: https://issues.apache.org/jira/browse/HDFS-7435
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-7435.000.patch, HDFS-7435.001.patch, 
 HDFS-7435.002.patch, HDFS-7435.patch


 Block reports are encoded as a PB repeating long.  Repeating fields use an 
 {{ArrayList}} with default capacity of 10.  A block report containing tens or 
 hundreds of thousands of longs (3 for each replica) is extremely expensive 
 since the {{ArrayList}} must realloc many times.  Also, decoding repeating 
 fields will box the primitive longs which must then be unboxed.
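The cost pattern described above can be sketched as follows. This is hypothetical illustration code (class and method names are mine, not HDFS or protobuf code): a repeated PB field materializes as a boxed `List<Long>` that starts at the default capacity of 10 and reallocates as it grows, while a primitive `long[]` sized up front avoids both boxing and reallocation.

```java
import java.util.ArrayList;
import java.util.List;

public class BlockReportEncodingSketch {

    // Repeated-field style decode: boxes every long and lets the
    // backing array grow from the default capacity of 10.
    public static List<Long> decodeBoxed(long[] raw) {
        List<Long> out = new ArrayList<>();  // default capacity: 10
        for (long v : raw) {
            out.add(v);                      // autoboxing + occasional realloc
        }
        return out;
    }

    // Primitive decode: one allocation, no boxing, no realloc.
    public static long[] decodePrimitive(long[] raw) {
        long[] out = new long[raw.length];
        System.arraycopy(raw, 0, out, 0, raw.length);
        return out;
    }

    public static void main(String[] args) {
        // ~100k replicas at 3 longs each, as in a large block report.
        long[] report = new long[300_000];
        for (int i = 0; i < report.length; i++) {
            report[i] = i;
        }
        List<Long> boxed = decodeBoxed(report);
        long[] flat = decodePrimitive(report);
        System.out.println(boxed.size() == flat.length); // prints "true"
    }
}
```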



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-02-19 Thread Gera Shegalov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gera Shegalov updated HDFS-7439:

Assignee: Takanobu Asanuma

 Add BlockOpResponseProto's message to DFSClient's exception message
 ---

 Key: HDFS-7439
 URL: https://issues.apache.org/jira/browse/HDFS-7439
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Takanobu Asanuma
Priority: Minor

 When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
 if DFSClient can add BlockOpResponseProto's message to the exception message 
 applications will get. For example, instead of
 {noformat}
 throw new IOException("Got error for OP_READ_BLOCK, self="
 + peer.getLocalAddressString() + ", remote="
 + peer.getRemoteAddressString() + ", for file " + file
 + ", for pool " + block.getBlockPoolId() + " block " 
 + block.getBlockId() + "_" + block.getGenerationStamp());
 {noformat}
 It could be,
 {noformat}
 throw new IOException("Got error for OP_READ_BLOCK, self="
 + peer.getLocalAddressString() + ", remote="
 + peer.getRemoteAddressString() + ", for file " + file
 + ", for pool " + block.getBlockPoolId() + " block " 
 + block.getBlockId() + "_" + block.getGenerationStamp()
 + ", status message " + status.getMessage());
 {noformat}
 We might want to check out all the references to BlockOpResponseProto in 
 DFSClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7814) Fix usage string of storageType parameter for dfsadmin -setSpaceQuota/clrSpaceQuota

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328552#comment-14328552
 ] 

Hadoop QA commented on HDFS-7814:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699792/HDFS-7814.00.patch
  against trunk revision c0d9b93.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.balancer.TestBalancer
  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9624//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9624//console

This message is automatically generated.

 Fix usage string of storageType parameter for dfsadmin 
 -setSpaceQuota/clrSpaceQuota
 -

 Key: HDFS-7814
 URL: https://issues.apache.org/jira/browse/HDFS-7814
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7814.00.patch


 This was found when I was documenting the quota by storage type feature. The 
 current usage string for setting/clearing quota by storage type, which puts the 
 -storageType parameter after the dirnames, is incorrect:
 hdfs dfsadmin -setSpaceQuota/clrSpaceQuota <quota> <dirname>...<dirname> 
 -storageType <storagetype>
 The correct one should be:
 hdfs dfsadmin -setSpaceQuota/clrSpaceQuota <quota> [-storageType 
 <storagetype>] <dirname>...<dirname>
 I will post the fix shortly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7700) Update document for quota by storage type

2015-02-19 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7700:
-
Attachment: HDFS-7700.00.patch

Posting an initial patch for trunk that updates the quota by storage type 
document. I will add a branch-2 patch once the content is reviewed. 

 Update document for quota by storage type
 -

 Key: HDFS-7700
 URL: https://issues.apache.org/jira/browse/HDFS-7700
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7700.00.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7700) Update document for quota by storage type

2015-02-19 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7700:
-
Status: Patch Available  (was: Open)

 Update document for quota by storage type
 -

 Key: HDFS-7700
 URL: https://issues.apache.org/jira/browse/HDFS-7700
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: documentation
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7700.00.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328562#comment-14328562
 ] 

Xiaoyu Yao commented on HDFS-7806:
--

For the v2 patch, the test results show no failures, but Jenkins reported -1 on 
core tests: the test build failed in hadoop-hdfs-project/hadoop-hdfs. 

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7806.00.patch, HDFS-7806.01.patch, 
 HDFS-7806.02.patch


 To report per storage type quota and usage information from "hadoop fs -count 
 -q" or "hdfs dfs -count -q", we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
 FileSystem#getContentSummary() are in org.apache.hadoop.fs package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7435) PB encoding of block reports is very inefficient

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328566#comment-14328566
 ] 

Hadoop QA commented on HDFS-7435:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699754/HDFS-7435.002.patch
  against trunk revision c0d9b93.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 11 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHDFS
  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9625//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9625//console

This message is automatically generated.

 PB encoding of block reports is very inefficient
 

 Key: HDFS-7435
 URL: https://issues.apache.org/jira/browse/HDFS-7435
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-7435.000.patch, HDFS-7435.001.patch, 
 HDFS-7435.002.patch, HDFS-7435.patch


 Block reports are encoded as a PB repeating long.  Repeating fields use an 
 {{ArrayList}} with default capacity of 10.  A block report containing tens or 
 hundreds of thousands of longs (3 for each replica) is extremely expensive 
 since the {{ArrayList}} must realloc many times.  Also, decoding repeating 
 fields will box the primitive longs which must then be unboxed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-02-19 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-7439:
---
Assignee: (was: Takanobu Asanuma)

 Add BlockOpResponseProto's message to DFSClient's exception message
 ---

 Key: HDFS-7439
 URL: https://issues.apache.org/jira/browse/HDFS-7439
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Priority: Minor

 When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
 if DFSClient can add BlockOpResponseProto's message to the exception message 
 applications will get. For example, instead of
 {noformat}
 throw new IOException("Got error for OP_READ_BLOCK, self="
 + peer.getLocalAddressString() + ", remote="
 + peer.getRemoteAddressString() + ", for file " + file
 + ", for pool " + block.getBlockPoolId() + " block " 
 + block.getBlockId() + "_" + block.getGenerationStamp());
 {noformat}
 It could be,
 {noformat}
 throw new IOException("Got error for OP_READ_BLOCK, self="
 + peer.getLocalAddressString() + ", remote="
 + peer.getRemoteAddressString() + ", for file " + file
 + ", for pool " + block.getBlockPoolId() + " block " 
 + block.getBlockId() + "_" + block.getGenerationStamp()
 + ", status message " + status.getMessage());
 {noformat}
 We might want to check out all the references to BlockOpResponseProto in 
 DFSClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-02-19 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma reassigned HDFS-7439:
--

Assignee: Takanobu Asanuma

 Add BlockOpResponseProto's message to DFSClient's exception message
 ---

 Key: HDFS-7439
 URL: https://issues.apache.org/jira/browse/HDFS-7439
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Assignee: Takanobu Asanuma
Priority: Minor

 When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
 if DFSClient can add BlockOpResponseProto's message to the exception message 
 applications will get. For example, instead of
 {noformat}
 throw new IOException("Got error for OP_READ_BLOCK, self="
 + peer.getLocalAddressString() + ", remote="
 + peer.getRemoteAddressString() + ", for file " + file
 + ", for pool " + block.getBlockPoolId() + " block " 
 + block.getBlockId() + "_" + block.getGenerationStamp());
 {noformat}
 It could be,
 {noformat}
 throw new IOException("Got error for OP_READ_BLOCK, self="
 + peer.getLocalAddressString() + ", remote="
 + peer.getRemoteAddressString() + ", for file " + file
 + ", for pool " + block.getBlockPoolId() + " block " 
 + block.getBlockId() + "_" + block.getGenerationStamp()
 + ", status message " + status.getMessage());
 {noformat}
 We might want to check out all the references to BlockOpResponseProto in 
 DFSClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7813) TestDFSHAAdminMiniCluster#testFencer testcase is failing frequently

2015-02-19 Thread Rakesh R (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328543#comment-14328543
 ] 

Rakesh R commented on HDFS-7813:


Can anyone help review this? Thanks!

 TestDFSHAAdminMiniCluster#testFencer testcase is failing frequently
 ---

 Key: HDFS-7813
 URL: https://issues.apache.org/jira/browse/HDFS-7813
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, test
Reporter: Rakesh R
Assignee: Rakesh R
 Attachments: HDFS-7813-001.patch


 Following is the failure trace.
 {code}
 java.lang.AssertionError: expected:<0> but was:<-1>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:555)
   at org.junit.Assert.assertEquals(Assert.java:542)
   at 
 org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster.testFencer(TestDFSHAAdminMiniCluster.java:163)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7308) DFSClient write packet size may > 64kB

2015-02-19 Thread Takuya Fukudome (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328571#comment-14328571
 ] 

Takuya Fukudome commented on HDFS-7308:
---

Hi Nicholas. I want to work on this ticket. Can you assign it to me? Thank you.

 DFSClient write packet size may > 64kB
 --

 Key: HDFS-7308
 URL: https://issues.apache.org/jira/browse/HDFS-7308
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: hdfs-client
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor

 In DFSOutputStream.computePacketChunkSize(..),
 {code}
   private void computePacketChunkSize(int psize, int csize) {
 final int chunkSize = csize + getChecksumSize();
 chunksPerPacket = Math.max(psize/chunkSize, 1);
 packetSize = chunkSize*chunksPerPacket;
 if (DFSClient.LOG.isDebugEnabled()) {
   ...
 }
   }
 {code}
 We have the following
 || variables || usual values ||
 | psize | dfsClient.getConf().writePacketSize = 64kB |
 | csize | bytesPerChecksum = 512B |
 | getChecksumSize(), i.e. CRC size | 32B |
 | chunkSize = csize + getChecksumSize() | 544B (not a power of two) |
 | psize/chunkSize | 120.47 |
 | chunksPerPacket = max(psize/chunkSize, 1) | 120 |
 | packetSize = chunkSize*chunksPerPacket (not including header) | 65280B |
 | PacketHeader.PKT_MAX_HEADER_LEN | 33B |
 | actual packet size | 65280 + 33 = *65313* < 65536 = 64k |
 It is fortunate that the usual packet size = 65313 < 64k although the 
 calculation above does not guarantee it always happens (e.g. if 
 PKT_MAX_HEADER_LEN=257, then actual packet size=65537 > 64k.)  We should fix 
 the computation in order to guarantee actual packet size < 64k.
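The arithmetic in the table above can be checked with a small sketch. The names are illustrative (this is not the actual DFSOutputStream code), and the guarded variant is one possible fix under the stated assumptions: reserving the maximum header length before dividing makes body + header <= psize by construction.

```java
public class PacketSizeSketch {
    public static final int PKT_MAX_HEADER_LEN = 33; // from the table above

    // Mirrors computePacketChunkSize(..): body size only, header excluded.
    public static int bodySize(int psize, int csize, int checksumSize) {
        int chunkSize = csize + checksumSize;              // 512 + 32 = 544
        int chunksPerPacket = Math.max(psize / chunkSize, 1);
        return chunkSize * chunksPerPacket;
    }

    // Guarded variant: subtract the maximum header length first, so the
    // actual on-wire packet can never exceed the psize budget.
    public static int bodySizeGuarded(int psize, int csize, int checksumSize) {
        int chunkSize = csize + checksumSize;
        int chunksPerPacket = Math.max((psize - PKT_MAX_HEADER_LEN) / chunkSize, 1);
        return chunkSize * chunksPerPacket;
    }

    public static void main(String[] args) {
        int body = bodySize(64 * 1024, 512, 32);
        System.out.println(body);                          // 65280
        System.out.println(body + PKT_MAX_HEADER_LEN);     // 65313: under 65536 only by luck
        int guarded = bodySizeGuarded(64 * 1024, 512, 32);
        System.out.println(guarded + PKT_MAX_HEADER_LEN <= 64 * 1024); // true by construction
    }
}
```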



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-7806:

Target Version/s: 2.7.0

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Attachments: HDFS-7806.00.patch, HDFS-7806.01.patch, 
 HDFS-7806.02.patch


 To report per storage type quota and usage information from "hadoop fs -count 
 -q" or "hdfs dfs -count -q", we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
 FileSystem#getContentSummary() are in org.apache.hadoop.fs package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-7806:

Status: Patch Available  (was: Open)

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7806.00.patch, HDFS-7806.01.patch, 
 HDFS-7806.02.patch


 To report per storage type quota and usage information from "hadoop fs -count 
 -q" or "hdfs dfs -count -q", we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
 FileSystem#getContentSummary() are in org.apache.hadoop.fs package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-7806:

Fix Version/s: (was: 2.7.0)

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Attachments: HDFS-7806.00.patch, HDFS-7806.01.patch, 
 HDFS-7806.02.patch


 To report per storage type quota and usage information from "hadoop fs -count 
 -q" or "hdfs dfs -count -q", we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
 FileSystem#getContentSummary() are in org.apache.hadoop.fs package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDFS-7806:

Status: Open  (was: Patch Available)

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7806.00.patch, HDFS-7806.01.patch, 
 HDFS-7806.02.patch


 To report per storage type quota and usage information from "hadoop fs -count 
 -q" or "hdfs dfs -count -q", we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
 FileSystem#getContentSummary() are in org.apache.hadoop.fs package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7770) Need document for storage type label of data node storage locations under dfs.data.dir

2015-02-19 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7770:
-
Attachment: HDFS-7770.00.patch

Attaching a trunk patch that documents the storage type tag in 
dfs.datanode.data.dir for storage policies. I will add a branch-2 patch once 
the trunk patch is reviewed. 

 Need document for storage type label of data node storage locations under 
 dfs.data.dir
 --

 Key: HDFS-7770
 URL: https://issues.apache.org/jira/browse/HDFS-7770
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7770.00.patch


 HDFS-2832 enables support for heterogeneous storages in HDFS, which models a DN 
 as a collection of storages with different types. However, I can't find 
 documentation on how to label the different storage types in the following two 
 documents; I found the information in the design spec. It would be good to 
 document this so admins and users can use the related archival storage and 
 storage policy features. 
 http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html
 http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
 This JIRA is opened to add document for the new storage type labels. 
 1. Add an example under ArchivalStorage.html#Configuration section:
 {code}
   <property>
     <name>dfs.data.dir</name>
     <value>[DISK]file:///hddata/dn/disk0,[SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0</value>
   </property>
 {code}
 2. Add a short description of the [DISK/SSD/ARCHIVE/RAM_DISK] options in 
 hdfs-default.xml#dfs.data.dir and document DISK as the default storage type 
 when no storage type is labeled in the data node storage location configuration. 
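The labeling scheme in the two items above can be illustrated with a minimal sketch. This is hypothetical code (not the actual HDFS StorageLocation parser): it splits a bracketed storage type label such as "[SSD]file:///hddata/dn/ssd0" off a configured location and defaults to DISK when no label is present, as item 2 proposes to document.

```java
import java.util.Locale;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StorageLocationSketch {
    public enum StorageType { DISK, SSD, ARCHIVE, RAM_DISK }

    // Matches "[TYPE]rest-of-location".
    private static final Pattern LABELED = Pattern.compile("^\\[(\\w+)\\](.+)$");

    // Returns the labeled storage type, or DISK when the location is unlabeled.
    public static StorageType typeOf(String location) {
        Matcher m = LABELED.matcher(location.trim());
        return m.matches()
                ? StorageType.valueOf(m.group(1).toUpperCase(Locale.ROOT))
                : StorageType.DISK;
    }

    // Returns the location with any storage type label stripped.
    public static String uriOf(String location) {
        Matcher m = LABELED.matcher(location.trim());
        return m.matches() ? m.group(2) : location.trim();
    }

    public static void main(String[] args) {
        System.out.println(typeOf("[SSD]file:///hddata/dn/ssd0"));       // SSD
        System.out.println(typeOf("file:///hddata/dn/disk0"));           // DISK
        System.out.println(uriOf("[ARCHIVE]file:///hddata/dn/archive0")); // file:///hddata/dn/archive0
    }
}
```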



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7770) Need document for storage type label of data node storage locations under dfs.data.dir

2015-02-19 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7770:
-
Status: Patch Available  (was: Open)

 Need document for storage type label of data node storage locations under 
 dfs.data.dir
 --

 Key: HDFS-7770
 URL: https://issues.apache.org/jira/browse/HDFS-7770
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
 Attachments: HDFS-7770.00.patch


 HDFS-2832 enables support for heterogeneous storages in HDFS, which models a DN 
 as a collection of storages with different types. However, I can't find 
 documentation on how to label the different storage types in the following two 
 documents; I found the information in the design spec. It would be good to 
 document this so admins and users can use the related archival storage and 
 storage policy features. 
 http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html
 http://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
 This JIRA is opened to add document for the new storage type labels. 
 1. Add an example under ArchivalStorage.html#Configuration section:
 {code}
   <property>
     <name>dfs.data.dir</name>
     <value>[DISK]file:///hddata/dn/disk0,[SSD]file:///hddata/dn/ssd0,[ARCHIVE]file:///hddata/dn/archive0</value>
   </property>
 {code}
 2. Add a short description of the [DISK/SSD/ARCHIVE/RAM_DISK] options in 
 hdfs-default.xml#dfs.data.dir and document DISK as the default storage type 
 when no storage type is labeled in the data node storage location configuration. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7439) Add BlockOpResponseProto's message to DFSClient's exception message

2015-02-19 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328654#comment-14328654
 ] 

Takanobu Asanuma commented on HDFS-7439:


I'd like to try to do this issue. Please assign it to me, thank you.

 Add BlockOpResponseProto's message to DFSClient's exception message
 ---

 Key: HDFS-7439
 URL: https://issues.apache.org/jira/browse/HDFS-7439
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Ming Ma
Priority: Minor

 When (BlockOpResponseProto#getStatus() != SUCCESS), it helps with debugging 
 if DFSClient can add BlockOpResponseProto's message to the exception message 
 applications will get. For example, instead of
 {noformat}
 throw new IOException(Got error for OP_READ_BLOCK, self=
 + peer.getLocalAddressString() + , remote=
 + peer.getRemoteAddressString() + , for file  + file
 + , for pool  + block.getBlockPoolId() +  block  
 + block.getBlockId() + _ + block.getGenerationStamp());
 {noformat}
 It could be,
 {noformat}
 throw new IOException(Got error for OP_READ_BLOCK, self=
 + peer.getLocalAddressString() + , remote=
 + peer.getRemoteAddressString() + , for file  + file
 + , for pool  + block.getBlockPoolId() +  block  
 + block.getBlockId() + _ + block.getGenerationStamp()
 + , status message  + status.getMessage());
 {noformat}
 We might want to check out all the references to BlockOpResponseProto in 
 DFSClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7435) PB encoding of block reports is very inefficient

2015-02-19 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328256#comment-14328256
 ] 

Suresh Srinivas commented on HDFS-7435:
---

[~jingzhao], if you have a patch, can you please post it? In clusters with a lot 
of small files and clusters with high-density nodes, we have already seen 
issues related to large block reports. We can improve upon your patch in 
subsequent jiras. 

 PB encoding of block reports is very inefficient
 

 Key: HDFS-7435
 URL: https://issues.apache.org/jira/browse/HDFS-7435
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-7435.000.patch, HDFS-7435.001.patch, 
 HDFS-7435.002.patch, HDFS-7435.patch


 Block reports are encoded as a PB repeating long.  Repeating fields use an 
 {{ArrayList}} with default capacity of 10.  A block report containing tens or 
 hundreds of thousands of longs (3 for each replica) is extremely expensive 
 since the {{ArrayList}} must realloc many times.  Also, decoding repeating 
 fields will box the primitive longs which must then be unboxed.
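A minimal, hypothetical Java sketch of the cost described above (not HDFS code): boxed accumulation allocates one {{Long}} per value and regrows the backing array repeatedly, while a primitive {{long[]}} needs a single exact-size allocation:

```java
import java.util.ArrayList;
import java.util.List;

public class BlockReportEncoding {
    // Boxed accumulation: each add() may trigger an ArrayList regrow,
    // and every long is autoboxed into a java.lang.Long object.
    static List<Long> boxed(long[] replicas) {
        List<Long> out = new ArrayList<>();   // default capacity 10
        for (long v : replicas) {
            out.add(v);                       // autoboxing + array growth
        }
        return out;
    }

    // Primitive accumulation: one exact-size allocation, no boxing.
    static long[] primitive(long[] replicas) {
        long[] out = new long[replicas.length];
        System.arraycopy(replicas, 0, out, 0, replicas.length);
        return out;
    }

    public static void main(String[] args) {
        long[] report = new long[300_000];    // 100k replicas x 3 longs each
        for (int i = 0; i < report.length; i++) {
            report[i] = i;
        }
        List<Long> b = boxed(report);         // ~300k Long objects + regrows
        long[] p = primitive(report);         // one allocation
        System.out.println(b.size() + " " + p.length); // 300000 300000
    }
}
```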



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328308#comment-14328308
 ] 

Hadoop QA commented on HDFS-7806:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699729/HDFS-7806.01.patch
  against trunk revision d49ae72.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 34 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestLeaseRecovery2
  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9620//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9620//console

This message is automatically generated.

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7806.00.patch, HDFS-7806.01.patch


 To report per-storage-type quota and usage information from hadoop fs -count 
 -q or hdfs dfs -count -q, we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common (org.apache.hadoop.fs), because ContentSummary and 
 FileSystem#getContentSummary() are in the org.apache.hadoop.fs package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7656) Expose truncate API for HDFS httpfs

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327446#comment-14327446
 ] 

Hudson commented on HDFS-7656:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2041 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2041/])
HDFS-7656. Expose truncate API for HDFS httpfs. (yliu) (yliu: rev 
2fd02afeca3710f487b6a039a65c1a666322b229)
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java


 Expose truncate API for HDFS httpfs
 ---

 Key: HDFS-7656
 URL: https://issues.apache.org/jira/browse/HDFS-7656
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7656.001.patch


 This JIRA is to expose truncate API for Web HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327455#comment-14327455
 ] 

Hudson commented on HDFS-7804:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2041 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2041/])
HDFS-7804. correct the haadmin command usage in 
#HDFSHighAvailabilityWithQJM.html (Brahma Reddy Battula via umamahesh) 
(umamahesh: rev 2ecea5ab741f62e8fd0449251f2ea4a5759f4e77)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804-002.patch, HDFS-7804-003.patch, 
 HDFS-7804-branch-2-002.patch, HDFS-7804.patch


  *Currently it's given as the following:* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs haadmin{color}* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7772) Document hdfs balancer -exclude/-include option in HDFSCommands.html

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327449#comment-14327449
 ] 

Hudson commented on HDFS-7772:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2041 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2041/])
HDFS-7772. Document hdfs balancer -exclude/-include option in 
HDFSCommands.html. Contributed by Xiaoyu Yao. (cnauroth: rev 
2aa9979a713ab79853885264ad7739c48226aaa4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Document hdfs balancer -exclude/-include option in HDFSCommands.html
 

 Key: HDFS-7772
 URL: https://issues.apache.org/jira/browse/HDFS-7772
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7772.0.patch, HDFS-7772.1.patch, 
 HDFS-7772.1.screen.png, HDFS-7772.2.patch, HDFS-7772.2.screen.png, 
 HDFS-7772.3.patch, HDFS-7772.branch2.0.patch


 The hdfs balancer -exclude/-include options are displayed in the command-line 
 help but not on the HTML documentation page. This JIRA is opened to add them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7808) Remove obsolete -ns options in in DFSHAAdmin.java

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327451#comment-14327451
 ] 

Hudson commented on HDFS-7808:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #2041 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2041/])
HDFS-7808. Remove obsolete -ns options in in DFSHAAdmin.java. Contributed by 
Arshad Mohammad. (wheat9: rev 9a3e29208740da94d0cca5bb1c8163bea60d1387)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java


 Remove obsolete -ns options in in DFSHAAdmin.java
 -

 Key: HDFS-7808
 URL: https://issues.apache.org/jira/browse/HDFS-7808
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7808-1.patch


 After the HDFS-7324 fix, the following piece of code became unused. It should 
 be removed.
 {code}
 int i = 0;
 String cmd = argv[i++];
 if ("-ns".equals(cmd)) {
   if (i == argv.length) {
     errOut.println("Missing nameservice ID");
     printUsage(errOut);
     return -1;
   }
   nameserviceId = argv[i++];
   if (i >= argv.length) {
     errOut.println("Missing command");
     printUsage(errOut);
     return -1;
   }
   argv = Arrays.copyOfRange(argv, i, argv.length);
 }
 {code}
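The parsing idiom in the snippet above, consuming a leading option and copying off the remaining arguments, can be sketched standalone; the class name and argument values here are illustrative, not taken from DFSHAAdmin:

```java
import java.util.Arrays;

public class ArgShift {
    // Consume a leading "-ns <id>" pair, if present, and return the
    // remaining arguments; mirrors the idiom in the removed code.
    static String[] stripNameservice(String[] argv) {
        int i = 0;
        if (argv.length > 0 && "-ns".equals(argv[i])) {
            i++;                                   // skip "-ns" itself
            if (i >= argv.length) {
                throw new IllegalArgumentException("Missing nameservice ID");
            }
            String nameserviceId = argv[i++];      // consume the ID
            System.out.println("nameserviceId=" + nameserviceId);
            // Shift off the consumed prefix, as Arrays.copyOfRange does above.
            return Arrays.copyOfRange(argv, i, argv.length);
        }
        return argv;
    }

    public static void main(String[] args) {
        String[] rest = stripNameservice(
            new String[] {"-ns", "ns1", "-getServiceState", "nn1"});
        System.out.println(Arrays.toString(rest)); // [-getServiceState, nn1]
    }
}
```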



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7772) Document hdfs balancer -exclude/-include option in HDFSCommands.html

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327466#comment-14327466
 ] 

Hudson commented on HDFS-7772:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #100 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/100/])
HDFS-7772. Document hdfs balancer -exclude/-include option in 
HDFSCommands.html. Contributed by Xiaoyu Yao. (cnauroth: rev 
2aa9979a713ab79853885264ad7739c48226aaa4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md


 Document hdfs balancer -exclude/-include option in HDFSCommands.html
 

 Key: HDFS-7772
 URL: https://issues.apache.org/jira/browse/HDFS-7772
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7772.0.patch, HDFS-7772.1.patch, 
 HDFS-7772.1.screen.png, HDFS-7772.2.patch, HDFS-7772.2.screen.png, 
 HDFS-7772.3.patch, HDFS-7772.branch2.0.patch


 The hdfs balancer -exclude/-include options are displayed in the command-line 
 help but not on the HTML documentation page. This JIRA is opened to add them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6994) libhdfs3 - A native C/C++ HDFS client

2015-02-19 Thread Zhanwei Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327459#comment-14327459
 ] 

Zhanwei Wang commented on HDFS-6994:


libhdfs3 requires explicitly scoped enums, and according to 
https://gcc.gnu.org/projects/cxx0x.html, they have been supported since GCC 4.4. 
So the minimum required GCC version is 4.4.

Removing exceptions needs a lot of work. We would have to design a new error 
handling mechanism to indicate errors, pass detailed and nested error messages, 
provide retry and failover mechanisms, and propagate errors between threads. It 
would change most of the code.

I do not think this is a high-priority task. I suggest finishing the current 
work first to make libhdfs3 available to users, and then refactoring the code 
incrementally.

 libhdfs3 - A native C/C++ HDFS client
 -

 Key: HDFS-6994
 URL: https://issues.apache.org/jira/browse/HDFS-6994
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client
Reporter: Zhanwei Wang
Assignee: Zhanwei Wang
 Attachments: HDFS-6994-rpc-8.patch, HDFS-6994.patch


 Hi All
 I just got the permission to open source libhdfs3, which is a native C/C++ 
 HDFS client based on the Hadoop RPC protocol and the HDFS Data Transfer 
 Protocol.
 libhdfs3 provides the libhdfs-style C interface and a C++ interface. It 
 supports both Hadoop RPC versions 8 and 9, Namenode HA, and Kerberos 
 authentication.
 libhdfs3 is currently used by HAWQ of Pivotal.
 I'd like to integrate libhdfs3 into the HDFS source code to benefit others.
 You can find the libhdfs3 code on github:
 https://github.com/PivotalRD/libhdfs3
 http://pivotalrd.github.io/libhdfs3/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7808) Remove obsolete -ns options in in DFSHAAdmin.java

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327468#comment-14327468
 ] 

Hudson commented on HDFS-7808:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #100 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/100/])
HDFS-7808. Remove obsolete -ns options in in DFSHAAdmin.java. Contributed by 
Arshad Mohammad. (wheat9: rev 9a3e29208740da94d0cca5bb1c8163bea60d1387)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Remove obsolete -ns options in in DFSHAAdmin.java
 -

 Key: HDFS-7808
 URL: https://issues.apache.org/jira/browse/HDFS-7808
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7808-1.patch


 After the HDFS-7324 fix, the following piece of code became unused. It should 
 be removed.
 {code}
 int i = 0;
 String cmd = argv[i++];
 if ("-ns".equals(cmd)) {
   if (i == argv.length) {
     errOut.println("Missing nameservice ID");
     printUsage(errOut);
     return -1;
   }
   nameserviceId = argv[i++];
   if (i >= argv.length) {
     errOut.println("Missing command");
     printUsage(errOut);
     return -1;
   }
   argv = Arrays.copyOfRange(argv, i, argv.length);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327472#comment-14327472
 ] 

Hudson commented on HDFS-7804:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #100 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/100/])
HDFS-7804. correct the haadmin command usage in 
#HDFSHighAvailabilityWithQJM.html (Brahma Reddy Battula via umamahesh) 
(umamahesh: rev 2ecea5ab741f62e8fd0449251f2ea4a5759f4e77)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804-002.patch, HDFS-7804-003.patch, 
 HDFS-7804-branch-2-002.patch, HDFS-7804.patch


  *Currently it's given as the following:* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs haadmin{color}* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7656) Expose truncate API for HDFS httpfs

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327463#comment-14327463
 ] 

Hudson commented on HDFS-7656:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #100 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/100/])
HDFS-7656. Expose truncate API for HDFS httpfs. (yliu) (yliu: rev 
2fd02afeca3710f487b6a039a65c1a666322b229)
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java


 Expose truncate API for HDFS httpfs
 ---

 Key: HDFS-7656
 URL: https://issues.apache.org/jira/browse/HDFS-7656
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7656.001.patch


 This JIRA is to expose truncate API for Web HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7772) Document hdfs balancer -exclude/-include option in HDFSCommands.html

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327484#comment-14327484
 ] 

Hudson commented on HDFS-7772:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #110 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/110/])
HDFS-7772. Document hdfs balancer -exclude/-include option in 
HDFSCommands.html. Contributed by Xiaoyu Yao. (cnauroth: rev 
2aa9979a713ab79853885264ad7739c48226aaa4)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/Balancer.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSCommands.md


 Document hdfs balancer -exclude/-include option in HDFSCommands.html
 

 Key: HDFS-7772
 URL: https://issues.apache.org/jira/browse/HDFS-7772
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: documentation
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-7772.0.patch, HDFS-7772.1.patch, 
 HDFS-7772.1.screen.png, HDFS-7772.2.patch, HDFS-7772.2.screen.png, 
 HDFS-7772.3.patch, HDFS-7772.branch2.0.patch


 The hdfs balancer -exclude/-include options are displayed in the command-line 
 help but not on the HTML documentation page. This JIRA is opened to add them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7804) haadmin command usage #HDFSHighAvailabilityWithQJM.html

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327490#comment-14327490
 ] 

Hudson commented on HDFS-7804:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #110 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/110/])
HDFS-7804. correct the haadmin command usage in 
#HDFSHighAvailabilityWithQJM.html (Brahma Reddy Battula via umamahesh) 
(umamahesh: rev 2ecea5ab741f62e8fd0449251f2ea4a5759f4e77)
* 
hadoop-hdfs-project/hadoop-hdfs/src/site/markdown/HDFSHighAvailabilityWithQJM.md
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 haadmin command usage #HDFSHighAvailabilityWithQJM.html
 ---

 Key: HDFS-7804
 URL: https://issues.apache.org/jira/browse/HDFS-7804
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7804-002.patch, HDFS-7804-003.patch, 
 HDFS-7804-branch-2-002.patch, HDFS-7804.patch


  *Currently it's given as the following:* 
  *{color:red}Usage: DFSHAAdmin [-ns nameserviceId]{color}* 
 [-transitionToActive serviceId]
 [-transitionToStandby serviceId]
 [-failover [--forcefence] [--forceactive] serviceId serviceId]
 [-getServiceState serviceId]
 [-checkHealth serviceId]
 [-help command]
  *Expected:* 
  *{color:green}hdfs haadmin{color}* 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7656) Expose truncate API for HDFS httpfs

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327481#comment-14327481
 ] 

Hudson commented on HDFS-7656:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #110 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/110/])
HDFS-7656. Expose truncate API for HDFS httpfs. (yliu) (yliu: rev 
2fd02afeca3710f487b6a039a65c1a666322b229)
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSParametersProvider.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/HttpFSServer.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/client/HttpFSFileSystem.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/client/BaseTestHttpFSWith.java


 Expose truncate API for HDFS httpfs
 ---

 Key: HDFS-7656
 URL: https://issues.apache.org/jira/browse/HDFS-7656
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Yi Liu
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7656.001.patch


 This JIRA is to expose truncate API for Web HDFS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7808) Remove obsolete -ns options in in DFSHAAdmin.java

2015-02-19 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14327486#comment-14327486
 ] 

Hudson commented on HDFS-7808:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #110 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/110/])
HDFS-7808. Remove obsolete -ns options in in DFSHAAdmin.java. Contributed by 
Arshad Mohammad. (wheat9: rev 9a3e29208740da94d0cca5bb1c8163bea60d1387)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdmin.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/DFSHAAdmin.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/tools/TestDFSHAAdminMiniCluster.java


 Remove obsolete -ns options in in DFSHAAdmin.java
 -

 Key: HDFS-7808
 URL: https://issues.apache.org/jira/browse/HDFS-7808
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Arshad Mohammad
Assignee: Arshad Mohammad
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7808-1.patch


 After the HDFS-7324 fix, the following piece of code became unused. It should 
 be removed.
 {code}
 int i = 0;
 String cmd = argv[i++];
 if ("-ns".equals(cmd)) {
   if (i == argv.length) {
     errOut.println("Missing nameservice ID");
     printUsage(errOut);
     return -1;
   }
   nameserviceId = argv[i++];
   if (i >= argv.length) {
     errOut.println("Missing command");
     printUsage(errOut);
     return -1;
   }
   argv = Arrays.copyOfRange(argv, i, argv.length);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7435) PB encoding of block reports is very inefficient

2015-02-19 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328269#comment-14328269
 ] 

Jing Zhao commented on HDFS-7435:
-

Yeah, the 002 patch is the new rebased patch.

 PB encoding of block reports is very inefficient
 

 Key: HDFS-7435
 URL: https://issues.apache.org/jira/browse/HDFS-7435
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-7435.000.patch, HDFS-7435.001.patch, 
 HDFS-7435.002.patch, HDFS-7435.patch


 Block reports are encoded as a PB repeating long.  Repeating fields use an 
 {{ArrayList}} with a default capacity of 10.  A block report containing tens or 
 hundreds of thousands of longs (3 for each replica) is extremely expensive, 
 since the {{ArrayList}} must realloc many times.  Also, decoding repeating 
 fields boxes the primitive longs, which must then be unboxed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6994) libhdfs3 - A native C/C++ HDFS client

2015-02-19 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328351#comment-14328351
 ] 

Haohui Mai commented on HDFS-6994:
--

bq. That is interesting. Why do you feel libhdfs3 and the Java client cannot 
support thousands of files concurrently? What are you doing differently that 
you believe will be better for this application?

I should have worded it more precisely. It is not about can or can't, but about 
whether it can be done efficiently. The current APIs of the Java client are 
thread-based, synchronous APIs. They are simpler to program with, but to reduce 
latency they require creating one thread per stream. In resource-constrained 
environments (e.g., applications running inside a YARN container) this becomes 
an important concern, as accessing thousands of files concurrently requires 
thousands of threads.

libhdfs / libhdfs3 suffer from the same problem, as their APIs closely follow 
the APIs of the Java client we have today.

Fundamentally this is an issue tied to the synchronous APIs, not to a specific 
implementation. Alternatively, event-based, asynchronous APIs are harder to 
program with, but they can be implemented with a bounded amount of resources. 
Applications that need to access thousands of files concurrently in 
resource-constrained environments can benefit from this.
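The resource argument above can be illustrated with a minimal sketch (the class and numbers are hypothetical, not Hadoop APIs): with synchronous, thread-per-stream APIs, N concurrent reads pin N threads, while an asynchronous style multiplexes the same N in-flight reads over a small, fixed pool:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncSketch {
    // Simulate n concurrent "reads" multiplexed over a bounded pool.
    // With synchronous, thread-per-stream APIs the same n reads would
    // each need a dedicated thread for the whole duration of the read.
    static int runReads(int n, int poolSize) {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        List<CompletableFuture<Integer>> inFlight = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            final int streamId = i;
            // Each "read" completes asynchronously on one of poolSize threads.
            inFlight.add(CompletableFuture.supplyAsync(() -> streamId, pool));
        }
        CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
        pool.shutdown();
        return inFlight.size();
    }

    public static void main(String[] args) {
        // 10,000 concurrent reads, only 8 threads: bounded resources.
        System.out.println(runReads(10_000, 8)); // 10000
    }
}
```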

 libhdfs3 - A native C/C++ HDFS client
 -

 Key: HDFS-6994
 URL: https://issues.apache.org/jira/browse/HDFS-6994
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client
Reporter: Zhanwei Wang
Assignee: Zhanwei Wang
 Attachments: HDFS-6994-rpc-8.patch, HDFS-6994.patch


 Hi All
 I just got the permission to open source libhdfs3, which is a native C/C++ 
 HDFS client based on Hadoop RPC protocol and HDFS Data Transfer Protocol.
 libhdfs3 provide the libhdfs style C interface and a C++ interface. Support 
 both HADOOP RPC version 8 and 9. Support Namenode HA and Kerberos 
 authentication.
 libhdfs3 is currently used by HAWQ of Pivotal
 I'd like to integrate libhdfs3 into HDFS source code to benefit others.
 You can find libhdfs3 code from github
 https://github.com/PivotalRD/libhdfs3
 http://pivotalrd.github.io/libhdfs3/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6994) libhdfs3 - A native C/C++ HDFS client

2015-02-19 Thread Demai Ni (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328368#comment-14328368
 ] 

Demai Ni commented on HDFS-6994:


[~wheat9] and [~wangzw],

bq. Note that libhdfs3 is always available to the power users – they can check 
out the code and compile it themselves. Given that I feel that there is no need 
rush declaring libhdfs3 is available without stabilizing the APIs first.

just to chime in here from a user perspective. My team has used libhdfs3 for a 
POC since last Oct, and we are considering it for our production line. We did 
compile the code ourselves and integrated it with our code as [~wheat9] 
suggested, and tested the most common APIs intensively: connection, open/close 
file, read, seek, etc. However, to use this code in production, it is important 
for us to have the jira accepted by the hdfs community, due to general 
business/development processes and guidelines. Hence, we would really love to 
see this jira committed, and we are okay with smaller changes later to the API 
and error handling.

With that said, I am just a user and not an expert in the hdfs/hadoop area. If 
the design or code quality of the jira is not ready yet, it certainly should 
wait.

Demai 



 libhdfs3 - A native C/C++ HDFS client
 -

 Key: HDFS-6994
 URL: https://issues.apache.org/jira/browse/HDFS-6994
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client
Reporter: Zhanwei Wang
Assignee: Zhanwei Wang
 Attachments: HDFS-6994-rpc-8.patch, HDFS-6994.patch


 Hi All
 I just got the permission to open source libhdfs3, which is a native C/C++ 
 HDFS client based on the Hadoop RPC protocol and the HDFS Data Transfer 
 Protocol.
 libhdfs3 provides the libhdfs-style C interface and a C++ interface. It 
 supports both Hadoop RPC versions 8 and 9, Namenode HA, and Kerberos 
 authentication.
 libhdfs3 is currently used by HAWQ of Pivotal.
 I'd like to integrate libhdfs3 into the HDFS source code to benefit others.
 You can find the libhdfs3 code on github:
 https://github.com/PivotalRD/libhdfs3
 http://pivotalrd.github.io/libhdfs3/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7702) Move metadata across namenode - Effort to a real distributed namenode

2015-02-19 Thread Ray Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Zhang updated HDFS-7702:

Attachment: Overflow+Table+Design+–+Record+Moved+Namespace+Locations.pdf

Attached draft overflow table design.

 Move metadata across namenode - Effort to a real distributed namenode
 -

 Key: HDFS-7702
 URL: https://issues.apache.org/jira/browse/HDFS-7702
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Ray Zhang
Assignee: Ray Zhang
 Attachments: 
 DATABFDT-MetadataMovingToolDesignProposal-Efforttoarealdistributednamenode-050215-1415-202.pdf,
  Overflow+Table+Design+–+Record+Moved+Namespace+Locations.pdf


 Implement a tool that can show the in-memory namespace tree structure with 
 weight (size), and an API that can move metadata across different namenodes. 
 The purpose is to move data efficiently and quickly, without moving blocks on 
 datanodes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7806:
-
Attachment: HDFS-7806.02.patch

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7806.00.patch, HDFS-7806.01.patch, 
 HDFS-7806.02.patch


 To report per-storage-type quota and usage information from hadoop fs -count 
 -q or hdfs dfs -count -q, we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common (org.apache.hadoop.fs), because ContentSummary and 
 FileSystem#getContentSummary() are in the org.apache.hadoop.fs package.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7814) Fix usage string of storageType parameter for dfsadmin -setSpaceQuota/clrSpaceQuota

2015-02-19 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7814:
-
Attachment: HDFS-7814.00.patch

 Fix usage string of storageType parameter for dfsadmin 
 -setSpaceQuota/clrSpaceQuota
 -

 Key: HDFS-7814
 URL: https://issues.apache.org/jira/browse/HDFS-7814
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7814.00.patch


 This was found while documenting the quota by storage type feature. The 
 current usage string for setting/clearing quota by storage type, which puts 
 the -storageType parameter after the dirnames, is incorrect:
 hdfs dfsadmin -setSpaceQuota/clrSpaceQuota quota dirname...dirname 
 -storageType storagetype
 The correct form is:
 hdfs dfsadmin -setSpaceQuota/clrSpaceQuota quota [-storageType 
 storagetype] dirname...dirname
 I will post the fix shortly.





[jira] [Updated] (HDFS-7814) Fix usage string of storageType parameter for dfsadmin -setSpaceQuota/clrSpaceQuota

2015-02-19 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDFS-7814:
-
Status: Patch Available  (was: Open)

 Fix usage string of storageType parameter for dfsadmin 
 -setSpaceQuota/clrSpaceQuota
 -

 Key: HDFS-7814
 URL: https://issues.apache.org/jira/browse/HDFS-7814
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7814.00.patch


 This was found while documenting the quota by storage type feature. The 
 current usage string for setting/clearing quota by storage type, which puts 
 the -storageType parameter after the dirnames, is incorrect:
 hdfs dfsadmin -setSpaceQuota/clrSpaceQuota quota dirname...dirname 
 -storageType storagetype
 The correct form is:
 hdfs dfsadmin -setSpaceQuota/clrSpaceQuota quota [-storageType 
 storagetype] dirname...dirname
 I will post the fix shortly.





[jira] [Commented] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328452#comment-14328452
 ] 

Hadoop QA commented on HDFS-7806:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699785/HDFS-7806.02.patch
  against trunk revision c0d9b93.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 34 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs 

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9623//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9623//console

This message is automatically generated.

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7806.00.patch, HDFS-7806.01.patch, 
 HDFS-7806.02.patch


 To report per storage type quota and usage information from hadoop fs -count 
 -q or hdfs dfs -count -q, we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
 FileSystem#getContentSummary() are in org.apache.hadoop.fs package.





[jira] [Commented] (HDFS-7740) Test truncate with DataNodes restarting

2015-02-19 Thread Konstantin Shvachko (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328470#comment-14328470
 ] 

Konstantin Shvachko commented on HDFS-7740:
---

Hey Yi. The test scenarios sound great.
# I don't think your new tests will work alongside the ones existing in 
TestFileTruncate, because you are restarting the whole cluster.
I would propose putting your new cases into a new file, something like 
TestTruncateDataNodeRestarting.
# It would be good to start the cluster once in @BeforeClass, and make sure all 
DNs are up after each test case. I think this is possible since you do not need 
to reformat the cluster after each test. If you do need to reformat, then we 
should use try-catch to start / stop clusters.
# With DNs restarting, one way to reduce test running time is to shorten 
heartbeats, as we did in TestFileTruncate.
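
A hypothetical skeleton of the proposed test class following the suggestions above; the class name, configuration keys, and MiniDFSCluster setup shown here are illustrative, not the actual patch:

```java
// Hypothetical skeleton, not the actual patch: starts the cluster once,
// shortens heartbeats, and verifies all DNs are back up after each case.
public class TestTruncateDataNodeRestarting {
  private static final Configuration conf = new HdfsConfiguration();
  private static MiniDFSCluster cluster;

  @BeforeClass
  public static void startUp() throws IOException {
    // Shorten heartbeats to speed up tests that restart DataNodes,
    // as done in TestFileTruncate.
    conf.setInt(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1);
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
    cluster.waitActive();
  }

  @After
  public void ensureDataNodesUp() throws IOException {
    // Individual cases restart DNs; wait until all are live again
    // before the next case runs.
    cluster.waitActive();
  }

  @AfterClass
  public static void tearDown() {
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}
```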

 Test truncate with DataNodes restarting
 ---

 Key: HDFS-7740
 URL: https://issues.apache.org/jira/browse/HDFS-7740
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Affects Versions: 2.7.0
Reporter: Konstantin Shvachko
Assignee: Yi Liu
 Fix For: 2.7.0

 Attachments: HDFS-7740.001.patch


 Add a test case, which ensures replica consistency when DNs are failing and 
 restarting.





[jira] [Commented] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328282#comment-14328282
 ] 

Chris Nauroth commented on HDFS-7806:
-

I think we can safely delete {{org.apache.hadoop.hdfs.StorageType}}.  This type 
never would have been exposed in public APIs in any releases of Apache Hadoop.  
It's annotated {{Unstable}}.  The only possible usage dependency would have 
been indirectly through the string values configured in end users' storage 
policies.  We'll need to maintain that functionality, but it's not really a 
dependency at the code level.

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7806.00.patch, HDFS-7806.01.patch


 To report per storage type quota and usage information from hadoop fs -count 
 -q or hdfs dfs -count -q, we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
 FileSystem#getContentSummary() are in org.apache.hadoop.fs package.





[jira] [Commented] (HDFS-7435) PB encoding of block reports is very inefficient

2015-02-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328443#comment-14328443
 ] 

Hadoop QA commented on HDFS-7435:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12699754/HDFS-7435.002.patch
  against trunk revision d49ae72.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 11 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestMiniDFSCluster
  org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
  org.apache.hadoop.hdfs.TestBalancerBandwidth
  
org.apache.hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/9622//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/9622//console

This message is automatically generated.

 PB encoding of block reports is very inefficient
 

 Key: HDFS-7435
 URL: https://issues.apache.org/jira/browse/HDFS-7435
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode, namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HDFS-7435.000.patch, HDFS-7435.001.patch, 
 HDFS-7435.002.patch, HDFS-7435.patch


 Block reports are encoded as a PB repeating long.  Repeating fields use an 
 {{ArrayList}} with default capacity of 10.  A block report containing tens or 
 hundreds of thousand of longs (3 for each replica) is extremely expensive 
 since the {{ArrayList}} must realloc many times.  Also, decoding repeating 
 fields will box the primitive longs which must then be unboxed.
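
To illustrate the overhead the description points at, here is a hypothetical sketch (names are illustrative, not the patch's code) contrasting a repeated-field style decode, which boxes each long into a growing ArrayList, with a presized primitive array:

```java
import java.util.ArrayList;
import java.util.List;

class BlockReportDecodeSketch {
    // Repeated-field style: each replica contributes 3 longs, every add()
    // boxes a primitive into a Long, and the backing array (default
    // capacity 10) is reallocated repeatedly as the list grows.
    static List<Long> decodeAsRepeatedField(int replicas) {
        List<Long> longs = new ArrayList<>();
        for (long i = 0; i < replicas * 3L; i++) {
            longs.add(i); // autoboxing: long -> Long
        }
        return longs;
    }

    // Presized primitive array: a single allocation and no boxing.
    static long[] decodeAsPrimitiveArray(int replicas) {
        long[] longs = new long[replicas * 3];
        for (int i = 0; i < longs.length; i++) {
            longs[i] = i;
        }
        return longs;
    }

    public static void main(String[] args) {
        System.out.println(decodeAsRepeatedField(100).size());   // 300
        System.out.println(decodeAsPrimitiveArray(100).length);  // 300
    }
}
```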





[jira] [Commented] (HDFS-7806) Refactor: move StorageType.java from hadoop-hdfs to hadoop-common

2015-02-19 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328310#comment-14328310
 ] 

Xiaoyu Yao commented on HDFS-7806:
--

Thanks [~cnauroth] for the review. I agree with your point regarding removing 
org.apache.hadoop.hdfs.StorageType and will update the patch.

 Refactor: move StorageType.java from hadoop-hdfs to hadoop-common
 -

 Key: HDFS-7806
 URL: https://issues.apache.org/jira/browse/HDFS-7806
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.7.0

 Attachments: HDFS-7806.00.patch, HDFS-7806.01.patch


 To report per storage type quota and usage information from hadoop fs -count 
 -q or hdfs dfs -count -q, we need to migrate the StorageType definition 
 from hadoop-hdfs (org.apache.hadoop.hdfs) to 
 hadoop-common(org.apache.hadoop.fs) because the ContentSummary and 
 FileSystem#getContentSummary() are in org.apache.hadoop.fs package.





[jira] [Commented] (HDFS-6994) libhdfs3 - A native C/C++ HDFS client

2015-02-19 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14328362#comment-14328362
 ] 

Haohui Mai commented on HDFS-6994:
--

bq. I do not think it is the high priority task. I suggest to finish the 
current work first and make libhdfs3 available for the users, and then refactor 
the code incrementally.

I'm concerned about this. What are the guarantees of the APIs across releases? 
Are the APIs / ABIs going to be compatible once we remove exceptions in later 
versions? Can users simply do a drop-in replacement to upgrade? For the libhdfs 
binding the answer might be yes, but my general impression is no, due to the 
complexity of SEH on Windows and various quirks in the implementation of C++ 
exceptions.

Note that libhdfs3 is always available to power users -- they can check out 
the code and compile it themselves. Given that, I feel there is no need to 
rush declaring libhdfs3 available without stabilizing the APIs first.

 libhdfs3 - A native C/C++ HDFS client
 -

 Key: HDFS-6994
 URL: https://issues.apache.org/jira/browse/HDFS-6994
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: hdfs-client
Reporter: Zhanwei Wang
Assignee: Zhanwei Wang
 Attachments: HDFS-6994-rpc-8.patch, HDFS-6994.patch


 Hi All
 I just got the permission to open-source libhdfs3, which is a native C/C++ 
 HDFS client based on the Hadoop RPC protocol and the HDFS Data Transfer 
 Protocol.
 libhdfs3 provides the libhdfs-style C interface and a C++ interface. It 
 supports both Hadoop RPC versions 8 and 9, Namenode HA, and Kerberos 
 authentication.
 libhdfs3 is currently used by Pivotal's HAWQ.
 I'd like to integrate libhdfs3 into the HDFS source code to benefit others.
 You can find the libhdfs3 code on GitHub:
 https://github.com/PivotalRD/libhdfs3
 http://pivotalrd.github.io/libhdfs3/





[jira] [Created] (HDFS-7814) Fix usage string of storageType parameter for dfsadmin -setSpaceQuota/clrSpaceQuota

2015-02-19 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-7814:


 Summary: Fix usage string of storageType parameter for dfsadmin 
-setSpaceQuota/clrSpaceQuota
 Key: HDFS-7814
 URL: https://issues.apache.org/jira/browse/HDFS-7814
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao
Priority: Minor


This was found while documenting the quota by storage type feature. The 
current usage string for setting/clearing quota by storage type, which puts 
the -storageType parameter after the dirnames, is incorrect:
hdfs dfsadmin -setSpaceQuota/clrSpaceQuota quota dirname...dirname 
-storageType storagetype

The correct form is:
hdfs dfsadmin -setSpaceQuota/clrSpaceQuota quota [-storageType 
storagetype] dirname...dirname

I will post the fix shortly.
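
For reference, a usage example with the corrected argument order; the quota value and path here are illustrative, and the commands assume a running cluster:

```shell
# Set a 10 GB SSD quota on a directory, then clear it.
# Note that -storageType comes before the directory names.
hdfs dfsadmin -setSpaceQuota 10g -storageType SSD /user/alice/data
hdfs dfsadmin -clrSpaceQuota -storageType SSD /user/alice/data
```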


