[jira] [Updated] (HDFS-6874) Add GET_BLOCK_LOCATIONS operation to HttpFS

2014-12-09 Thread Gao Zhong Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gao Zhong Liang updated HDFS-6874:
--
Attachment: HDFS-6874-branch-2.6.0.patch

So sorry for the late response. I put up a separate patch for branch-2.6.0.

 Add GET_BLOCK_LOCATIONS operation to HttpFS
 ---

 Key: HDFS-6874
 URL: https://issues.apache.org/jira/browse/HDFS-6874
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Gao Zhong Liang
Assignee: Gao Zhong Liang
 Attachments: HDFS-6874-branch-2.6.0.patch, HDFS-6874.patch


 The GET_BLOCK_LOCATIONS operation, which WebHDFS already supports, is missing 
 in HttpFS.  For a GETFILEBLOCKLOCATIONS request, 
 org.apache.hadoop.fs.http.server.HttpFSServer so far returns BAD_REQUEST:
 {code}
 ...
 case GETFILEBLOCKLOCATIONS: {
   response = Response.status(Response.Status.BAD_REQUEST).build();
   break;
 }
 {code}
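 For contrast, a rough sketch of how this case could delegate to the 
 underlying file system, modeled on the neighboring HttpFSServer operations; 
 the helper FSOperations.FSFileBlockLocations and the parameter plumbing are 
 hypothetical, not taken from the attached patch:
 {code}
 case GETFILEBLOCKLOCATIONS: {
   long offset = params.get(OffsetParam.NAME, OffsetParam.class);
   long len = params.get(LenParam.NAME, LenParam.class);
   // Hypothetical command object that would call
   // FileSystem#getFileBlockLocations and serialize the result to JSON,
   // like the other FSOperations helpers do for their operations.
   FSOperations.FSFileBlockLocations command =
       new FSOperations.FSFileBlockLocations(path, offset, len);
   Map json = fsExecute(user, command);
   response = Response.ok(json).type(MediaType.APPLICATION_JSON).build();
   break;
 }
 {code}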



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6874) Add GET_BLOCK_LOCATIONS operation to HttpFS

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239133#comment-14239133
 ] 

Hadoop QA commented on HDFS-6874:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12685967/HDFS-6874-branch-2.6.0.patch
  against trunk revision db73cc9.

{color:red}-1 patch{color}.  Trunk compilation may be broken.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8968//console

This message is automatically generated.

 Add GET_BLOCK_LOCATIONS operation to HttpFS
 ---

 Key: HDFS-6874
 URL: https://issues.apache.org/jira/browse/HDFS-6874
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.1
Reporter: Gao Zhong Liang
Assignee: Gao Zhong Liang
 Attachments: HDFS-6874-branch-2.6.0.patch, HDFS-6874.patch


 The GET_BLOCK_LOCATIONS operation, which WebHDFS already supports, is missing 
 in HttpFS.  For a GETFILEBLOCKLOCATIONS request, 
 org.apache.hadoop.fs.http.server.HttpFSServer so far returns BAD_REQUEST:
 {code}
 ...
 case GETFILEBLOCKLOCATIONS: {
   response = Response.status(Response.Status.BAD_REQUEST).build();
   break;
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6308) TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky

2014-12-09 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-6308:

Target Version/s: 2.7.0

 TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky
 

 Key: HDFS-6308
 URL: https://issues.apache.org/jira/browse/HDFS-6308
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang
 Attachments: HDFS-6308.v1.patch


 Found this in the pre-commit build of HDFS-6261:
 {code}
 java.lang.AssertionError: Expected one valid and one invalid volume
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at 
 org.apache.hadoop.hdfs.TestDistributedFileSystem.testGetFileBlockStorageLocationsError(TestDistributedFileSystem.java:837)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7463) Simplify FSNamesystem#getBlockLocationsUpdateTimes

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239142#comment-14239142
 ] 

Hadoop QA commented on HDFS-7463:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685873/HDFS-7463.005.patch
  against trunk revision 0ee4161.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 287 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.namenode.TestBackupNode

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8965//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8965//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8965//console

This message is automatically generated.

 Simplify FSNamesystem#getBlockLocationsUpdateTimes
 --

 Key: HDFS-7463
 URL: https://issues.apache.org/jira/browse/HDFS-7463
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7463.000.patch, HDFS-7463.001.patch, 
 HDFS-7463.002.patch, HDFS-7463.003.patch, HDFS-7463.004.patch, 
 HDFS-7463.005.patch


 Currently {{FSNamesystem#getBlockLocationsUpdateTimes}} holds the read lock 
 to access the blocks. It releases the read lock and then acquires the write 
 lock when it needs to update the access time of the {{INode}}.
 This jira proposes to move the responsibility of the latter steps to the 
 caller to simplify the code.
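 As a self-contained illustration of the lock dance described above, using a 
 plain ReentrantReadWriteLock in place of FSNamesystem's internal lock (all 
 names here are invented for the sketch):
 {code}
 import java.util.concurrent.locks.ReentrantReadWriteLock;

 class AccessTimeSketch {
   private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
   private long accessTime;

   long readThenMaybeUpdateAtime(long now, boolean updateAtime) {
     lock.readLock().lock();
     long observed;
     try {
       observed = accessTime;      // stands in for reading block locations
     } finally {
       lock.readLock().unlock();   // the read lock is released first...
     }
     if (updateAtime) {
       lock.writeLock().lock();    // ...then the write lock is acquired
       try {
         accessTime = now;         // stands in for updating the INode's atime
       } finally {
         lock.writeLock().unlock();
       }
     }
     return observed;
   }
 }
 {code}
 Moving the write-locked update to the caller removes this release-reacquire 
 sequence from the method itself.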



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5574) Remove buffer copy in BlockReader.skip

2014-12-09 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5574:

Attachment: HDFS-5574.006.patch

Rebased the patch to trunk.

 Remove buffer copy in BlockReader.skip
 --

 Key: HDFS-5574
 URL: https://issues.apache.org/jira/browse/HDFS-5574
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial
 Attachments: HDFS-5574.006.patch, HDFS-5574.v1.patch, 
 HDFS-5574.v2.patch, HDFS-5574.v3.patch, HDFS-5574.v4.patch, HDFS-5574.v5.patch


 BlockReaderLocal.skip and RemoteBlockReader.skip use a temporary buffer to 
 read the skipped data into, which is not necessary. 
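 A minimal sketch of the idea (assumed, not taken from the patch): skip within 
 already-buffered data by moving the buffer position instead of copying bytes 
 into a scratch buffer:
 {code}
 import java.nio.ByteBuffer;

 class BufferedSkipSketch {
   private final ByteBuffer buf;   // data already read from the block

   BufferedSkipSketch(ByteBuffer buf) {
     this.buf = buf;
   }

   long skip(long n) {
     // Advance the position; no bytes are copied anywhere.
     int skipped = (int) Math.min(n, buf.remaining());
     buf.position(buf.position() + skipped);
     return skipped;
   }
 }
 {code}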



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5574) Remove buffer copy in BlockReader.skip

2014-12-09 Thread Binglin Chang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Binglin Chang updated HDFS-5574:

Target Version/s: 2.7.0

 Remove buffer copy in BlockReader.skip
 --

 Key: HDFS-5574
 URL: https://issues.apache.org/jira/browse/HDFS-5574
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial
 Attachments: HDFS-5574.006.patch, HDFS-5574.v1.patch, 
 HDFS-5574.v2.patch, HDFS-5574.v3.patch, HDFS-5574.v4.patch, HDFS-5574.v5.patch


 BlockReaderLocal.skip and RemoteBlockReader.skip use a temporary buffer to 
 read the skipped data into, which is not necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7481) Add ACL indicator to the Permission Denied exception.

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239195#comment-14239195
 ] 

Hadoop QA commented on HDFS-7481:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685943/HDFS-7481-002.patch
  against trunk revision 0ee4161.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 287 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8966//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8966//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8966//console

This message is automatically generated.

 Add ACL indicator to the Permission Denied exception.
 ---

 Key: HDFS-7481
 URL: https://issues.apache.org/jira/browse/HDFS-7481
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Attachments: HDFS-7481-001.patch, HDFS-7481-002.patch


 As mentioned in comment in HDFS-7454 add an ACL indicator similar to ls 
 output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7499) Add NFSv4 + Kerberos / client authentication support

2014-12-09 Thread Hari Sekhon (JIRA)
Hari Sekhon created HDFS-7499:
-

 Summary: Add NFSv4 + Kerberos / client authentication support
 Key: HDFS-7499
 URL: https://issues.apache.org/jira/browse/HDFS-7499
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 2.4.0
 Environment: HDP2.1
Reporter: Hari Sekhon


We have a requirement for secure file share access to HDFS on a kerberized 
cluster.

This is spun off from HDFS-7488, where adding Kerberos to the front-end client 
was considered; I believe this would require NFSv4 support?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7488) HDFS Windows CIFS Gateway

2014-12-09 Thread Hari Sekhon (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239227#comment-14239227
 ] 

Hari Sekhon commented on HDFS-7488:
---

Collin, I agree with a lot of what you said; the feedback was as expected. I'm 
aware CIFS is non-trivial, having spent a decade with Samba - I have actually 
layered Samba successfully over a MapR-FS NFS loopback mount point, but I don't 
want to get backed into a corner on Hadoop platform choices just on this point.

The NFS gateway has Kerberos support on the back end for accessing kerberized 
HDFS, which I've used for a loopback mount point, and that works fine. I believe 
the gateway would need to add NFSv4 support for client-side Kerberos?

I've just created HDFS-7499 to track the idea of Kerberos for client 
authentication and whatever extensions that will require.

Andrew, thanks very much for the WebDAV solution. Would that be performant 
enough, given that users want to bulk-load data via the Windows mapped drive as 
well as run existing C# programs on data contained in HDFS?

I think we should leave this ticket open for native CIFS support somehow in 
future, either via a dedicated gateway equivalent to the NFS gateway, or by 
finding a way for Samba to layer successfully over the existing NFS gateway and 
offload the CIFS-specific bits to Samba.

 HDFS Windows CIFS Gateway
 -

 Key: HDFS-7488
 URL: https://issues.apache.org/jira/browse/HDFS-7488
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 2.4.0
 Environment: HDP 2.1
Reporter: Hari Sekhon

 Stakeholders are pressuring for native Windows file share access to our 
 Hadoop clusters.
 I've used the NFS gateway several times, and while it's theoretically viable 
 for users now that UID mapping is implemented in 2.5, insecure NFS makes our 
 fully Kerberized clusters' security pointless.
 We really need CIFS gateway access to enforce authentication, which NFSv3 
 doesn't (NFSv4?).
 I've even tried Samba over an NFS gateway loopback mount point (don't laugh - 
 they want it that badly), and set hdfs atime precision to an hour to 
 prevent FSNamesystem.setTimes() Java exceptions in the gateway logs, but the 
 NFS server still doesn't like the Windows CIFS client actions:
 {code}2014-12-08 16:31:38,053 ERROR nfs3.RpcProgramNfs3 
 (RpcProgramNfs3.java:setattr(346)) - Setting file size is not supported when 
 setattr, fileId: 25597
 2014-12-08 16:31:38,065 INFO  nfs3.WriteManager 
 (WriteManager.java:handleWrite(136)) - No opened stream for fileId:25597
 2014-12-08 16:31:38,122 INFO  nfs3.OpenFileCtx 
 (OpenFileCtx.java:receivedNewWriteInternal(624)) - Have to change stable 
 write to unstable write:FILE_SYNC
 {code}
 A debug of the Samba server shows it trying to set metadata timestamps, which 
 hangs indefinitely and results in the creation of a zero-byte file when 
 copying a file into HDFS /tmp via the Windows mapped drive.
 {code}
 ...
  smb_set_file_time: setting utimes to modified values.
 file_ntime: actime: Thu Jan  1 01:00:00 1970
 file_ntime: modtime: Mon Dec  8 16:31:38 2014
 file_ntime: ctime: Thu Jan  1 01:00:00 1970
 file_ntime: createtime: Thu Jan  1 01:00:00 1970
 {code}
 This is the traceback from the NFS gateway log when hdfs atime precision was 
 set to 0:
 {code}org.apache.hadoop.ipc.RemoteException(java.io.IOException): Access time 
 for hdfs is not configured.  Please set dfs.namenode.accesstime.precision 
 configuration parameter.
 at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setTimes(FSNamesystem.java:1960)
 at 
 org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setTimes(NameNodeRpcServer.java:950)
 at 
 org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setTimes(ClientNamenodeProtocolServerSideTranslatorPB.java:833)
 at 
 org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
 at 
 org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
 at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
 ...
 {code}
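 (For reference, the atime workaround mentioned above amounts to raising the 
 precision to one hour; it is normally set in hdfs-site.xml, shown here 
 programmatically only as a sketch:)
 {code}
 import org.apache.hadoop.conf.Configuration;

 Configuration conf = new Configuration();
 // 1 hour in milliseconds, for the property named in the traceback above
 conf.setLong("dfs.namenode.accesstime.precision", 60L * 60 * 1000);
 {code}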
 Regards,
 Hari Sekhon
 http://www.linkedin.com/in/harisekhon



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries to reduce memory footprint of NameNode

2014-12-09 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239253#comment-14239253
 ] 

Vinayakumar B commented on HDFS-7456:
-

Thanks [~cnauroth] for the detailed tests and comments. I will try to post a 
patch soon.

 De-duplicate AclFeature instances with same AclEntries to reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch


 As discussed in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
  de-duplication of {{AclFeature}} helps reduce the memory footprint of 
 the namenode.
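 A directional sketch of such de-duplication (an assumed shape, not the 
 eventual patch): intern entry lists so that inodes with identical ACLs share 
 one instance:
 {code}
 import java.util.HashMap;
 import java.util.List;
 import java.util.Map;

 class AclInterner<E> {
   private final Map<List<E>, List<E>> canonical = new HashMap<>();

   // Return one shared instance per distinct entry list.
   synchronized List<E> intern(List<E> entries) {
     List<E> existing = canonical.putIfAbsent(entries, entries);
     return existing != null ? existing : entries;
   }
 }
 {code}
 A real implementation would also need reference counting so that entries for 
 deleted inodes can be evicted.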



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7490) HDFS tests OOM on Java7+

2014-12-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-7490:
-
Status: Open  (was: Patch Available)

 HDFS tests OOM on Java7+
 

 Key: HDFS-7490
 URL: https://issues.apache.org/jira/browse/HDFS-7490
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, test
Affects Versions: 3.0.0
 Environment: Jenkins on Java 7+
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch


 The HDFS tests are running out of memory with the switch to Java7; 
 HADOOP-11363 covers the patch; this JIRA is in HDFS to force-test it there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7490) HDFS tests OOM on Java7+

2014-12-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-7490:
-
 Target Version/s: 2.7.0  (was: 3.0.0)
Affects Version/s: (was: 3.0.0)
   2.7.0
   Status: Patch Available  (was: Open)

 HDFS tests OOM on Java7+
 

 Key: HDFS-7490
 URL: https://issues.apache.org/jira/browse/HDFS-7490
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, test
Affects Versions: 2.7.0
 Environment: Jenkins on Java 7+
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch, HADOOP-11363-002.patch


 The HDFS tests are running out of memory with the switch to Java7; 
 HADOOP-11363 covers the patch; this JIRA is in HDFS to force-test it there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7490) HDFS tests OOM on Java7+

2014-12-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-7490:
-
Attachment: HADOOP-11363-002.patch

 HDFS tests OOM on Java7+
 

 Key: HDFS-7490
 URL: https://issues.apache.org/jira/browse/HDFS-7490
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, test
Affects Versions: 2.7.0
 Environment: Jenkins on Java 7+
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch, HADOOP-11363-002.patch


 The HDFS tests are running out of memory with the switch to Java7; 
 HADOOP-11363 covers the patch; this JIRA is in HDFS to force-test it there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7490) HDFS tests OOM on Java7+

2014-12-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-7490:
-
Status: Open  (was: Patch Available)

core patch is checked in; a trunk build should suffice to validate the fix

 HDFS tests OOM on Java7+
 

 Key: HDFS-7490
 URL: https://issues.apache.org/jira/browse/HDFS-7490
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, test
Affects Versions: 2.7.0
 Environment: Jenkins on Java 7+
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch, HADOOP-11363-002.patch


 The HDFS tests are running out of memory with the switch to Java7; 
 HADOOP-11363 covers the patch; this JIRA is in HDFS to force-test it there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5578) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2014-12-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HDFS-5578:
-
Affects Version/s: (was: 2.3.0)
   (was: 3.0.0)
   2.7.0
   Status: Open  (was: Patch Available)

 [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments
 -

 Key: HDFS-5578
 URL: https://issues.apache.org/jira/browse/HDFS-5578
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Attachments: 5578-branch-2.patch, 5578-branch-2.patch, 
 5578-trunk.patch, 5578-trunk.patch


 Javadoc is stricter by default in JDK8 and will error out on malformed or 
 illegal tags found in doc comments. Although tagged as JDK8, all of the 
 required changes are generic Javadoc cleanups.
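 An illustrative example of the kind of cleanup involved (not taken from the 
 patch): JDK8's doclint rejects a bare '<' in doc comments that JDK7 tolerated.
 {code}
 class JavadocCleanupExample {
   // Before (fails javadoc under JDK8 with a "malformed HTML" error):
   //   /** Returns a value x with 0 <= x < n. */
   // After:
   /**
    * Returns a value x with 0 &lt;= x &lt; n.
    *
    * @param n the exclusive upper bound
    * @return a non-negative value less than {@code n}
    */
   static int next(int n) {
     return n > 0 ? n - 1 : 0;
   }
 }
 {code}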



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5578) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239286#comment-14239286
 ] 

Hadoop QA commented on HDFS-5578:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12621432/5578-trunk.patch
  against trunk revision f71eb51.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8972//console

This message is automatically generated.

 [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments
 -

 Key: HDFS-5578
 URL: https://issues.apache.org/jira/browse/HDFS-5578
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Attachments: 5578-branch-2.patch, 5578-branch-2.patch, 
 5578-trunk.patch, 5578-trunk.patch


 Javadoc is stricter by default in JDK8 and will error out on malformed or 
 illegal tags found in doc comments. Although tagged as JDK8, all of the 
 required changes are generic Javadoc cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7490) HDFS tests OOM on Java7+

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239289#comment-14239289
 ] 

Hadoop QA commented on HDFS-7490:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12685987/HADOOP-11363-002.patch
  against trunk revision db73cc9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8971//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8971//console

This message is automatically generated.

 HDFS tests OOM on Java7+
 

 Key: HDFS-7490
 URL: https://issues.apache.org/jira/browse/HDFS-7490
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, test
Affects Versions: 2.7.0
 Environment: Jenkins on Java 7+
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch, HADOOP-11363-002.patch


 The HDFS tests are running out of memory with the switch to Java7; 
 HADOOP-11363 covers the patch; this JIRA is in HDFS to force-test it there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7473) Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239302#comment-14239302
 ] 

Hudson commented on HDFS-7473:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #770 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/770/])
HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0 is 
invalid. Contributed by Akira AJISAKA. (cnauroth: rev 
d555bb2120cb44d094546e6b6560926561876c10)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid
 ---

 Key: HDFS-7473
 URL: https://issues.apache.org/jira/browse/HDFS-7473
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.0, 2.5.2
Reporter: Jason Keller
Assignee: Akira AJISAKA
  Labels: newbie
 Fix For: 2.7.0

 Attachments: HDFS-7473-001.patch


 When setting dfs.namenode.fs-limits.max-directory-items to 0 in 
 hdfs-site.xml, the error "java.lang.IllegalArgumentException: Cannot set 
 dfs.namenode.fs-limits.max-directory-items to a value less than 0 or greater 
 than 640" is produced.  However, the documentation shows that 0 is a 
 valid setting for dfs.namenode.fs-limits.max-directory-items, turning the 
 check off.
 Looking into the code in 
 hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
  shows that the culprit is
 {code}
 Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS,
     "Cannot set " + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY
     + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
 {code}
 This checks if maxDirItems is greater than 0.  Since 0 is not greater than 0, 
 it produces an error.
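 A quick way to reproduce the rejection (a sketch; in practice the value comes 
 from hdfs-site.xml rather than code):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hdfs.DFSConfigKeys;

 public class MaxDirItemsRepro {
   public static void main(String[] args) {
     Configuration conf = new Configuration();
     // The documentation suggested 0 turns the check off, but FSDirectory's
     // precondition (quoted above) rejects it when the NameNode starts.
     conf.setInt(DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY, 0);
   }
 }
 {code}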



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7384) 'getfacl' command and 'getAclStatus' output should be in sync

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239303#comment-14239303
 ] 

Hudson commented on HDFS-7384:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #770 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/770/])
HDFS-7384. getfacl command and getAclStatus output should be in sync. 
Contributed by Vinayakumar B. (cnauroth: rev 
ffe942b82c1208bc7b22899da3a233944cb5ab52)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testAclCLI.xml
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java


 'getfacl' command and 'getAclStatus' output should be in sync
 -

 Key: HDFS-7384
 URL: https://issues.apache.org/jira/browse/HDFS-7384
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.7.0

 Attachments: HDFS-7384-001.patch, HDFS-7384-002.patch, 
 HDFS-7384-003.patch, HDFS-7384-004.patch, HDFS-7384-005.patch, 
 HDFS-7384-006.patch, HDFS-7384-007.patch, HDFS-7384-008.patch, 
 HDFS-7384-009.patch, HDFS-7384-010.patch


 The *getfacl* command prints all the entries, including basic and extended 
 entries, mask entries, and effective permissions.
 The *getAclStatus* FileSystem API, however, returns only the extended ACL 
 entries set by the user; it includes neither the mask entry nor the effective 
 permissions.
 To benefit clients using the API, it would be better to include the 'mask' 
 entry and effective permissions in the returned list of entries.
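 A usage sketch of the API side (the path and setup are illustrative):
 {code}
 import java.io.IOException;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.fs.permission.AclEntry;
 import org.apache.hadoop.fs.permission.AclStatus;

 public class GetAclStatusExample {
   public static void main(String[] args) throws IOException {
     FileSystem fs = FileSystem.get(new Configuration());
     AclStatus status = fs.getAclStatus(new Path("/data"));
     for (AclEntry entry : status.getEntries()) {
       // Before this change, the mask entry and effective permissions that
       // 'hdfs dfs -getfacl /data' prints are absent from this list.
       System.out.println(entry);
     }
   }
 }
 {code}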



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7486) Consolidate XAttr-related implementation into a single class

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239296#comment-14239296
 ] 

Hudson commented on HDFS-7486:
--

FAILURE: Integrated in Hadoop-Yarn-trunk #770 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/770/])
HDFS-7486. Consolidate XAttr-related implementation into a single class. 
Contributed by Haohui Mai. (wheat9: rev 
6c5bbd7a42d1e8b4416fd8870fd60c67867b35c9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Consolidate XAttr-related implementation into a single class
 

 Key: HDFS-7486
 URL: https://issues.apache.org/jira/browse/HDFS-7486
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7486.000.patch


 This jira proposes to consolidate XAttr-related implementation in 
 {{FSNamesystem}} and {{FSDirectory}} into a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7498) Simplify the logic in INodesInPath

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239309#comment-14239309
 ] 

Hadoop QA commented on HDFS-7498:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685961/HDFS-7498.000.patch
  against trunk revision db73cc9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 352 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.datanode.TestMultipleNNDataBlockScanner
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots

  The following test timeouts occurred in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8967//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8967//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8967//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8967//console

This message is automatically generated.

 Simplify the logic in INodesInPath
 --

 Key: HDFS-7498
 URL: https://issues.apache.org/jira/browse/HDFS-7498
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7498.000.patch


 Currently we have relatively complicated logic in INodesInPath:
 1) It can contain null elements in its INode array, and in 
 {{mkdirRecursively}} these null INodes are replaced with new directories.
 2) Operations like rename may also replace inodes in its INode array.
 3) {{getINodes}} requires trimming the inodes array if the INodesInPath is 
 derived from a dot-snapshot path.
 4) A lot of methods directly use/manipulate its INode array.
 This jira aims to simplify the logic of INodesInPath. Specifically, we can 
 make INodesInPath an immutable data structure and move the inode-trimming 
 logic into path resolution, as sketched below.
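 A directional sketch of the immutable shape (names are invented, and INode is 
 stood in by Object so the sketch is self-contained):
 {code}
 final class ResolvedPath {
   private final Object[] inodes;   // fixed at resolve time, never patched up

   ResolvedPath(Object[] inodes) {
     this.inodes = inodes.clone();  // defensive copy enforces immutability
   }

   Object getLastINode() {
     return inodes[inodes.length - 1];
   }

   // Rename/mkdir would build a new ResolvedPath instead of mutating this one.
 }
 {code}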



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7473) Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239320#comment-14239320
 ] 

Hudson commented on HDFS-7473:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #34 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/34/])
HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0 is 
invalid. Contributed by Akira AJISAKA. (cnauroth: rev 
d555bb2120cb44d094546e6b6560926561876c10)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid
 ---

 Key: HDFS-7473
 URL: https://issues.apache.org/jira/browse/HDFS-7473
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.0, 2.5.2
Reporter: Jason Keller
Assignee: Akira AJISAKA
  Labels: newbie
 Fix For: 2.7.0

 Attachments: HDFS-7473-001.patch


 When setting dfs.namenode.fs-limits.max-directory-items to 0 in 
 hdfs-site.xml, the error "java.lang.IllegalArgumentException: Cannot set 
 dfs.namenode.fs-limits.max-directory-items to a value less than 0 or greater 
 than 640" is produced.  However, the documentation shows that 0 is a 
 valid setting for dfs.namenode.fs-limits.max-directory-items, turning the 
 check off.
 Looking into the code in 
 hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
  shows that the culprit is
 {code}
 Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS,
     "Cannot set " + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY
     + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
 {code}
 This checks if maxDirItems is greater than 0.  Since 0 is not greater than 0, 
 it produces an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7384) 'getfacl' command and 'getAclStatus' output should be in sync

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239321#comment-14239321
 ] 

Hudson commented on HDFS-7384:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #34 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/34/])
HDFS-7384. getfacl command and getAclStatus output should be in sync. 
Contributed by Vinayakumar B. (cnauroth: rev 
ffe942b82c1208bc7b22899da3a233944cb5ab52)
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testAclCLI.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java


 'getfacl' command and 'getAclStatus' output should be in sync
 -

 Key: HDFS-7384
 URL: https://issues.apache.org/jira/browse/HDFS-7384
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.7.0

 Attachments: HDFS-7384-001.patch, HDFS-7384-002.patch, 
 HDFS-7384-003.patch, HDFS-7384-004.patch, HDFS-7384-005.patch, 
 HDFS-7384-006.patch, HDFS-7384-007.patch, HDFS-7384-008.patch, 
 HDFS-7384-009.patch, HDFS-7384-010.patch


 The *getfacl* command prints all the entries, including basic and extended 
 entries, mask entries, and effective permissions.
 The *getAclStatus* FileSystem API, however, returns only the extended ACL 
 entries set by the user; it includes neither the mask entry nor the effective 
 permissions.
 To benefit clients using the API, it would be better to include the 'mask' 
 entry and effective permissions in the returned list of entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7486) Consolidate XAttr-related implementation into a single class

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239314#comment-14239314
 ] 

Hudson commented on HDFS-7486:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #34 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/34/])
HDFS-7486. Consolidate XAttr-related implementation into a single class. 
Contributed by Haohui Mai. (wheat9: rev 
6c5bbd7a42d1e8b4416fd8870fd60c67867b35c9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java


 Consolidate XAttr-related implementation into a single class
 

 Key: HDFS-7486
 URL: https://issues.apache.org/jira/browse/HDFS-7486
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7486.000.patch


 This jira proposes to consolidate XAttr-related implementation in 
 {{FSNamesystem}} and {{FSDirectory}} into a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6308) TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239331#comment-14239331
 ] 

Hadoop QA commented on HDFS-6308:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12642622/HDFS-6308.v1.patch
  against trunk revision db73cc9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 287 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8969//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8969//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8969//console

This message is automatically generated.

 TestDistributedFileSystem#testGetFileBlockStorageLocationsError is flaky
 

 Key: HDFS-6308
 URL: https://issues.apache.org/jira/browse/HDFS-6308
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Binglin Chang
Assignee: Binglin Chang
 Attachments: HDFS-6308.v1.patch


 Found this in the pre-commit build of HDFS-6261:
 {code}
 java.lang.AssertionError: Expected one valid and one invalid volume
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.assertTrue(Assert.java:41)
   at 
 org.apache.hadoop.hdfs.TestDistributedFileSystem.testGetFileBlockStorageLocationsError(TestDistributedFileSystem.java:837)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5574) Remove buffer copy in BlockReader.skip

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239350#comment-14239350
 ] 

Hadoop QA commented on HDFS-5574:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685973/HDFS-5574.006.patch
  against trunk revision db73cc9.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 352 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDFSInputStream
  
org.apache.hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8970//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8970//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8970//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8970//console

This message is automatically generated.

 Remove buffer copy in BlockReader.skip
 --

 Key: HDFS-5574
 URL: https://issues.apache.org/jira/browse/HDFS-5574
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial
 Attachments: HDFS-5574.006.patch, HDFS-5574.v1.patch, 
 HDFS-5574.v2.patch, HDFS-5574.v3.patch, HDFS-5574.v4.patch, HDFS-5574.v5.patch


 BlockReaderLocal.skip and RemoteBlockReader.skip use a temporary buffer to 
 read the skipped data into, which is not necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7486) Consolidate XAttr-related implementation into a single class

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239429#comment-14239429
 ] 

Hudson commented on HDFS-7486:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #33 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/33/])
HDFS-7486. Consolidate XAttr-related implementation into a single class. 
Contributed by Haohui Mai. (wheat9: rev 
6c5bbd7a42d1e8b4416fd8870fd60c67867b35c9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 Consolidate XAttr-related implementation into a single class
 

 Key: HDFS-7486
 URL: https://issues.apache.org/jira/browse/HDFS-7486
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7486.000.patch


 This jira proposes to consolidate XAttr-related implementation in 
 {{FSNamesystem}} and {{FSDirectory}} into a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7384) 'getfacl' command and 'getAclStatus' output should be in sync

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239438#comment-14239438
 ] 

Hudson commented on HDFS-7384:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #33 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/33/])
HDFS-7384. getfacl command and getAclStatus output should be in sync. 
Contributed by Vinayakumar B. (cnauroth: rev 
ffe942b82c1208bc7b22899da3a233944cb5ab52)
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testAclCLI.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java


 'getfacl' command and 'getAclStatus' output should be in sync
 -

 Key: HDFS-7384
 URL: https://issues.apache.org/jira/browse/HDFS-7384
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.7.0

 Attachments: HDFS-7384-001.patch, HDFS-7384-002.patch, 
 HDFS-7384-003.patch, HDFS-7384-004.patch, HDFS-7384-005.patch, 
 HDFS-7384-006.patch, HDFS-7384-007.patch, HDFS-7384-008.patch, 
 HDFS-7384-009.patch, HDFS-7384-010.patch


 The *getfacl* command prints all the entries, including basic and extended 
 entries, mask entries, and effective permissions.
 The *getAclStatus* FileSystem API, however, returns only the extended ACL 
 entries set by the user; it includes neither the mask entry nor the effective 
 permissions.
 To benefit clients using the API, it would be better to include the 'mask' 
 entry and effective permissions in the returned list of entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7473) Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239437#comment-14239437
 ] 

Hudson commented on HDFS-7473:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #33 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/33/])
HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0 is 
invalid. Contributed by Akira AJISAKA. (cnauroth: rev 
d555bb2120cb44d094546e6b6560926561876c10)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


 Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid
 ---

 Key: HDFS-7473
 URL: https://issues.apache.org/jira/browse/HDFS-7473
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.0, 2.5.2
Reporter: Jason Keller
Assignee: Akira AJISAKA
  Labels: newbie
 Fix For: 2.7.0

 Attachments: HDFS-7473-001.patch


 When setting dfs.namenode.fs-limits.max-directory-items to 0 in 
 hdfs-site.xml, the error "java.lang.IllegalArgumentException: Cannot set 
 dfs.namenode.fs-limits.max-directory-items to a value less than 0 or greater 
 than 640" is produced.  However, the documentation shows that 0 is a 
 valid setting for dfs.namenode.fs-limits.max-directory-items, turning the 
 check off.
 Looking into the code in 
 hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
  shows that the culprit is
 {code}
 Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS,
     "Cannot set " + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY
     + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
 {code}
 This checks if maxDirItems is greater than 0.  Since 0 is not greater than 0, 
 it produces an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7486) Consolidate XAttr-related implementation into a single class

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239440#comment-14239440
 ] 

Hudson commented on HDFS-7486:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1966 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1966/])
HDFS-7486. Consolidate XAttr-related implementation into a single class. 
Contributed by Haohui Mai. (wheat9: rev 
6c5bbd7a42d1e8b4416fd8870fd60c67867b35c9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java


 Consolidate XAttr-related implementation into a single class
 

 Key: HDFS-7486
 URL: https://issues.apache.org/jira/browse/HDFS-7486
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7486.000.patch


 This jira proposes to consolidate XAttr-related implementation in 
 {{FSNamesystem}} and {{FSDirectory}} into a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7473) Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239448#comment-14239448
 ] 

Hudson commented on HDFS-7473:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1966 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1966/])
HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0 is 
invalid. Contributed by Akira AJISAKA. (cnauroth: rev 
d555bb2120cb44d094546e6b6560926561876c10)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid
 ---

 Key: HDFS-7473
 URL: https://issues.apache.org/jira/browse/HDFS-7473
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.0, 2.5.2
Reporter: Jason Keller
Assignee: Akira AJISAKA
  Labels: newbie
 Fix For: 2.7.0

 Attachments: HDFS-7473-001.patch


 When setting dfs.namenode.fs-limits.max-directory-items to 0 in 
 hdfs-site.xml, the error "java.lang.IllegalArgumentException: Cannot set 
 dfs.namenode.fs-limits.max-directory-items to a value less than 0 or greater 
 than 640" is produced.  However, the documentation shows that 0 is a 
 valid setting for dfs.namenode.fs-limits.max-directory-items, turning the 
 check off.
 Looking into the code in 
 hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
  shows that the culprit is
 {code}
 Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS,
     "Cannot set " + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY
     + " to a value less than 0 or greater than " + MAX_DIR_ITEMS);
 {code}
 This checks if maxDirItems is greater than 0.  Since 0 is not greater than 0, 
 it produces an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7384) 'getfacl' command and 'getAclStatus' output should be in sync

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239449#comment-14239449
 ] 

Hudson commented on HDFS-7384:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1966 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1966/])
HDFS-7384. getfacl command and getAclStatus output should be in sync. 
Contributed by Vinayakumar B. (cnauroth: rev 
ffe942b82c1208bc7b22899da3a233944cb5ab52)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testAclCLI.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 'getfacl' command and 'getAclStatus' output should be in sync
 -

 Key: HDFS-7384
 URL: https://issues.apache.org/jira/browse/HDFS-7384
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.7.0

 Attachments: HDFS-7384-001.patch, HDFS-7384-002.patch, 
 HDFS-7384-003.patch, HDFS-7384-004.patch, HDFS-7384-005.patch, 
 HDFS-7384-006.patch, HDFS-7384-007.patch, HDFS-7384-008.patch, 
 HDFS-7384-009.patch, HDFS-7384-010.patch


 The *getfacl* command prints all the entries, including basic and extended 
 entries, mask entries, and effective permissions.
 But the *getAclStatus* FileSystem API returns only the extended ACL entries 
 set by the user, and it includes neither the mask entry nor the effective 
 permissions.
 To benefit clients using the API, it would be better to include the 'mask' 
 entry and effective permissions in the returned list of entries.
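 
 For reference, the effective-permission rule that getfacl applies is a 
 bitwise AND of an entry's permission bits with the mask; a minimal sketch 
 using plain permission bits rather than the actual FsAction type:
 {code}
 public class EffectivePermSketch {
   // effective permission = entry permission AND mask
   static int effective(int entryPerm, int maskPerm) {
     return entryPerm & maskPerm;
   }

   public static void main(String[] args) {
     // A named-user entry of rwx (7) under a mask of r-x (5) is reported
     // by getfacl with an effective permission of r-x (5).
     System.out.println(effective(7, 5)); // prints 5
   }
 }
 {code}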



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7384) 'getfacl' command and 'getAclStatus' output should be in sync

2014-12-09 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239454#comment-14239454
 ] 

Vinayakumar B commented on HDFS-7384:
-

Thanks a lot Chris for your patience and feedback :)

 'getfacl' command and 'getAclStatus' output should be in sync
 -

 Key: HDFS-7384
 URL: https://issues.apache.org/jira/browse/HDFS-7384
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.7.0

 Attachments: HDFS-7384-001.patch, HDFS-7384-002.patch, 
 HDFS-7384-003.patch, HDFS-7384-004.patch, HDFS-7384-005.patch, 
 HDFS-7384-006.patch, HDFS-7384-007.patch, HDFS-7384-008.patch, 
 HDFS-7384-009.patch, HDFS-7384-010.patch


 *getfacl* command will print all the entries including basic and extended 
 entries, mask entries and effective permissions.
 But, *getAclStatus* FileSystem API will return only extended ACL entries set 
 by the user. But this will not include the mask entry as well as effective 
 permissions.
 To benefit the client using API, better to include 'mask' entry and effective 
 permissions in the return list of entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-12-09 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-6833:
-
Attachment: HDFS-6833-10.patch

Hi all,

I attach a new patch file which reflects the following approach.

{quote}
Let FsDatasetAsyncDiskService accumulate the list of blocks whose 
ReplicaFileDeleteTask is FINISHED up to a certain size, then call the FsDataset 
API to remove them from FsDatasetImpl#deletingBlock.
{quote}

In this patch, once the list of deleted blocks reaches 1000 entries, the 
FsDataset API is called to remove them from FsDatasetImpl#deletingBlock.
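
A minimal sketch of that batching approach (class and method names here are 
illustrative, not the patch's actual API):
{code}
import java.util.ArrayList;
import java.util.List;

class DeletedBlockBatcher {
  private static final int FLUSH_THRESHOLD = 1000; // batch size used in the patch
  private final List<Long> finished = new ArrayList<>();

  // Invoked when a ReplicaFileDeleteTask reports a block as deleted.
  synchronized void onBlockDeleted(long blockId) {
    finished.add(blockId);
    if (finished.size() >= FLUSH_THRESHOLD) {
      removeFromDeletingBlock(new ArrayList<>(finished));
      finished.clear();
    }
  }

  // Stand-in for the FsDataset call that drops the entries from
  // FsDatasetImpl#deletingBlock.
  private void removeFromDeletingBlock(List<Long> blockIds) {
    // e.g. dataset.removeDeletedBlocks(bpid, blockIds) -- an assumed name
  }
}
{code}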

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0, 2.5.1
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
Priority: Critical
 Attachments: HDFS-6833-10.patch, HDFS-6833-6-2.patch, 
 HDFS-6833-6-3.patch, HDFS-6833-6.patch, HDFS-6833-7-2.patch, 
 HDFS-6833-7.patch, HDFS-6833.8.patch, HDFS-6833.9.patch, HDFS-6833.patch, 
 HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, in the current implementation DirectoryScanner may be executed while 
 DataNode is deleting the block. And the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 The deleting block's information is registered in the DataNode's memory.
 So when the DataNode sends a block report, the NameNode receives wrong block 
 information.
 For example, when we execute recommissioning or change the replication 
 factor, the NameNode may delete the right block as an excess replica 
 (ExcessReplicate) because of this problem.
 As a result, Under-Replicated Blocks and Missing Blocks occur.
 When the DataNode runs DirectoryScanner, it should not register a block that 
 is being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7486) Consolidate XAttr-related implementation into a single class

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239504#comment-14239504
 ] 

Hudson commented on HDFS-7486:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #37 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/37/])
HDFS-7486. Consolidate XAttr-related implementation into a single class. 
Contributed by Haohui Mai. (wheat9: rev 
6c5bbd7a42d1e8b4416fd8870fd60c67867b35c9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 Consolidate XAttr-related implementation into a single class
 

 Key: HDFS-7486
 URL: https://issues.apache.org/jira/browse/HDFS-7486
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7486.000.patch


 This jira proposes to consolidate the XAttr-related implementation in 
 {{FSNamesystem}} and {{FSDirectory}} into a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7473) Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239513#comment-14239513
 ] 

Hudson commented on HDFS-7473:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #37 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/37/])
HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0 is 
invalid. Contributed by Akira AJISAKA. (cnauroth: rev 
d555bb2120cb44d094546e6b6560926561876c10)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid
 ---

 Key: HDFS-7473
 URL: https://issues.apache.org/jira/browse/HDFS-7473
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.0, 2.5.2
Reporter: Jason Keller
Assignee: Akira AJISAKA
  Labels: newbie
 Fix For: 2.7.0

 Attachments: HDFS-7473-001.patch


 When setting dfs.namenode.fs-limits.max-directory-items to 0 in 
 hdfs-site.xml, the error "java.lang.IllegalArgumentException: Cannot set 
 dfs.namenode.fs-limits.max-directory-items to a value less than 0 or greater 
 than 6400000" is produced.  However, the documentation shows that 0 is a 
 valid setting for dfs.namenode.fs-limits.max-directory-items, turning the 
 check off.
 Looking into the code in 
 hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
  shows that the culprit is
 Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS, 
 "Cannot set " + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY + " to a 
 value less than 0 or greater than " + MAX_DIR_ITEMS);
 This checks whether maxDirItems is strictly greater than 0.  Since 0 is not 
 greater than 0, it produces an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7384) 'getfacl' command and 'getAclStatus' output should be in sync

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239514#comment-14239514
 ] 

Hudson commented on HDFS-7384:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #37 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/37/])
HDFS-7384. getfacl command and getAclStatus output should be in sync. 
Contributed by Vinayakumar B. (cnauroth: rev 
ffe942b82c1208bc7b22899da3a233944cb5ab52)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testAclCLI.xml
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java


 'getfacl' command and 'getAclStatus' output should be in sync
 -

 Key: HDFS-7384
 URL: https://issues.apache.org/jira/browse/HDFS-7384
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.7.0

 Attachments: HDFS-7384-001.patch, HDFS-7384-002.patch, 
 HDFS-7384-003.patch, HDFS-7384-004.patch, HDFS-7384-005.patch, 
 HDFS-7384-006.patch, HDFS-7384-007.patch, HDFS-7384-008.patch, 
 HDFS-7384-009.patch, HDFS-7384-010.patch


 The *getfacl* command prints all the entries, including basic and extended 
 entries, mask entries, and effective permissions.
 But the *getAclStatus* FileSystem API returns only the extended ACL entries 
 set by the user, and it includes neither the mask entry nor the effective 
 permissions.
 To benefit clients using the API, it would be better to include the 'mask' 
 entry and effective permissions in the returned list of entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-6833) DirectoryScanner should not register a deleting block with memory of DataNode

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239520#comment-14239520
 ] 

Hadoop QA commented on HDFS-6833:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686021/HDFS-6833-10.patch
  against trunk revision 82707b4.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8973//console

This message is automatically generated.

 DirectoryScanner should not register a deleting block with memory of DataNode
 -

 Key: HDFS-6833
 URL: https://issues.apache.org/jira/browse/HDFS-6833
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.0.0, 2.5.0, 2.5.1
Reporter: Shinichi Yamashita
Assignee: Shinichi Yamashita
Priority: Critical
 Attachments: HDFS-6833-10.patch, HDFS-6833-6-2.patch, 
 HDFS-6833-6-3.patch, HDFS-6833-6.patch, HDFS-6833-7-2.patch, 
 HDFS-6833-7.patch, HDFS-6833.8.patch, HDFS-6833.9.patch, HDFS-6833.patch, 
 HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch, HDFS-6833.patch


 When a block is deleted in DataNode, the following messages are usually 
 output.
 {code}
 2014-08-07 17:53:11,606 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:11,617 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 However, in the current implementation DirectoryScanner may be executed while 
 DataNode is deleting the block. And the following messages are output.
 {code}
 2014-08-07 17:53:30,519 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Scheduling blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
  for deletion
 2014-08-07 17:53:31,426 INFO 
 org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool 
 BP-1887080305-172.28.0.101-1407398838872 Total blocks: 1, missing metadata 
 files:0, missing block files:0, missing blocks in memory:1, mismatched 
 blocks:0
 2014-08-07 17:53:31,426 WARN 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added 
 missing block to memory FinalizedReplica, blk_1073741825_1001, FINALIZED
   getNumBytes() = 21230663
   getBytesOnDisk()  = 21230663
   getVisibleLength()= 21230663
   getVolume()   = /hadoop/data1/dfs/data/current
   getBlockFile()= 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
   unlinked  =false
 2014-08-07 17:53:31,531 INFO 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService:
  Deleted BP-1887080305-172.28.0.101-1407398838872 blk_1073741825_1001 file 
 /hadoop/data1/dfs/data/current/BP-1887080305-172.28.0.101-1407398838872/current/finalized/subdir0/subdir0/blk_1073741825
 {code}
 The deleting block's information is registered in the DataNode's memory.
 So when the DataNode sends a block report, the NameNode receives wrong block 
 information.
 For example, when we execute recommissioning or change the replication 
 factor, the NameNode may delete the right block as an excess replica 
 (ExcessReplicate) because of this problem.
 As a result, Under-Replicated Blocks and Missing Blocks occur.
 When the DataNode runs DirectoryScanner, it should not register a block that 
 is being deleted.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7384) 'getfacl' command and 'getAclStatus' output should be in sync

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239538#comment-14239538
 ] 

Hudson commented on HDFS-7384:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1987 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1987/])
HDFS-7384. getfacl command and getAclStatus output should be in sync. 
Contributed by Vinayakumar B. (cnauroth: rev 
ffe942b82c1208bc7b22899da3a233944cb5ab52)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testAclCLI.xml
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java


 'getfacl' command and 'getAclStatus' output should be in sync
 -

 Key: HDFS-7384
 URL: https://issues.apache.org/jira/browse/HDFS-7384
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.7.0

 Attachments: HDFS-7384-001.patch, HDFS-7384-002.patch, 
 HDFS-7384-003.patch, HDFS-7384-004.patch, HDFS-7384-005.patch, 
 HDFS-7384-006.patch, HDFS-7384-007.patch, HDFS-7384-008.patch, 
 HDFS-7384-009.patch, HDFS-7384-010.patch


 The *getfacl* command prints all the entries, including basic and extended 
 entries, mask entries, and effective permissions.
 But the *getAclStatus* FileSystem API returns only the extended ACL entries 
 set by the user, and it includes neither the mask entry nor the effective 
 permissions.
 To benefit clients using the API, it would be better to include the 'mask' 
 entry and effective permissions in the returned list of entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7486) Consolidate XAttr-related implementation into a single class

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239528#comment-14239528
 ] 

Hudson commented on HDFS-7486:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1987 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1987/])
HDFS-7486. Consolidate XAttr-related implementation into a single class. 
Contributed by Haohui Mai. (wheat9: rev 
6c5bbd7a42d1e8b4416fd8870fd60c67867b35c9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


 Consolidate XAttr-related implementation into a single class
 

 Key: HDFS-7486
 URL: https://issues.apache.org/jira/browse/HDFS-7486
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7486.000.patch


 This jira proposes to consolidate the XAttr-related implementation in 
 {{FSNamesystem}} and {{FSDirectory}} into a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7473) Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239537#comment-14239537
 ] 

Hudson commented on HDFS-7473:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1987 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1987/])
HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0 is 
invalid. Contributed by Akira AJISAKA. (cnauroth: rev 
d555bb2120cb44d094546e6b6560926561876c10)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml


 Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid
 ---

 Key: HDFS-7473
 URL: https://issues.apache.org/jira/browse/HDFS-7473
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.0, 2.5.2
Reporter: Jason Keller
Assignee: Akira AJISAKA
  Labels: newbie
 Fix For: 2.7.0

 Attachments: HDFS-7473-001.patch


 When setting dfs.namenode.fs-limits.max-directory-items to 0 in 
 hdfs-site.xml, the error "java.lang.IllegalArgumentException: Cannot set 
 dfs.namenode.fs-limits.max-directory-items to a value less than 0 or greater 
 than 6400000" is produced.  However, the documentation shows that 0 is a 
 valid setting for dfs.namenode.fs-limits.max-directory-items, turning the 
 check off.
 Looking into the code in 
 hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
  shows that the culprit is
 Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS, 
 "Cannot set " + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY + " to a 
 value less than 0 or greater than " + MAX_DIR_ITEMS);
 This checks whether maxDirItems is strictly greater than 0.  Since 0 is not 
 greater than 0, it produces an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7490) HDFS tests OOM on Java7+

2014-12-09 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HDFS-7490.
--
   Resolution: Fixed
Fix Version/s: 2.7.0

 HDFS tests OOM on Java7+
 

 Key: HDFS-7490
 URL: https://issues.apache.org/jira/browse/HDFS-7490
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: build, test
Affects Versions: 2.7.0
 Environment: Jenkins on Java 7+
Reporter: Steve Loughran
Assignee: Steve Loughran
 Fix For: 2.7.0

 Attachments: HADOOP-11363-001.patch, HADOOP-11363-002.patch


 The HDFS tests are running out of memory with the switch to Java 7. 
 HADOOP-11363 covers the patch; this issue is filed in HDFS to force-test it there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7449) Add metrics to NFS gateway

2014-12-09 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7449:
-
Attachment: HDFS-7449.003.patch

Uploaded a patch to fix the findbugs warnings.

 Add metrics to NFS gateway
 --

 Key: HDFS-7449
 URL: https://issues.apache.org/jira/browse/HDFS-7449
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
 HDFS-7449.003.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7499) Add NFSv4 + Kerberos / client authentication support

2014-12-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239698#comment-14239698
 ] 

Allen Wittenauer commented on HDFS-7499:


RFC-compliant NFSv3 can also do Kerberos, but it is marked as an optional 
feature and many implementations do not implement it. Sun fixed this mistake: 
RFC-compliant NFSv4 requires Kerberos support.

That said, NFSv4 is a MUCH MUCH MUCH better fit for HDFS than NFSv3, due to its 
open and close semantics. 

 Add NFSv4 + Kerberos / client authentication support
 

 Key: HDFS-7499
 URL: https://issues.apache.org/jira/browse/HDFS-7499
 Project: Hadoop HDFS
  Issue Type: New Feature
Affects Versions: 2.4.0
 Environment: HDP2.1
Reporter: Hari Sekhon

 We have a requirement for secure file share access to HDFS on a kerberized 
 cluster.
 This is spun off from HDFS-7488, where adding Kerberos to the front-end client 
 was considered; I believe this would require NFSv4 support?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7449) Add metrics to NFS gateway

2014-12-09 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7449:
-
Description: Add metrics to collect the NFSv3 handler operations, response 
time, etc.
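
A minimal sketch of what such a metrics source could look like with the Hadoop 
metrics2 framework (class and field names here are illustrative, not 
necessarily the patch's actual ones):
{code}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableRate;

@Metrics(about = "NFS3 handler metrics", context = "dfs")
class Nfs3MetricsSketch {
  @Metric("READ operation count and latency")  MutableRate read;
  @Metric("WRITE operation count and latency") MutableRate write;

  static Nfs3MetricsSketch create() {
    return DefaultMetricsSystem.instance().register(
        "Nfs3MetricsSketch", "NFS3 handler metrics", new Nfs3MetricsSketch());
  }

  // Each handler records its elapsed time after serving an operation.
  void addRead(long latencyNanos)  { read.add(latencyNanos); }
  void addWrite(long latencyNanos) { write.add(latencyNanos); }
}
{code}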

 Add metrics to NFS gateway
 --

 Key: HDFS-7449
 URL: https://issues.apache.org/jira/browse/HDFS-7449
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
 HDFS-7449.003.patch


 Add metrics to collect the NFSv3 handler operations, response time, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7473) Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239709#comment-14239709
 ] 

Hudson commented on HDFS-7473:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1967 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1967/])
HDFS-7473. Document setting dfs.namenode.fs-limits.max-directory-items to 0 is 
invalid. Contributed by Akira AJISAKA. (cnauroth: rev 
d555bb2120cb44d094546e6b6560926561876c10)
* hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java


 Document setting dfs.namenode.fs-limits.max-directory-items to 0 is invalid
 ---

 Key: HDFS-7473
 URL: https://issues.apache.org/jira/browse/HDFS-7473
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.4.0, 2.5.2
Reporter: Jason Keller
Assignee: Akira AJISAKA
  Labels: newbie
 Fix For: 2.7.0

 Attachments: HDFS-7473-001.patch


 When setting dfs.namenode.fs-limits.max-directory-items to 0 in 
 hdfs-site.xml, the error "java.lang.IllegalArgumentException: Cannot set 
 dfs.namenode.fs-limits.max-directory-items to a value less than 0 or greater 
 than 6400000" is produced.  However, the documentation shows that 0 is a 
 valid setting for dfs.namenode.fs-limits.max-directory-items, turning the 
 check off.
 Looking into the code in 
 hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
  shows that the culprit is
 Preconditions.checkArgument(maxDirItems > 0 && maxDirItems <= MAX_DIR_ITEMS, 
 "Cannot set " + DFSConfigKeys.DFS_NAMENODE_MAX_DIRECTORY_ITEMS_KEY + " to a 
 value less than 0 or greater than " + MAX_DIR_ITEMS);
 This checks whether maxDirItems is strictly greater than 0.  Since 0 is not 
 greater than 0, it produces an error.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7486) Consolidate XAttr-related implementation into a single class

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239700#comment-14239700
 ] 

Hudson commented on HDFS-7486:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1967 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1967/])
HDFS-7486. Consolidate XAttr-related implementation into a single class. 
Contributed by Haohui Mai. (wheat9: rev 
6c5bbd7a42d1e8b4416fd8870fd60c67867b35c9)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSDirectory.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java


 Consolidate XAttr-related implementation into a single class
 

 Key: HDFS-7486
 URL: https://issues.apache.org/jira/browse/HDFS-7486
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7486.000.patch


 This jira proposes to consolidate the XAttr-related implementation in 
 {{FSNamesystem}} and {{FSDirectory}} into a single class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7384) 'getfacl' command and 'getAclStatus' output should be in sync

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239710#comment-14239710
 ] 

Hudson commented on HDFS-7384:
--

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1967 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1967/])
HDFS-7384. getfacl command and getAclStatus output should be in sync. 
Contributed by Vinayakumar B. (cnauroth: rev 
ffe942b82c1208bc7b22899da3a233944cb5ab52)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/FSAclBaseTest.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclStatus.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirAclOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/tools/offlineImageViewer/FSImageLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java
* hadoop-hdfs-project/hadoop-hdfs/src/test/resources/testAclCLI.xml
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/permission/AclEntry.java
* hadoop-hdfs-project/hadoop-hdfs/src/main/proto/acl.proto
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/AclCommands.java
* hadoop-hdfs-project/hadoop-hdfs/src/site/apt/WebHDFS.apt.vm
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 'getfacl' command and 'getAclStatus' output should be in sync
 -

 Key: HDFS-7384
 URL: https://issues.apache.org/jira/browse/HDFS-7384
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Fix For: 2.7.0

 Attachments: HDFS-7384-001.patch, HDFS-7384-002.patch, 
 HDFS-7384-003.patch, HDFS-7384-004.patch, HDFS-7384-005.patch, 
 HDFS-7384-006.patch, HDFS-7384-007.patch, HDFS-7384-008.patch, 
 HDFS-7384-009.patch, HDFS-7384-010.patch


 The *getfacl* command prints all the entries, including basic and extended 
 entries, mask entries, and effective permissions.
 But the *getAclStatus* FileSystem API returns only the extended ACL entries 
 set by the user, and it includes neither the mask entry nor the effective 
 permissions.
 To benefit clients using the API, it would be better to include the 'mask' 
 entry and effective permissions in the returned list of entries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7449) Add metrics to NFS gateway

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239759#comment-14239759
 ] 

Hadoop QA commented on HDFS-7449:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686038/HDFS-7449.003.patch
  against trunk revision 82707b4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 1 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-nfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8975//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8975//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-nfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8975//console

This message is automatically generated.

 Add metrics to NFS gateway
 --

 Key: HDFS-7449
 URL: https://issues.apache.org/jira/browse/HDFS-7449
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
 HDFS-7449.003.patch


 Add metrics to collect the NFSv3 handler operations, response time, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries to reduce memory footprint of NameNode

2014-12-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-7456:

Attachment: HDFS-7456-004.patch

Thanks again [~cnauroth] for the proposed tests. They really helped a lot in 
fixing issues related to snapshots.
I have added all the proposed tests and addressed the comments.
Please review the updated patch.
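
A minimal sketch of the de-duplication idea (a reference-counted interning 
map; the names here are illustrative, and the real patch works on 
{{AclFeature}} rather than plain lists):
{code}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class AclFeatureInterner {
  private static final class Ref<T> {
    final T value;
    int count = 1;
    Ref(T v) { value = v; }
  }

  private final Map<List<String>, Ref<List<String>>> unique = new HashMap<>();

  // Returns one shared instance for equal ACL entry lists, bumping its refcount.
  synchronized List<String> intern(List<String> aclEntries) {
    Ref<List<String>> ref = unique.get(aclEntries);
    if (ref == null) {
      unique.put(aclEntries, new Ref<>(aclEntries));
      return aclEntries;
    }
    ref.count++;
    return ref.value;
  }

  // Called when an inode drops its ACL; the entry is freed once unused.
  synchronized void release(List<String> aclEntries) {
    Ref<List<String>> ref = unique.get(aclEntries);
    if (ref != null && --ref.count == 0) {
      unique.remove(aclEntries);
    }
  }
}
{code}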

 De-duplicate AclFeature instances with same AclEntries to reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch


 As discussed in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
  de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the NameNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries to reduce memory footprint of NameNode

2014-12-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-7456:

Attachment: (was: HDFS-7456-004.patch)

 De-duplicate AclFeature instances with same AclEntries to reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch


 As discussed in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
  de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the NameNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries to reduce memory footprint of NameNode

2014-12-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-7456:

Attachment: HDFS-7456-004.patch

Removed unused import.

 De-duplicate AclFeature instances with same AclEntries to reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch


 As discussed in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
  de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the NameNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries to reduce memory footprint of NameNode

2014-12-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-7456:

Attachment: HDFS-7456-004.patch

 De-duplicate AclFeature instances with same AclEntries to reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch


 As discussed in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
  de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the NameNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries to reduce memory footprint of NameNode

2014-12-09 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HDFS-7456:

Attachment: (was: HDFS-7456-004.patch)

 De-duplicate AclFeature instances with same AclEntries to reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch


 As discussed in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
  de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the NameNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7498) Simplify the logic in INodesInPath

2014-12-09 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239807#comment-14239807
 ] 

Jing Zhao commented on HDFS-7498:
-

Looks like the test failures in TestRenameWithSnapshots were caused by an 
OutOfMemoryError. The tests pass in my local run. I guess we may need to check 
the MAVEN_OPTS of the Jenkins build. The other two failed tests should be 
unrelated. The findbugs warning is a known issue after bumping the findbugs 
version, and the patch does not introduce new warnings.

 Simplify the logic in INodesInPath
 --

 Key: HDFS-7498
 URL: https://issues.apache.org/jira/browse/HDFS-7498
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7498.000.patch


 Currently we have relatively complicated logic in INodesInPath:
 1) It can contain null elements in its INode array, and in 
 {{mkdirRecursively}} these null INodes are replaced with new directories.
 2) Operations like rename may also replace the inode in its INode array.
 3) {{getINodes}} requires trimming the inodes array if the INodesInPath is 
 derived from a dot-snapshot path.
 4) A lot of methods directly use/manipulate its INode array.
 We aim to simplify the logic of INodesInPath in this jira. Specifically, we 
 can make INodesInPath an immutable data structure and move the inode trimming 
 logic to path resolving.
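 
 A minimal sketch of the immutability direction described above (hypothetical 
 simplified types, not the actual INodesInPath API):
 {code}
 final class ImmutablePathSketch {
   private final String[] inodes; // fixed at construction; no null elements

   private ImmutablePathSketch(String[] inodes) {
     this.inodes = inodes.clone(); // defensive copy: callers cannot mutate it
   }

   // Trimming (e.g. for dot-snapshot paths) happens here, at resolve time.
   static ImmutablePathSketch resolve(String[] components) {
     return new ImmutablePathSketch(components);
   }

   // The accessor hands out a copy, never the backing array.
   String[] getINodes() {
     return inodes.clone();
   }
 }
 {code}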



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7449) Add metrics to NFS gateway

2014-12-09 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239814#comment-14239814
 ] 

Brandon Li commented on HDFS-7449:
--

The findbugs warning seems to be a findbugs+JAVA7 problem:
http://stackoverflow.com/questions/6599571/findbugs-gives-null-pointer-dereference-of-system-out-why
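
A minimal example of the kind of code that triggers the spurious warning 
(assumed from the linked report):
{code}
public class SystemOutSketch {
  public static void main(String[] args) {
    // findbugs 2.0.3 on Java 7 bytecode can report a spurious
    // "null pointer dereference of System.out" for a line like this,
    // although System.out is never null in practice.
    System.out.println("hello");
  }
}
{code}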


 Add metrics to NFS gateway
 --

 Key: HDFS-7449
 URL: https://issues.apache.org/jira/browse/HDFS-7449
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
 HDFS-7449.003.patch


 Add metrics to collect the NFSv3 handler operations, response time, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HDFS-7449) Add metrics to NFS gateway

2014-12-09 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239814#comment-14239814
 ] 

Brandon Li edited comment on HDFS-7449 at 12/9/14 6:47 PM:
---

The findbugs warning seems to be a findbugs+JAVA7 problem:
http://stackoverflow.com/questions/6599571/findbugs-gives-null-pointer-dereference-of-system-out-why

Similar problem in other components, like: HADOOP-11370


was (Author: brandonli):
The findbugs warning seems to be a findbugs+JAVA7 problem:
http://stackoverflow.com/questions/6599571/findbugs-gives-null-pointer-dereference-of-system-out-why


 Add metrics to NFS gateway
 --

 Key: HDFS-7449
 URL: https://issues.apache.org/jira/browse/HDFS-7449
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
 HDFS-7449.003.patch


 Add metrics to collect the NFSv3 handler operations, response time, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7489) Slow FsVolumeList checkDirs can hang datanodes

2014-12-09 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239835#comment-14239835
 ] 

Colin Patrick McCabe commented on HDFS-7489:


Test failures are bogus.  I ran TestOfflineEditsViewer, 
TestRbwSpaceReservation, TestRenameWithSnapshots, etc. and got no failures.  The 
findbugs results are very puzzling... a bunch of null pointer dereference 
warnings for System.out and System.err (and not in code modified by this 
patch).  I think we can safely assume that System.out and System.err are not 
null -- no idea what went wrong with findbugs here.  Perhaps this is another JDK 
version upgrade issue.

+1.  Will commit in a few minutes.  Thanks, Noah.
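
For context, the heart of the fix is to avoid holding the FsVolumeList lock 
across slow disk I/O; a simplified sketch of that pattern (an assumed shape, 
not the patch verbatim):
{code}
import java.util.ArrayList;
import java.util.List;

class CheckDirsSketch {
  private final Object volumeLock = new Object();
  private List<String> volumes = new ArrayList<>();

  List<String> checkDirs() {
    List<String> snapshot;
    synchronized (volumeLock) {
      snapshot = new ArrayList<>(volumes); // brief critical section: copy only
    }
    List<String> failed = new ArrayList<>();
    for (String v : snapshot) {            // slow disk I/O runs without the lock
      if (!checkVolume(v)) {
        failed.add(v);
      }
    }
    synchronized (volumeLock) {
      volumes.removeAll(failed);           // re-take the lock briefly to update
    }
    return failed;
  }

  private boolean checkVolume(String v) {
    return true; // stand-in for DiskChecker.checkDirs on the volume's directories
  }
}
{code}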

 Slow FsVolumeList checkDirs can hang datanodes
 --

 Key: HDFS-7489
 URL: https://issues.apache.org/jira/browse/HDFS-7489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.0, 2.6.0
Reporter: Noah Lorang
Priority: Critical
 Attachments: HDFS-7489-v1.patch, HDFS-7489-v2.patch, 
 HDFS-7489-v2.patch.1


 After upgrading to 2.5.0 (CDH 5.2.1), we started to see datanodes hang their 
 heartbeats and requests from clients. After some digging, I identified the 
 culprit as the checkDiskError() triggered by catching IOExceptions (in our 
 case, SocketExceptions triggered on one datanode by 
 ReplicaAlreadyExistsExceptions on another datanode).
 Thread dumps reveal that the checkDiskErrors() thread is holding a lock on 
 the FsVolumeList:
 {code}
 Thread-409 daemon prio=10 tid=0x7f4e50200800 nid=0x5b8e runnable 
 [0x7f4e2f855000]
java.lang.Thread.State: RUNNABLE
 at java.io.UnixFileSystem.list(Native Method)
 at java.io.File.list(File.java:973)
 at java.io.File.listFiles(File.java:1051)
 at org.apache.hadoop.util.DiskChecker.checkDirs(DiskChecker.java:89)
 at org.apache.hadoop.util.DiskChecker.checkDirs(DiskChecker.java:91)
 at org.apache.hadoop.util.DiskChecker.checkDirs(DiskChecker.java:91)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.checkDirs(BlockPoolSlice.java:257)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.checkDirs(FsVolumeImpl.java:210)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.checkDirs(FsVolumeList.java:180)
 - locked 0x00063b182ea0 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.checkDataDir(FsDatasetImpl.java:1396)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode$5.run(DataNode.java:2832)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Other things would then lock the FsDatasetImpl while waiting for the 
 FsVolumeList, e.g.:
 {code}
 DataXceiver for client  at /10.10.0.52:46643 [Receiving block 
 BP-1573746465-127.0.1.1-1352244533715:blk_1073770670_106962574] daemon 
 prio=10 tid=0x7f4e55561000 nid=0x406d waiting for monitor entry 
 [0x7f4e3106d000]
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.getNextVolume(FsVolumeList.java:64)
 - waiting to lock 0x00063b182ea0 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:927)
 - locked 0x00063b1f9a48 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:101)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:167)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:604)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:126)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:72)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:225)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 That lock on the FsDatasetImpl then causes other threads to block:
 {code}
 Thread-127 daemon prio=10 tid=0x7f4e4c67d800 nid=0x2e02 waiting for 
 monitor entry [0x7f4e3339]
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockSender.init(BlockSender.java:228)
 - waiting to lock 0x00063b1f9a48 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 

[jira] [Updated] (HDFS-7489) Incorrect locking in FsVolumeList#checkDirs can hang datanodes

2014-12-09 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7489:
---
Summary: Incorrect locking in FsVolumeList#checkDirs can hang datanodes  
(was: Slow FsVolumeList checkDirs can hang datanodes)

 Incorrect locking in FsVolumeList#checkDirs can hang datanodes
 --

 Key: HDFS-7489
 URL: https://issues.apache.org/jira/browse/HDFS-7489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.0, 2.6.0
Reporter: Noah Lorang
Priority: Critical
 Attachments: HDFS-7489-v1.patch, HDFS-7489-v2.patch, 
 HDFS-7489-v2.patch.1


 After upgrading to 2.5.0 (CDH 5.2.1), we started to see datanodes hang their 
 heartbeats and requests from clients. After some digging, I identified the 
 culprit as the checkDiskError() triggered by catching IOExceptions (in our 
 case, SocketExceptions triggered on one datanode by 
 ReplicaAlreadyExistsExceptions on another datanode).
 Thread dumps reveal that the checkDiskErrors() thread is holding a lock on 
 the FsVolumeList:
 {code}
 Thread-409 daemon prio=10 tid=0x7f4e50200800 nid=0x5b8e runnable 
 [0x7f4e2f855000]
java.lang.Thread.State: RUNNABLE
 at java.io.UnixFileSystem.list(Native Method)
 at java.io.File.list(File.java:973)
 at java.io.File.listFiles(File.java:1051)
 at org.apache.hadoop.util.DiskChecker.checkDirs(DiskChecker.java:89)
 at org.apache.hadoop.util.DiskChecker.checkDirs(DiskChecker.java:91)
 at org.apache.hadoop.util.DiskChecker.checkDirs(DiskChecker.java:91)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.checkDirs(BlockPoolSlice.java:257)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.checkDirs(FsVolumeImpl.java:210)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.checkDirs(FsVolumeList.java:180)
 - locked 0x00063b182ea0 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.checkDataDir(FsDatasetImpl.java:1396)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode$5.run(DataNode.java:2832)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Other things would then lock the FsDatasetImpl while waiting for the 
 FsVolumeList, e.g.:
 {code}
 DataXceiver for client  at /10.10.0.52:46643 [Receiving block 
 BP-1573746465-127.0.1.1-1352244533715:blk_1073770670_106962574] daemon 
 prio=10 tid=0x7f4e55561000 nid=0x406d waiting for monitor entry 
 [0x7f4e3106d000]
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.getNextVolume(FsVolumeList.java:64)
 - waiting to lock 0x00063b182ea0 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:927)
 - locked 0x00063b1f9a48 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:101)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.init(BlockReceiver.java:167)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:604)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:126)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:72)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:225)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 That lock on the FsDatasetImpl then causes other threads to block:
 {code}
 Thread-127 daemon prio=10 tid=0x7f4e4c67d800 nid=0x2e02 waiting for 
 monitor entry [0x7f4e3339]
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockSender.init(BlockSender.java:228)
 - waiting to lock 0x00063b1f9a48 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyBlock(BlockPoolSliceScanner.java:436)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyFirstBlock(BlockPoolSliceScanner.java:523)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.scan(BlockPoolSliceScanner.java:684)
 at 
 

[jira] [Commented] (HDFS-7489) Incorrect locking in FsVolumeList#checkDirs can hang datanodes

2014-12-09 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14239844#comment-14239844
 ] 

Colin Patrick McCabe commented on HDFS-7489:


Ah.  The findbugs version was bumped recently in HADOOP-10476.  My uninformed 
guess is that we are using the old findbugs for the previous findbugs warnings 
and the new one for the patch's findbugs run, which of course makes every 
patch look bad?

 Incorrect locking in FsVolumeList#checkDirs can hang datanodes
 --

 Key: HDFS-7489
 URL: https://issues.apache.org/jira/browse/HDFS-7489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.0, 2.6.0
Reporter: Noah Lorang
Priority: Critical
 Attachments: HDFS-7489-v1.patch, HDFS-7489-v2.patch, 
 HDFS-7489-v2.patch.1


 After upgrading to 2.5.0 (CDH 5.2.1), we started to see datanodes hang their 
 heartbeats and requests from clients. After some digging, I identified the 
 culprit as the checkDiskError() triggered by catching IOExceptions (in our 
 case, SocketExceptions triggered on one datanode by 
 ReplicaAlreadyExistsExceptions on another datanode).
 Thread dumps reveal that the checkDiskErrors() thread is holding a lock on 
 the FsVolumeList:
 {code}
 Thread-409 daemon prio=10 tid=0x7f4e50200800 nid=0x5b8e runnable 
 [0x7f4e2f855000]
java.lang.Thread.State: RUNNABLE
 at java.io.UnixFileSystem.list(Native Method)
 at java.io.File.list(File.java:973)
 at java.io.File.listFiles(File.java:1051)
 at org.apache.hadoop.util.DiskChecker.checkDirs(DiskChecker.java:89)
 at org.apache.hadoop.util.DiskChecker.checkDirs(DiskChecker.java:91)
 at org.apache.hadoop.util.DiskChecker.checkDirs(DiskChecker.java:91)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.checkDirs(BlockPoolSlice.java:257)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.checkDirs(FsVolumeImpl.java:210)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.checkDirs(FsVolumeList.java:180)
 - locked 0x00063b182ea0 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.checkDataDir(FsDatasetImpl.java:1396)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataNode$5.run(DataNode.java:2832)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 Other things would then lock the FsDatasetImpl while waiting for the 
 FsVolumeList, e.g.:
 {code}
 DataXceiver for client  at /10.10.0.52:46643 [Receiving block 
 BP-1573746465-127.0.1.1-1352244533715:blk_1073770670_106962574] daemon 
 prio=10 tid=0x7f4e55561000 nid=0x406d waiting for monitor entry 
 [0x7f4e3106d000]
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList.getNextVolume(FsVolumeList.java:64)
 - waiting to lock 0x00063b182ea0 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeList)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:927)
 - locked 0x00063b1f9a48 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:101)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:167)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:604)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:126)
 at 
 org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:72)
 at 
 org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:225)
 at java.lang.Thread.run(Thread.java:662)
 {code}
 That lock on the FsDatasetImpl then causes other threads to block:
 {code}
 Thread-127 daemon prio=10 tid=0x7f4e4c67d800 nid=0x2e02 waiting for 
 monitor entry [0x7f4e3339]
java.lang.Thread.State: BLOCKED (on object monitor)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockSender.<init>(BlockSender.java:228)
 - waiting to lock 0x00063b1f9a48 (a 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyBlock(BlockPoolSliceScanner.java:436)
 at 
 org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner.verifyFirstBlock(BlockPoolSliceScanner.java:523)
 {code}
 
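The shape of the hang in the dumps above is a classic lock-ordering problem: checkDirs() holds the FsVolumeList monitor across slow disk I/O, writers hold the FsDatasetImpl monitor and then block on the FsVolumeList monitor, and everything else queues up behind the FsDatasetImpl. Below is a minimal, self-contained Java sketch (simplified hypothetical names, not the actual Hadoop classes) of the pattern and of one way to break the chain, roughly the direction the attached patches take:

{code}
// Sketch only. "VolumeList"/"Dataset" stand in for FsVolumeList/FsDatasetImpl.
class Volume {}

class VolumeList {
  // Fix direction: a dedicated mutex for directory checks, so the slow disk
  // scan no longer holds the monitor that getNextVolume() needs.
  private final Object checkDirsMutex = new Object();

  // Called by writers that already hold the Dataset monitor.
  synchronized Volume getNextVolume() {
    return new Volume(); // selection logic elided
  }

  // The problematic version was "synchronized void checkDirs()": it held the
  // VolumeList monitor across slow java.io.File.list() calls, so writers
  // blocked in getNextVolume() while still holding the Dataset monitor.
  void checkDirs() {
    synchronized (checkDirsMutex) {
      // scan every volume's directories (slow, touches all disks)
    }
  }
}

class Dataset {
  private final VolumeList volumes = new VolumeList();

  synchronized Volume createTemporary() { // Dataset monitor held here...
    return volumes.getNextVolume();       // ...then the VolumeList monitor
  }
}
{code}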

[jira] [Updated] (HDFS-7489) Incorrect locking in FsVolumeList#checkDirs can hang datanodes

2014-12-09 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HDFS-7489:
---
  Resolution: Fixed
   Fix Version/s: 2.6.1
Target Version/s: 2.6.1
  Status: Resolved  (was: Patch Available)

 Incorrect locking in FsVolumeList#checkDirs can hang datanodes
 --

 Key: HDFS-7489
 URL: https://issues.apache.org/jira/browse/HDFS-7489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.0, 2.6.0
Reporter: Noah Lorang
Priority: Critical
 Fix For: 2.6.1

 Attachments: HDFS-7489-v1.patch, HDFS-7489-v2.patch, 
 HDFS-7489-v2.patch.1



[jira] [Commented] (HDFS-7489) Incorrect locking in FsVolumeList#checkDirs can hang datanodes

2014-12-09 Thread Noah Lorang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239854#comment-14239854
 ] 

Noah Lorang commented on HDFS-7489:
---

My pleasure. Thanks for getting this upstream so quickly. 

 Incorrect locking in FsVolumeList#checkDirs can hang datanodes
 --

 Key: HDFS-7489
 URL: https://issues.apache.org/jira/browse/HDFS-7489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.0, 2.6.0
Reporter: Noah Lorang
Priority: Critical
 Fix For: 2.6.1

 Attachments: HDFS-7489-v1.patch, HDFS-7489-v2.patch, 
 HDFS-7489-v2.patch.1



[jira] [Assigned] (HDFS-7489) Incorrect locking in FsVolumeList#checkDirs can hang datanodes

2014-12-09 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers reassigned HDFS-7489:


Assignee: Noah Lorang

 Incorrect locking in FsVolumeList#checkDirs can hang datanodes
 --

 Key: HDFS-7489
 URL: https://issues.apache.org/jira/browse/HDFS-7489
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 2.5.0, 2.6.0
Reporter: Noah Lorang
Assignee: Noah Lorang
Priority: Critical
 Fix For: 2.6.1

 Attachments: HDFS-7489-v1.patch, HDFS-7489-v2.patch, 
 HDFS-7489-v2.patch.1



[jira] [Commented] (HDFS-7463) Simplify FSNamesystem#getBlockLocationsUpdateTimes

2014-12-09 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239881#comment-14239881
 ] 

Jing Zhao commented on HDFS-7463:
-

The patch looks good to me. Some comments:
# {{resolvePath}} needs to be put within the read lock.
{code}
+src = dir.resolvePath(pc, src, pathComponents);
+checkOperation(OperationCategory.READ);
+readLock();
{code}
# We should use {{iip.getLatestSnapshotId()}} here instead of 
{{getPathSnapshotId}}.
{code}
+  return new GetBlockLocationsResult(
+  updateAccessTime, inode, iip.getPathSnapshotId(), blocks);
{code}
# GetBlockLocationsResult#inode can be declared as INodeFile. We can also rename
  snapshotId to latestSnapshotId. Besides, it may be simpler to just include an
  INodesInPath object in GetBlockLocationsResult.
# The read lock is acquired twice in {{getBlockLocations}} and 
{{getBlockLocationsInt}}.
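
A self-contained illustration of point #1 (generic names, not the FSNamesystem API): the resolution step has to happen after the read lock is taken, otherwise the path can be resolved against a namespace that a concurrent writer changes before the lookup runs.

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class LockedLookup {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  String getBlockLocations(String src) {
    // Resolving before readLock() opens a window where a rename can slip in
    // between resolution and the lookup; doing both under the lock does not.
    lock.readLock().lock();
    try {
      String resolved = resolve(src);
      return lookup(resolved);
    } finally {
      lock.readLock().unlock();
    }
  }

  private String resolve(String src) { return src; }          // stand-in
  private String lookup(String resolved) { return resolved; } // stand-in
}
{code}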

 Simplify FSNamesystem#getBlockLocationsUpdateTimes
 --

 Key: HDFS-7463
 URL: https://issues.apache.org/jira/browse/HDFS-7463
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7463.000.patch, HDFS-7463.001.patch, 
 HDFS-7463.002.patch, HDFS-7463.003.patch, HDFS-7463.004.patch, 
 HDFS-7463.005.patch


 Currently {{FSNamesystem#getBlockLocationsUpdateTimes}} holds the read lock 
 to access the blocks. It releases the read lock and then acquires the write 
 lock when it needs to update the access time of the {{INode}}.
 This jira proposes to move the responsibility of the latter steps to the 
 caller to simplify the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7498) Simplify the logic in INodesInPath

2014-12-09 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239894#comment-14239894
 ] 

Haohui Mai commented on HDFS-7498:
--

The patch looks good.

{code}
-  iip.setINode(pos - 1, child.getParent());
{code}

Is it accidental?

 Simplify the logic in INodesInPath
 --

 Key: HDFS-7498
 URL: https://issues.apache.org/jira/browse/HDFS-7498
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7498.000.patch


 Currently we have relatively complicated logic in INodesInPath:
 1) It can contain null elements in its INode array, and in 
 {{mkdirRecursively}} these null INodes are replaced with new directories.
 2) Operations like rename may also replace the inode in its INode array
 3) {{getINodes}} requires trimming inodes array if the INodesInPath is 
 derived from a dot-snapshot path
 4) A lot of methods directly use/manipulate its INode array.
 We aim to simplify the logic of INodesInPath in this jira. Specifically, we can
 make INodesInPath an immutable data structure and move the inode trimming
 logic to path resolving.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
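
As a rough sketch of the direction described above (illustrative names, not the real INodesInPath API), "immutable, with trimming done once at resolve time" looks like this:

{code}
import java.util.Arrays;

final class ResolvedPath {
  private final String[] components; // final, never patched in place

  private ResolvedPath(String[] components) {
    this.components = components;
  }

  // All normalization (e.g. dropping a trailing ".snapshot" component)
  // happens once, here, instead of inside every getter.
  static ResolvedPath resolve(String path) {
    String[] parts = path.split("/");
    if (parts.length > 0 && ".snapshot".equals(parts[parts.length - 1])) {
      parts = Arrays.copyOf(parts, parts.length - 1);
    }
    return new ResolvedPath(parts);
  }

  String[] getComponents() {
    return components.clone(); // callers cannot mutate our state
  }
}
{code}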


[jira] [Commented] (HDFS-7498) Simplify the logic in INodesInPath

2014-12-09 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239896#comment-14239896
 ] 

Jing Zhao commented on HDFS-7498:
-

Thanks for the review, Haohui!

bq. Is it accidental?

No. After we introduced the INode feature we no longer have the inode replacement logic, so 
we do not need to reset the parent node in the INodesInPath here.

 Simplify the logic in INodesInPath
 --

 Key: HDFS-7498
 URL: https://issues.apache.org/jira/browse/HDFS-7498
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7498.000.patch


 Currently we have relatively complicated logic in INodesInPath:
 1) It can contain null elements in its INode array, and in 
 {{mkdirRecursively}} these null INodes are replaced with new directories.
 2) Operations like rename may also replace the inode in its INode array
 3) {{getINodes}} requires trimming inodes array if the INodesInPath is 
 derived from a dot-snapshot path
 4) A lot of methods directly use/manipulate its INode array.
 We aim to simplify the logic of INodesInPath in this jira. Specifically, we can
 make INodesInPath an immutable data structure and move the inode trimming
 logic to path resolving.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7498) Simplify the logic in INodesInPath

2014-12-09 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239907#comment-14239907
 ] 

Haohui Mai commented on HDFS-7498:
--

+1

 Simplify the logic in INodesInPath
 --

 Key: HDFS-7498
 URL: https://issues.apache.org/jira/browse/HDFS-7498
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Attachments: HDFS-7498.000.patch


 Currently we have relatively complicated logic in INodesInPath:
 1) It can contain null elements in its INode array, and in 
 {{mkdirRecursively}} these null INodes are replaced with new directories.
 2) Operations like rename may also replace the inode in its INode array
 3) {{getINodes}} requires trimming inodes array if the INodesInPath is 
 derived from a dot-snapshot path
 4) A lot of methods directly use/manipulate its INode array.
 We aim to simplify the logic of INodesInPath in this jira. Specifically, we can
 make INodesInPath an immutable data structure and move the inode trimming
 logic to path resolving.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5574) Remove buffer copy in BlockReader.skip

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239922#comment-14239922
 ] 

Hadoop QA commented on HDFS-5574:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685973/HDFS-5574.006.patch
  against trunk revision 82707b4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 352 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestLeaseRecovery2

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8974//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8974//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8974//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8974//console

This message is automatically generated.

 Remove buffer copy in BlockReader.skip
 --

 Key: HDFS-5574
 URL: https://issues.apache.org/jira/browse/HDFS-5574
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial
 Attachments: HDFS-5574.006.patch, HDFS-5574.v1.patch, 
 HDFS-5574.v2.patch, HDFS-5574.v3.patch, HDFS-5574.v4.patch, HDFS-5574.v5.patch


 BlockReaderLocal.skip and RemoteBlockReader.skip use a temp buffer and read
 data into it, which is not necessary.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
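
A sketch of the idea behind the improvement (illustrative shape, not the BlockReader code itself): replace "skip by reading into a scratch buffer" with "skip by advancing the logical position".

{code}
import java.io.IOException;
import java.io.InputStream;

abstract class PositionedReader extends InputStream {
  protected long pos; // logical offset into the block

  // The pattern the JIRA removes: allocates and fills a throwaway buffer.
  long skipByCopy(long n) throws IOException {
    if (n <= 0) {
      return 0;
    }
    byte[] scratch = new byte[(int) Math.min(n, 8192)];
    long done = 0;
    while (done < n) {
      int r = read(scratch, 0, (int) Math.min(scratch.length, n - done));
      if (r < 0) {
        break;
      }
      done += r;
    }
    return done;
  }

  // The cheaper alternative: just move the offset (bounds checks and checksum
  // re-alignment elided) and let the next read() seek to it.
  @Override
  public long skip(long n) {
    pos += n;
    return n;
  }
}
{code}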


[jira] [Updated] (HDFS-7498) Simplify the logic in INodesInPath

2014-12-09 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-7498:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2. Thanks again for the review, Haohui!

 Simplify the logic in INodesInPath
 --

 Key: HDFS-7498
 URL: https://issues.apache.org/jira/browse/HDFS-7498
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.7.0

 Attachments: HDFS-7498.000.patch


 Currently we have relatively complicated logic in INodesInPath:
 1) It can contain null elements in its INode array, and in 
 {{mkdirRecursively}} these null INodes are replaced with new directories.
 2) Operations like rename may also replace the inode in its INode array
 3) {{getINodes}} requires trimming inodes array if the INodesInPath is 
 derived from a dot-snapshot path
 4) A lot of methods directly use/manipulate its INode array.
 We aim to simplify the logic of INodesInPath in this jira. Specifically, we can
 make INodesInPath an immutable data structure and move the inode trimming
 logic to path resolving.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-5578) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2014-12-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5578:
-
Attachment: HDFS-5578.000.patch

 [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments
 -

 Key: HDFS-5578
 URL: https://issues.apache.org/jira/browse/HDFS-5578
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Attachments: 5578-branch-2.patch, 5578-branch-2.patch, 
 5578-trunk.patch, 5578-trunk.patch, HDFS-5578.000.patch


 Javadoc is more strict by default in JDK8 and will error out on malformed or 
 illegal tags found in doc comments. Although tagged as JDK8, all of the 
 required changes are generic Javadoc cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
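
For context, a tiny before/after pair of the kind of comment involved (illustrative, not taken from the patch): JDK8's doclint rejects a bare '<' and a self-closing <p/> that JDK7's javadoc let slide.

{code}
class DoclintExample {
  /** Returns true if a < b.<p/>Callers need not hold any lock. */
  // javadoc 8: error - malformed HTML ('<') and invalid self-closing <p/>
  boolean before(int a, int b) { return a < b; }

  /** Returns true if a &lt; b.
   * <p>
   * Callers need not hold any lock. */
  boolean after(int a, int b) { return a < b; }
}
{code}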


[jira] [Commented] (HDFS-5578) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2014-12-09 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239931#comment-14239931
 ] 

Haohui Mai commented on HDFS-5578:
--

[~apurtell], I rebased your patch on the latest trunk. Does it look okay to you?

 [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments
 -

 Key: HDFS-5578
 URL: https://issues.apache.org/jira/browse/HDFS-5578
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Attachments: 5578-branch-2.patch, 5578-branch-2.patch, 
 5578-trunk.patch, 5578-trunk.patch, HDFS-5578.000.patch


 Javadoc is more strict by default in JDK8 and will error out on malformed or 
 illegal tags found in doc comments. Although tagged as JDK8, all of the 
 required changes are generic Javadoc cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7498) Simplify the logic in INodesInPath

2014-12-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239932#comment-14239932
 ] 

Hudson commented on HDFS-7498:
--

FAILURE: Integrated in Hadoop-trunk-Commit #6679 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6679/])
HDFS-7498. Simplify the logic in INodesInPath. Contributed by Jing Zhao. 
(jing9: rev 5776a41da08af653206bb94d7c76c9c4dcce059a)
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirConcatOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/EncryptionZoneManager.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSnapshotPathINodes.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirStatAndListingOp.java
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirRenameOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirMkdirOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSEditLogLoader.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodesInPath.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirSnapshotOp.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java


 Simplify the logic in INodesInPath
 --

 Key: HDFS-7498
 URL: https://issues.apache.org/jira/browse/HDFS-7498
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.7.0

 Attachments: HDFS-7498.000.patch


 Currently we have relatively complicated logic in INodesInPath:
 1) It can contain null elements in its INode array, and in 
 {{mkdirRecursively}} these null INodes are replaced with new directories.
 2) Operations like rename may also replace the inode in its INode array
 3) {{getINodes}} requires trimming inodes array if the INodesInPath is 
 derived from a dot-snapshot path
 4) A lot of methods directly use/manipulate its INode array.
 We aim to simplify the logic of INodesInPath in this jira. Specifically, we can
 make INodesInPath an immutable data structure and move the inode trimming
 logic to path resolving.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7463) Simplify FSNamesystem#getBlockLocationsUpdateTimes

2014-12-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7463:
-
Attachment: HDFS-7463.006.patch

 Simplify FSNamesystem#getBlockLocationsUpdateTimes
 --

 Key: HDFS-7463
 URL: https://issues.apache.org/jira/browse/HDFS-7463
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7463.000.patch, HDFS-7463.001.patch, 
 HDFS-7463.002.patch, HDFS-7463.003.patch, HDFS-7463.004.patch, 
 HDFS-7463.005.patch, HDFS-7463.006.patch


 Currently {{FSNamesystem#getBlockLocationsUpdateTimes}} holds the read lock 
 to access the blocks. It releases the read lock and then acquires the write 
 lock when it needs to update the access time of the {{INode}}.
 This jira proposes to move the responsibility of the latter steps to the 
 caller to simplify the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7481) Add ACL indicator to the Permission Denied exception.

2014-12-09 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-7481:

Hadoop Flags: Reviewed

+1 for the patch.  Thanks, [~vinayrpet]!

bq. -1 tests included. The patch doesn't appear to include any new or modified 
tests.

This patch is only changing an exception message, so I think additional tests 
are unnecessary.

bq. -1 findbugs. The patch appears to introduce 287 new Findbugs (version 
2.0.3) warnings.

This is unrelated.  Nothing was flagged for {{FSPermissionChecker}}.

bq. -1 core tests. The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

This was an unrelated test timeout, and also a known issue with the tests 
needing a bit more heap after the switch to running pre-commit on Java 7.

 Add ACL indicator to the Permission Denied exception.
 ---

 Key: HDFS-7481
 URL: https://issues.apache.org/jira/browse/HDFS-7481
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
Priority: Minor
 Attachments: HDFS-7481-001.patch, HDFS-7481-002.patch


 As mentioned in a comment on HDFS-7454, add an ACL indicator similar to the
 ls output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
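
A sketch of the idea (hypothetical helper, not the actual FSPermissionChecker change): mirror the "+" that ls appends after the mode bits when an inode carries an ACL, so a permission-denied message makes the ACL's presence obvious, e.g. "rwxr-x---+" instead of "rwxr-x---".

{code}
class PermissionStrings {
  static String toString(short mode, boolean hasAcl) {
    final String rwx = "rwxrwxrwx";
    StringBuilder sb = new StringBuilder(10);
    // Bits 8..0 are owner rwx, group rwx, other rwx.
    for (int i = 8; i >= 0; i--) {
      sb.append(((mode >> i) & 1) != 0 ? rwx.charAt(8 - i) : '-');
    }
    if (hasAcl) {
      sb.append('+'); // the ACL indicator this JIRA adds to the message
    }
    return sb.toString();
  }
}
{code}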


[jira] [Commented] (HDFS-7463) Simplify FSNamesystem#getBlockLocationsUpdateTimes

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239997#comment-14239997
 ] 

Hadoop QA commented on HDFS-7463:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686075/HDFS-7463.006.patch
  against trunk revision 5776a41.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8979//console

This message is automatically generated.

 Simplify FSNamesystem#getBlockLocationsUpdateTimes
 --

 Key: HDFS-7463
 URL: https://issues.apache.org/jira/browse/HDFS-7463
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7463.000.patch, HDFS-7463.001.patch, 
 HDFS-7463.002.patch, HDFS-7463.003.patch, HDFS-7463.004.patch, 
 HDFS-7463.005.patch, HDFS-7463.006.patch


 Currently {{FSNamesystem#getBlockLocationsUpdateTimes}} holds the read lock 
 to access the blocks. It releases the read lock and then acquires the write 
 lock when it needs to update the access time of the {{INode}}.
 This jira proposes to move the responsibility of the latter steps to the 
 caller to simplify the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5578) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14240010#comment-14240010
 ] 

Hadoop QA commented on HDFS-5578:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686068/HDFS-5578.000.patch
  against trunk revision 5776a41.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 9 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-httpfs hadoop-hdfs-project/hadoop-hdfs-nfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8978//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8978//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-httpfs.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8978//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-nfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8978//console

This message is automatically generated.

 [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments
 -

 Key: HDFS-5578
 URL: https://issues.apache.org/jira/browse/HDFS-5578
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Attachments: 5578-branch-2.patch, 5578-branch-2.patch, 
 5578-trunk.patch, 5578-trunk.patch, HDFS-5578.000.patch


 Javadoc is more strict by default in JDK8 and will error out on malformed or 
 illegal tags found in doc comments. Although tagged as JDK8, all of the 
 required changes are generic Javadoc cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7449) Add metrics to NFS gateway

2014-12-09 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14240057#comment-14240057
 ] 

Brandon Li commented on HDFS-7449:
--

Just noticed that I was using ms instead of ns. Will upload a new patch.

 Add metrics to NFS gateway
 --

 Key: HDFS-7449
 URL: https://issues.apache.org/jira/browse/HDFS-7449
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
 HDFS-7449.003.patch


 Add metrics to collect the NFSv3 handler operations, response times, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
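
To make the ms-versus-ns point concrete, a rough sketch of handler timing with the metrics2 helpers (class and metric names are illustrative, not the patch's): sampling System.nanoTime() keeps sub-millisecond handler operations from rounding down to zero.

{code}
import java.util.concurrent.Callable;
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableRate;

class Nfs3MetricsSketch {
  private final MetricsRegistry registry = new MetricsRegistry("nfs3");
  private final MutableRate readNanos = registry.newRate("readNanos");

  <T> T timeRead(Callable<T> op) throws Exception {
    long start = System.nanoTime();
    try {
      return op.call();
    } finally {
      readNanos.add(System.nanoTime() - start); // nanoseconds, not ms
    }
  }
}
{code}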


[jira] [Updated] (HDFS-7463) Simplify FSNamesystem#getBlockLocationsUpdateTimes

2014-12-09 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-7463:
-
Attachment: HDFS-7463.007.patch

 Simplify FSNamesystem#getBlockLocationsUpdateTimes
 --

 Key: HDFS-7463
 URL: https://issues.apache.org/jira/browse/HDFS-7463
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-7463.000.patch, HDFS-7463.001.patch, 
 HDFS-7463.002.patch, HDFS-7463.003.patch, HDFS-7463.004.patch, 
 HDFS-7463.005.patch, HDFS-7463.006.patch, HDFS-7463.007.patch


 Currently {{FSNamesystem#getBlockLocationsUpdateTimes}} holds the read lock 
 to access the blocks. It releases the read lock and then acquires the write 
 lock when it needs to update the access time of the {{INode}}.
 This jira proposes to move the responsibility of the latter steps to the 
 caller to simplify the code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries to reduce memory footprint of NameNode

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14240082#comment-14240082
 ] 

Hadoop QA commented on HDFS-7456:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686052/HDFS-7456-004.patch
  against trunk revision 82707b4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 287 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHDFSAcl
  org.apache.hadoop.hdfs.TestLeaseRecovery2

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8976//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8976//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8976//console

This message is automatically generated.

 De-duplicate AclFeature instances with same AclEntries to reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch


 As discussed in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
 de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the NameNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
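
The underlying pattern is plain interning. A minimal sketch with hypothetical names (the real change also needs reference counting so unused ACLs can be released):

{code}
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class AclInterner {
  private final Map<List<String>, List<String>> unique =
      new HashMap<List<String>, List<String>>();

  // Many inodes carry byte-identical ACL entry lists; hand every inode the
  // same canonical, immutable instance instead of a private copy.
  synchronized List<String> intern(List<String> entries) {
    List<String> canonical = unique.get(entries);
    if (canonical == null) {
      canonical = Collections.unmodifiableList(new ArrayList<String>(entries));
      unique.put(canonical, canonical);
    }
    return canonical;
  }
}
{code}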


[jira] [Created] (HDFS-7500) Disambiguate hadoop-common/FileSystemShell with hadoop-hdfs/HDFSCommandsGuide

2014-12-09 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HDFS-7500:
--

 Summary: Disambiguate hadoop-common/FileSystemShell with 
hadoop-hdfs/HDFSCommandsGuide
 Key: HDFS-7500
 URL: https://issues.apache.org/jira/browse/HDFS-7500
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Allen Wittenauer


While working on HADOOP-10908, HADOOP-11353, and HADOOP-11380, it's become 
evident that there needs to be one source of a command reference for Hadoop 
filesystem commands. Currently CommandManual points to HDFSCommandsGuide for 
the 'hadoop fs' command, but FileSystemShell is much more complete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7500) Disambiguate hadoop-common/FileSystemShell with hadoop-hdfs/HDFSCommandsGuide

2014-12-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14240127#comment-14240127
 ] 

Allen Wittenauer commented on HDFS-7500:


A proposal for a plan of action:

a) Pull all user-level commands out of HDFSCommandsGuide and into 
FileSystemShell (if any are missing)
b) Point the Hadoop command manual at FileSystemShell.
c) Update HDFSCommandsGuide to only include HDFS-specials and/or admin-only 
commands

Thoughts?

 Disambiguate hadoop-common/FileSystemShell with hadoop-hdfs/HDFSCommandsGuide
 -

 Key: HDFS-7500
 URL: https://issues.apache.org/jira/browse/HDFS-7500
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Allen Wittenauer

 While working on HADOOP-10908, HADOOP-11353, and HADOOP-11380, it's become 
 evident that there needs to be one source of a command reference for Hadoop 
 filesystem commands. Currently CommandManual points to HDFSCommandsGuide for 
 the 'hadoop fs' command, but FileSystemShell is much more complete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7500) Disambiguate hadoop-common/FileSystemShell with hadoop-hdfs/HDFSCommandsGuide

2014-12-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14240129#comment-14240129
 ] 

Allen Wittenauer commented on HDFS-7500:


It looks like maybe the only problem is actually twofold:

* FileSystemShell uses hdfs dfs instead of hadoop fs in all of the examples
* CommandsManual points to HDFSCommands instead of FileSystemShell.

These are both easily fixed. :)

 Disambiguate hadoop-common/FileSystemShell with hadoop-hdfs/HDFSCommandsGuide
 -

 Key: HDFS-7500
 URL: https://issues.apache.org/jira/browse/HDFS-7500
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Allen Wittenauer

 While working on HADOOP-10908, HADOOP-11353, and HADOOP-11380, it's become 
 evident that there needs to be one source of a command reference for Hadoop 
 filesystem commands. Currently CommandManual points to HDFSCommandsGuide for 
 the 'hadoop fs' command, but FileSystemShell is much more complete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7449) Add metrics to NFS gateway

2014-12-09 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7449:
-
Attachment: HDFS-7449.004.patch

 Add metrics to NFS gateway
 --

 Key: HDFS-7449
 URL: https://issues.apache.org/jira/browse/HDFS-7449
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
 HDFS-7449.003.patch, HDFS-7449.004.patch


 Add metrics to collect the NFSv3 handler operations, response times, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7449) Add metrics to NFS gateway

2014-12-09 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14240135#comment-14240135
 ] 

Brandon Li commented on HDFS-7449:
--

Uploaded a new patch to use nanoseconds.

 Add metrics to NFS gateway
 --

 Key: HDFS-7449
 URL: https://issues.apache.org/jira/browse/HDFS-7449
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
 HDFS-7449.003.patch, HDFS-7449.004.patch


 Add metrics to collect the NFSv3 handler operations, response times, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7456) De-duplicate AclFeature instances with same AclEntries to reduce memory footprint of NameNode

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14240148#comment-14240148
 ] 

Hadoop QA commented on HDFS-7456:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686053/HDFS-7456-004.patch
  against trunk revision 82707b4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 287 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHDFSAcl

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8977//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8977//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8977//console

This message is automatically generated.

 De-duplicate AclFeature instances with same AclEntries to reduce memory 
 footprint of NameNode
 -

 Key: HDFS-7456
 URL: https://issues.apache.org/jira/browse/HDFS-7456
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinayakumar B
Assignee: Vinayakumar B
 Attachments: HDFS-7456-001.patch, HDFS-7456-002.patch, 
 HDFS-7456-003.patch, HDFS-7456-004.patch


 As discussed in HDFS-7454 
 [here|https://issues.apache.org/jira/browse/HDFS-7454?focusedCommentId=14229454&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14229454],
 de-duplication of {{AclFeature}} helps in reducing the memory footprint of 
 the NameNode.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7461) Reduce impact of laggards on Mover

2014-12-09 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HDFS-7461:
---
Attachment: continuousmovement.pdf

Attaching a brief design document to explain the problem, the solution, and a 
proposed high-level design. Requesting comments on this.

 Reduce impact of laggards on Mover
 --

 Key: HDFS-7461
 URL: https://issues.apache.org/jira/browse/HDFS-7461
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: 2.6.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: continuousmovement.pdf


 The current Mover logic is as follows:
 {code}
 for (Path target : targetPaths) {
   hasRemaining |= processPath(target.toUri().getPath());
 }
 // wait for pending moves to finish and retry the failed migrations
 hasRemaining |= Dispatcher.waitForMoveCompletion(storages.targets.values());
 {code}
 _processPath_ will schedule moves, but it is bounded by the number of
 concurrent moves (default is 5 per node). Once block moves are scheduled,
 it will wait for ALL scheduled moves to finish in _waitForMoveCompletion_.
 One slow move could keep the Mover idle for a long time.
 It would be a performance improvement to schedule the next moves as soon as
 any (source, target) slot is available instead of waiting for all the
 scheduled moves to finish.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
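
A minimal sketch of the proposal (hypothetical shape, not the eventual patch): bound concurrency with per-slot permits and hand out the next move as soon as any permit frees, so one laggard delays a single slot rather than the whole iteration.

{code}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

class ContinuousScheduler {
  private final Semaphore slots = new Semaphore(5);   // e.g. 5 moves per node
  private final ExecutorService pool = Executors.newCachedThreadPool();

  void schedule(final Runnable move) throws InterruptedException {
    slots.acquire();                 // blocks only until *one* slot frees
    pool.execute(new Runnable() {
      public void run() {
        try {
          move.run();
        } finally {
          slots.release();           // a laggard holds one slot, not all
        }
      }
    });
  }
}
{code}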


[jira] [Updated] (HDFS-7461) Reduce impact of laggards on Mover

2014-12-09 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HDFS-7461:
---
Attachment: (was: continuousmovement.pdf)

 Reduce impact of laggards on Mover
 --

 Key: HDFS-7461
 URL: https://issues.apache.org/jira/browse/HDFS-7461
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: 2.6.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: continuousmovement.pdf





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7461) Reduce impact of laggards on Mover

2014-12-09 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HDFS-7461:
---
Attachment: continuousmovement.pdf

 Reduce impact of laggards on Mover
 --

 Key: HDFS-7461
 URL: https://issues.apache.org/jira/browse/HDFS-7461
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: balancer & mover
Affects Versions: 2.6.0
Reporter: Benoy Antony
Assignee: Benoy Antony
 Attachments: continuousmovement.pdf





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-5574) Remove buffer copy in BlockReader.skip

2014-12-09 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14240189#comment-14240189
 ] 

Colin Patrick McCabe commented on HDFS-5574:


Thanks for your patience.  I'll try to look at this later this week.  If anyone 
else wants to check it out before that, that works too.

 Remove buffer copy in BlockReader.skip
 --

 Key: HDFS-5574
 URL: https://issues.apache.org/jira/browse/HDFS-5574
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Binglin Chang
Assignee: Binglin Chang
Priority: Trivial
 Attachments: HDFS-5574.006.patch, HDFS-5574.v1.patch, 
 HDFS-5574.v2.patch, HDFS-5574.v3.patch, HDFS-5574.v4.patch, HDFS-5574.v5.patch


 BlockReaderLocal.skip and RemoteBlockReader.skip uses a temp buffer to read 
 data to this buffer, it is not necessary. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7449) Add metrics to NFS gateway

2014-12-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14240190#comment-14240190
 ] 

Hadoop QA commented on HDFS-7449:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12686107/HDFS-7449.004.patch
  against trunk revision a2e07a5.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs-nfs.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8981//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8981//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-nfs.html
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/8981//console

This message is automatically generated.

 Add metrics to NFS gateway
 --

 Key: HDFS-7449
 URL: https://issues.apache.org/jira/browse/HDFS-7449
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
 HDFS-7449.003.patch, HDFS-7449.004.patch


 Add metrics to collect NFSv3 handler operation counts, response times, etc.
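 A minimal sketch of what such a metrics source could look like with the 
 Hadoop metrics2 library; the class and metric names below are illustrative, 
 not necessarily what the patch adds:
 {code}
 import org.apache.hadoop.metrics2.annotation.Metric;
 import org.apache.hadoop.metrics2.annotation.Metrics;
 import org.apache.hadoop.metrics2.lib.MutableRate;

 // Illustrative only: one MutableRate per NFSv3 handler op records both
 // the operation count and its latency.
 @Metrics(about = "NFS gateway metrics", context = "nfs")
 public class Nfs3MetricsSketch {
   @Metric("READ op count and latency") MutableRate read;
   @Metric("WRITE op count and latency") MutableRate write;

   public void addRead(long latencyNanos)  { read.add(latencyNanos); }
   public void addWrite(long latencyNanos) { write.add(latencyNanos); }
 }
 {code}
 Such a source would still need to be registered with the metrics system 
 (e.g. via DefaultMetricsSystem) before anything is published.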



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-6652) RecoverLease cannot succeed and file cannot be closed under high load

2014-12-09 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang reassigned HDFS-6652:
---

Assignee: Yongjun Zhang

 RecoverLease cannot succeed and file cannot be closed under high load
 -

 Key: HDFS-6652
 URL: https://issues.apache.org/jira/browse/HDFS-6652
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Juan Yu
Assignee: Yongjun Zhang
Priority: Minor
 Attachments: testLeaseRecoveryWithMultiWriters.patch


 When multiple clients try to write to the same file frequently, there is a 
 chance that the block state goes wrong, so lease recovery cannot complete and 
 the file cannot be closed.
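 A minimal sketch of how a client typically drives recovery through the public 
 DistributedFileSystem#recoverLease API; under this bug the loop below never 
 sees recoverLease return true (the retry policy shown is illustrative):
 {code}
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DistributedFileSystem;

 /** Illustrative retry loop around the public recoverLease API. */
 public class LeaseRecoverySketch {
   // Returns true once the lease is recovered and the file is closed.
   public static boolean recoverWithRetry(DistributedFileSystem dfs, Path p)
       throws Exception {
     for (int i = 0; i < 10; i++) {
       if (dfs.recoverLease(p)) {
         return true;        // file closed
       }
       Thread.sleep(1000L);  // recovery is asynchronous; poll and retry
     }
     return false;           // the failure mode this issue describes
   }
 }
 {code}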



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7501) TransactionsSinceLastCheckpoint can be negative on SBNs

2014-12-09 Thread Harsh J (JIRA)
Harsh J created HDFS-7501:
-

 Summary: TransactionsSinceLastCheckpoint can be negative on SBNs
 Key: HDFS-7501
 URL: https://issues.apache.org/jira/browse/HDFS-7501
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Harsh J
Priority: Trivial


The metric TransactionsSinceLastCheckpoint is derived as FSEditLog.txid minus 
NNStorage.mostRecentCheckpointTxId.

In Standby mode, the former does not increment beyond the loaded or 
last-when-active value, but the latter does change due to the checkpoints done 
regularly in this mode. As a result, the SBN will eventually show negative 
values for TransactionsSinceLastCheckpoint.

This is not a correctness issue, as the metric only makes sense on the Active 
NameNode, but we should perhaps report 0 when the NN is detected to be in 
standby mode, since a negative value is confusing in any chart that tracks it. 
A sketch of the guard follows below.
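A minimal sketch of the proposed guard; method and parameter names are 
illustrative:
{code}
/** Illustrative: clamp the metric to 0 on a standby NameNode. */
class CheckpointMetricSketch {
  static long transactionsSinceLastCheckpoint(
      long lastWrittenTxId, long mostRecentCheckpointTxId, boolean isStandby) {
    if (isStandby) {
      return 0; // the metric is only meaningful on the active NameNode
    }
    return lastWrittenTxId - mostRecentCheckpointTxId;
  }
}
{code}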



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-7502) Fix findbugs warning in hdfs-nfs project

2014-12-09 Thread Brandon Li (JIRA)
Brandon Li created HDFS-7502:


 Summary: Fix findbugs warning in hdfs-nfs project
 Key: HDFS-7502
 URL: https://issues.apache.org/jira/browse/HDFS-7502
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Brandon Li
Assignee: Brandon Li






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7502) Fix findbugs warning in hdfs-nfs project

2014-12-09 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7502:
-
Attachment: FindBugs Report.html

Uploaded the findbugs warning report.

 Fix findbugs warning in hdfs-nfs project
 

 Key: HDFS-7502
 URL: https://issues.apache.org/jira/browse/HDFS-7502
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: FindBugs Report.html






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7502) Fix findbugs warning in hdfs-nfs project

2014-12-09 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7502:
-
Component/s: nfs

 Fix findbugs warning in hdfs-nfs project
 

 Key: HDFS-7502
 URL: https://issues.apache.org/jira/browse/HDFS-7502
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: FindBugs Report.html






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7449) Add metrics to NFS gateway

2014-12-09 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14240224#comment-14240224
 ] 

Brandon Li commented on HDFS-7449:
--

The findbugs warnings are not introduced by this patch. I've filed HDFS-7502 to 
track fixing them.

 Add metrics to NFS gateway
 --

 Key: HDFS-7449
 URL: https://issues.apache.org/jira/browse/HDFS-7449
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: nfs
Affects Versions: 2.7.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7449.001.patch, HDFS-7449.002.patch, 
 HDFS-7449.003.patch, HDFS-7449.004.patch


 Add metrics to collect NFSv3 handler operation counts, response times, etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7502) Fix findbugs warning in hdfs-nfs project

2014-12-09 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-7502:
-
Description: 
Found the warnings when building the HDFS-7449 patch: 
https://builds.apache.org/job/PreCommit-HDFS-Build/8981//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-nfs.html
 

 Fix findbugs warning in hdfs-nfs project
 

 Key: HDFS-7502
 URL: https://issues.apache.org/jira/browse/HDFS-7502
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: FindBugs Report.html


 Found the warnings when building the HDFS-7449 patch: 
 https://builds.apache.org/job/PreCommit-HDFS-Build/8981//artifact/patchprocess/newPatchFindbugsWarningshadoop-hdfs-nfs.html
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HDFS-7460) Rewrite httpfs to use new shell framework

2014-12-09 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-7460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14240233#comment-14240233
 ] 

Allen Wittenauer commented on HDFS-7460:


Setting HADOOP-10788 as a blocker since it contains the catalina functions.

 Rewrite httpfs to use new shell framework
 -

 Key: HDFS-7460
 URL: https://issues.apache.org/jira/browse/HDFS-7460
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Allen Wittenauer

 The httpfs shell code was not rewritten during HADOOP-9902. It should be 
 modified to take advantage of the common shell framework.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7500) Disambiguate hadoop-common/FileSystemShell with hadoop-hdfs/HDFSCommandsGuide

2014-12-09 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HDFS-7500.

Resolution: Duplicate

I'm going to fix this as part of HADOOP-10908.

 Disambiguate hadoop-common/FileSystemShell with hadoop-hdfs/HDFSCommandsGuide
 -

 Key: HDFS-7500
 URL: https://issues.apache.org/jira/browse/HDFS-7500
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Allen Wittenauer

 While working on HADOOP-10908, HADOOP-11353, and HADOOP-11380, it has become 
 evident that there needs to be a single authoritative command reference for 
 the Hadoop filesystem commands. Currently, CommandManual points to 
 HDFSCommandsGuide for the 'hadoop fs' command, but FileSystemShell is much 
 more complete.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HDFS-7501) TransactionsSinceLastCheckpoint can be negative on SBNs

2014-12-09 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu reassigned HDFS-7501:
-

Assignee: Stephen Chu

 TransactionsSinceLastCheckpoint can be negative on SBNs
 ---

 Key: HDFS-7501
 URL: https://issues.apache.org/jira/browse/HDFS-7501
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.5.0
Reporter: Harsh J
Assignee: Stephen Chu
Priority: Trivial

 The metric TransactionsSinceLastCheckpoint is derived as FSEditLog.txid minus 
 NNStorage.mostRecentCheckpointTxId.
 In Standby mode, the former does not increment beyond the loaded or 
 last-when-active value, but the latter does change due to the checkpoints done 
 regularly in this mode. As a result, the SBN will eventually show negative 
 values for TransactionsSinceLastCheckpoint.
 This is not a correctness issue, as the metric only makes sense on the Active 
 NameNode, but we should perhaps report 0 when the NN is detected to be in 
 standby mode, since a negative value is confusing in any chart that tracks it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

