[jira] [Updated] (HDFS-6475) WebHdfs clients fail without retry because incorrect handling of StandbyException

2014-06-21 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-6475:


Attachment: HDFS-6475.007.patch

 WebHdfs clients fail without retry because incorrect handling of 
 StandbyException
 -

 Key: HDFS-6475
 URL: https://issues.apache.org/jira/browse/HDFS-6475
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, webhdfs
Affects Versions: 2.4.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6475.001.patch, HDFS-6475.002.patch, 
 HDFS-6475.003.patch, HDFS-6475.003.patch, HDFS-6475.004.patch, 
 HDFS-6475.005.patch, HDFS-6475.006.patch, HDFS-6475.007.patch


 With WebHdfs clients connected to an HA HDFS service, the delegation token is 
 initialized with the active NN beforehand.
 When a client issues a request, the NNs it may contact are stored in a map 
 returned by DFSUtil.getNNServiceRpcAddresses(conf), and the client contacts 
 them in that order, so the first one it runs into is likely the standby NN. 
 If the standby NN doesn't have the updated client credential, it throws a 
 SecurityException that wraps a StandbyException.
 The client is expected to retry against another NN, but due to the 
 insufficient handling of the SecurityException mentioned above, it fails.
 Example message:
 {code}
 {RemoteException={message=Failed to obtain user group information: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: 
 StandbyException, javaCl
 assName=java.lang.SecurityException, exception=SecurityException}}
 org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Failed to 
 obtain user group information: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: StandbyException
 at 
 org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:159)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:325)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$700(WebHdfsFileSystem.java:107)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getResponse(WebHdfsFileSystem.java:635)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:542)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:431)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:685)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:696)
 at kclient1.kclient$1.run(kclient.java:64)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:356)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
 at kclient1.kclient.main(kclient.java:58)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 {code}
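 To illustrate the retry behavior the client is expected to have once the wrapped 
 StandbyException is surfaced, here is a minimal sketch; it is not the actual 
 WebHdfsFileSystem code, and the flattened address map and the 
 fetchStatusFromNamenode() helper are assumptions made for brevity.
 {code}
 import java.io.IOException;
 import java.net.InetSocketAddress;
 import java.util.Map;

 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.ipc.RemoteException;
 import org.apache.hadoop.ipc.StandbyException;

 // Hypothetical helper, NOT the actual WebHdfsFileSystem code: try each NN in
 // order and fail over when the unwrapped cause is a StandbyException.
 public class NamenodeFailoverSketch {
   static FileStatus getStatusWithFailover(Map<String, InetSocketAddress> namenodes,
                                           String path) throws IOException {
     IOException last = new IOException("no namenode addresses configured");
     for (InetSocketAddress nn : namenodes.values()) {
       try {
         return fetchStatusFromNamenode(nn, path);   // assumed single-NN request
       } catch (RemoteException re) {
         IOException cause = re.unwrapRemoteException(StandbyException.class);
         if (cause instanceof StandbyException) {
           last = cause;   // this NN is standby; try the next address
           continue;
         }
         throw re;         // any other remote failure is not retried
       }
     }
     throw last;
   }

   // Placeholder so the sketch is self-contained; a real client would issue the
   // webhdfs HTTP request here.
   private static FileStatus fetchStatusFromNamenode(InetSocketAddress nn, String path)
       throws IOException {
     throw new UnsupportedOperationException("illustrative placeholder");
   }
 }
 {code}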



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6527) Edit log corruption due to defered INode removal

2014-06-21 Thread Arun C Murthy (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039716#comment-14039716
 ] 

Arun C Murthy commented on HDFS-6527:
-

Changed fix version to 2.4.1 since it got merged in for rc1.

 Edit log corruption due to defered INode removal
 

 Key: HDFS-6527
 URL: https://issues.apache.org/jira/browse/HDFS-6527
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
Priority: Blocker
 Fix For: 2.4.1

 Attachments: HDFS-6527.branch-2.4.patch, HDFS-6527.trunk.patch, 
 HDFS-6527.v2.patch, HDFS-6527.v3.patch, HDFS-6527.v4.patch, HDFS-6527.v5.patch


 We have seen a SBN crashing with the following error:
 {panel}
 \[Edit log tailer\] ERROR namenode.FSEditLogLoader:
 Encountered exception on operation AddBlockOp
 [path=/xxx,
 penultimateBlock=NULL, lastBlock=blk_111_111, RpcClientId=,
 RpcCallId=-2]
 java.io.FileNotFoundException: File does not exist: /xxx
 {panel}
 This was caused by the deferred removal of deleted inodes from the inode map. 
 Since getAdditionalBlock() acquires the FSN read lock and then the write lock, 
 a deletion can happen in between. Because deferred inode removal happens 
 outside the FSN write lock, getAdditionalBlock() can still get the deleted 
 inode from the inode map while holding the FSN write lock. This allows a block 
 to be added to a deleted file.
 As a result, the edit log will contain OP_ADD, OP_DELETE, followed by 
 OP_ADD_BLOCK. This cannot be replayed by the NN, so the NN doesn't start up 
 or the SBN crashes.
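 As a rough illustration of the missing re-check, here is a simplified model 
 (not the actual FSNamesystem code; the inode map below is just a plain map 
 keyed by inode id):
 {code}
 import java.io.FileNotFoundException;
 import java.util.Map;
 import java.util.concurrent.locks.ReentrantReadWriteLock;

 // Simplified model of the race: the block may only be added if the inode is
 // still present in the inode map after the write lock is (re)acquired.
 class GetAdditionalBlockSketch {
   private final ReentrantReadWriteLock fsnLock = new ReentrantReadWriteLock();
   private final Map<Long, Object> inodeMap;   // inode id -> inode

   GetAdditionalBlockSketch(Map<Long, Object> inodeMap) {
     this.inodeMap = inodeMap;
   }

   void addBlock(long inodeId, String src) throws FileNotFoundException {
     fsnLock.writeLock().lock();
     try {
       // Re-validate under the write lock: a delete (with deferred inode
       // removal) may have run between the earlier read-locked checks and here.
       if (!inodeMap.containsKey(inodeId)) {
         throw new FileNotFoundException("File does not exist: " + src);
       }
       // Safe to allocate the new block and log OP_ADD_BLOCK at this point.
     } finally {
       fsnLock.writeLock().unlock();
     }
   }
 }
 {code}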



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6475) WebHdfs clients fail without retry because incorrect handling of StandbyException

2014-06-21 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039718#comment-14039718
 ] 

Yongjun Zhang commented on HDFS-6475:
-

Hi, 

I uploaded a new patch per [~daryn]'s suggestion. On top of what Daryn 
suggested, and because of the exceptions I described in my previous update 
(with stack information), I added the logic for InvalidToken in ExceptionHandler:
{code}
if (e instanceof SecurityException) {
  e = toCause(e);
}
if (e instanceof InvalidToken) {
  e = toCause(e);
}
{code}
This logic is essentially what I originally wanted to share the getTrueCause 
method for.
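For clarity, here is a minimal sketch of the kind of cause-unwrapping helper 
assumed above (an illustration only; the actual toCause implementation in 
ExceptionHandler may differ):
{code}
// Sketch of a cause-unwrapping helper: surface the wrapped exception (e.g. the
// StandbyException inside a SecurityException or InvalidToken), if any.
final class CauseUnwrapSketch {
  static Exception toCause(Exception e) {
    Throwable cause = e.getCause();
    if (cause instanceof Exception) {
      return (Exception) cause;   // expose the wrapped exception to later checks
    }
    return e;                     // nothing wrapped; keep the original exception
  }
}
{code}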

Hi Daryn, would you please help review again?
BTW, regarding your comment:
{quote}
In saslProcess, just throw the exception instead of running it through 
getTrueCause since it's not a InvalidToken wrapping another exception anymore.
{quote}
I did what you suggested, but I'm still getting an InvalidToken exception (see 
the stack described above). So it seems that the exception saslProcess tries 
to handle comes from a different source than the one I'm running into. 

Thanks a lot.



 WebHdfs clients fail without retry because incorrect handling of 
 StandbyException
 -

 Key: HDFS-6475
 URL: https://issues.apache.org/jira/browse/HDFS-6475
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, webhdfs
Affects Versions: 2.4.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6475.001.patch, HDFS-6475.002.patch, 
 HDFS-6475.003.patch, HDFS-6475.003.patch, HDFS-6475.004.patch, 
 HDFS-6475.005.patch, HDFS-6475.006.patch, HDFS-6475.007.patch


 With WebHdfs clients connected to an HA HDFS service, the delegation token is 
 initialized with the active NN beforehand.
 When a client issues a request, the NNs it may contact are stored in a map 
 returned by DFSUtil.getNNServiceRpcAddresses(conf), and the client contacts 
 them in that order, so the first one it runs into is likely the standby NN. 
 If the standby NN doesn't have the updated client credential, it throws a 
 SecurityException that wraps a StandbyException.
 The client is expected to retry against another NN, but due to the 
 insufficient handling of the SecurityException mentioned above, it fails.
 Example message:
 {code}
 {RemoteException={message=Failed to obtain user group information: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: 
 StandbyException, javaCl
 assName=java.lang.SecurityException, exception=SecurityException}}
 org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Failed to 
 obtain user group information: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: StandbyException
 at 
 org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:159)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:325)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$700(WebHdfsFileSystem.java:107)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getResponse(WebHdfsFileSystem.java:635)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:542)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:431)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:685)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:696)
 at kclient1.kclient$1.run(kclient.java:64)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:356)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
 at kclient1.kclient.main(kclient.java:58)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6578) add toString method to DatanodeStorage etc for easier debugging

2014-06-21 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-6578:


Attachment: HDFS-6578.001.patch

 add toString method to DatanodeStorage etc for easier debugging
 ---

 Key: HDFS-6578
 URL: https://issues.apache.org/jira/browse/HDFS-6578
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6578.001.patch


 It would be nice to add a toString() method to the DatanodeStorage class, so 
 we can print out its key info more easily while debugging.
 Another thing is, at the end of BlockManager#processReport, there is the 
 following log message:
 {code}
 blockLog.info("BLOCK* processReport: from storage " + storage.getStorageID()
     + " node " + nodeID + ", blocks: " + newReport.getNumberOfBlocks()
     + ", processing time: " + (endTime - startTime) + " msecs");
 return !node.hasStaleStorages();
 {code}
 We could add node.hasStaleStorages() to the log, and possibly replace 
 storage.getStorageID() with the suggested storage.toString().
 Any comments? Thanks.
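 For illustration, the proposed toString() could look roughly like the sketch 
 below (added inside DatanodeStorage; the exact fields included are an 
 assumption, not the attached patch):
 {code}
 // Inside DatanodeStorage; getStorageID(), getStorageType() and getState() are
 // the existing accessors. The field selection here is illustrative only.
 @Override
 public String toString() {
   return "DatanodeStorage[" + getStorageID()
       + ", " + getStorageType()
       + ", " + getState() + "]";
 }
 {code}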



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6578) add toString method to DatanodeStorage etc for easier debugging

2014-06-21 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HDFS-6578:


Status: Patch Available  (was: Open)

 add toString method to DatanodeStorage etc for easier debugging
 ---

 Key: HDFS-6578
 URL: https://issues.apache.org/jira/browse/HDFS-6578
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6578.001.patch


 It would be nice to add a toString() method to the DatanodeStorage class, so 
 we can print out its key info more easily while debugging.
 Another thing is, at the end of BlockManager#processReport, there is the 
 following log message:
 {code}
 blockLog.info("BLOCK* processReport: from storage " + storage.getStorageID()
     + " node " + nodeID + ", blocks: " + newReport.getNumberOfBlocks()
     + ", processing time: " + (endTime - startTime) + " msecs");
 return !node.hasStaleStorages();
 {code}
 We could add node.hasStaleStorages() to the log, and possibly replace 
 storage.getStorageID() with the suggested storage.toString().
 Any comments? Thanks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6578) add toString method to DatanodeStorage etc for easier debugging

2014-06-21 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039728#comment-14039728
 ] 

Yongjun Zhang commented on HDFS-6578:
-

Hi [~arpitagarwal], thanks for your comments earlier. Yes, that's what I 
thought. Including these three pieces of info should be helpful.
I just uploaded a patch, including the comments I wanted to add for HDFS-6577. 
Thanks for the review!


 add toString method to DatanodeStorage etc for easier debugging
 ---

 Key: HDFS-6578
 URL: https://issues.apache.org/jira/browse/HDFS-6578
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6578.001.patch


 It would be nice to add a toString() method to the DatanodeStorage class, so 
 we can print out its key info more easily while debugging.
 Another thing is, at the end of BlockManager#processReport, there is the 
 following log message:
 {code}
 blockLog.info("BLOCK* processReport: from storage " + storage.getStorageID()
     + " node " + nodeID + ", blocks: " + newReport.getNumberOfBlocks()
     + ", processing time: " + (endTime - startTime) + " msecs");
 return !node.hasStaleStorages();
 {code}
 We could add node.hasStaleStorages() to the log, and possibly replace 
 storage.getStorageID() with the suggested storage.toString().
 Any comments? Thanks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6475) WebHdfs clients fail without retry because incorrect handling of StandbyException

2014-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039749#comment-14039749
 ] 

Hadoop QA commented on HDFS-6475:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651811/HDFS-6475.007.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.ipc.TestSaslRPC
  org.apache.hadoop.hdfs.web.TestWebHdfsTokens

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7198//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7198//console

This message is automatically generated.

 WebHdfs clients fail without retry because incorrect handling of 
 StandbyException
 -

 Key: HDFS-6475
 URL: https://issues.apache.org/jira/browse/HDFS-6475
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: ha, webhdfs
Affects Versions: 2.4.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6475.001.patch, HDFS-6475.002.patch, 
 HDFS-6475.003.patch, HDFS-6475.003.patch, HDFS-6475.004.patch, 
 HDFS-6475.005.patch, HDFS-6475.006.patch, HDFS-6475.007.patch


 With WebHdfs clients connected to an HA HDFS service, the delegation token is 
 initialized with the active NN beforehand.
 When a client issues a request, the NNs it may contact are stored in a map 
 returned by DFSUtil.getNNServiceRpcAddresses(conf), and the client contacts 
 them in that order, so the first one it runs into is likely the standby NN. 
 If the standby NN doesn't have the updated client credential, it throws a 
 SecurityException that wraps a StandbyException.
 The client is expected to retry against another NN, but due to the 
 insufficient handling of the SecurityException mentioned above, it fails.
 Example message:
 {code}
 {RemoteException={message=Failed to obtain user group information: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: 
 StandbyException, javaCl
 assName=java.lang.SecurityException, exception=SecurityException}}
 org.apache.hadoop.ipc.RemoteException(java.lang.SecurityException): Failed to 
 obtain user group information: 
 org.apache.hadoop.security.token.SecretManager$InvalidToken: StandbyException
 at 
 org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(JsonUtil.java:159)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(WebHdfsFileSystem.java:325)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$700(WebHdfsFileSystem.java:107)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.getResponse(WebHdfsFileSystem.java:635)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem$AbstractRunner.run(WebHdfsFileSystem.java:542)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.run(WebHdfsFileSystem.java:431)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getHdfsFileStatus(WebHdfsFileSystem.java:685)
 at 
 org.apache.hadoop.hdfs.web.WebHdfsFileSystem.getFileStatus(WebHdfsFileSystem.java:696)
 at kclient1.kclient$1.run(kclient.java:64)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:356)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1528)
 at kclient1.kclient.main(kclient.java:58)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

[jira] [Commented] (HDFS-6578) add toString method to DatanodeStorage etc for easier debugging

2014-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039759#comment-14039759
 ] 

Hadoop QA commented on HDFS-6578:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12651813/HDFS-6578.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7199//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7199//console

This message is automatically generated.

 add toString method to DatanodeStorage etc for easier debugging
 ---

 Key: HDFS-6578
 URL: https://issues.apache.org/jira/browse/HDFS-6578
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6578.001.patch


 It would be nice to add a toString() method to the DatanodeStorage class, so 
 we can print out its key info more easily while debugging.
 Another thing is, at the end of BlockManager#processReport, there is the 
 following log message:
 {code}
 blockLog.info("BLOCK* processReport: from storage " + storage.getStorageID()
     + " node " + nodeID + ", blocks: " + newReport.getNumberOfBlocks()
     + ", processing time: " + (endTime - startTime) + " msecs");
 return !node.hasStaleStorages();
 {code}
 We could add node.hasStaleStorages() to the log, and possibly replace 
 storage.getStorageID() with the suggested storage.toString().
 Any comments? Thanks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6535) HDFS quota update is wrong when file is appended

2014-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039777#comment-14039777
 ] 

Hudson commented on HDFS-6535:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #590 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/590/])
HDFS-6535. HDFS quota update is wrong when file is appended. Contributed by 
George Wong. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604226)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDiskspaceQuotaUpdate.java


 HDFS quota update is wrong when file is appended
 

 Key: HDFS-6535
 URL: https://issues.apache.org/jira/browse/HDFS-6535
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0
Reporter: George Wong
Assignee: George Wong
 Fix For: 2.5.0

 Attachments: HDFS-6535.patch, HDFS-6535_v1.patch, TestHDFSQuota.java


 When a file in a directory with the quota feature enabled is appended, the 
 cached disk consumption should be updated. 
 But currently, the update is wrong.
 Use the uploaded unit test to reproduce this bug.
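 A sketch of a reproduction along these lines (the attached TestHDFSQuota.java 
 may differ; sizes and paths are placeholders):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.ContentSummary;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DFSTestUtil;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.MiniDFSCluster;
 import org.apache.hadoop.hdfs.protocol.HdfsConstants;

 // Append to a file under a diskspace quota and compare the cached consumption
 // with the real file length.
 public class QuotaAppendRepro {
   public static void main(String[] args) throws Exception {
     Configuration conf = new Configuration();
     MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
     try {
       DistributedFileSystem dfs = cluster.getFileSystem();
       Path dir = new Path("/quotaDir");
       dfs.mkdirs(dir);
       dfs.setQuota(dir, HdfsConstants.QUOTA_DONT_SET, 10L * 1024 * 1024 * 1024);

       Path file = new Path(dir, "file");
       DFSTestUtil.createFile(dfs, file, 1024, (short) 1, 0L);
       DFSTestUtil.appendFile(dfs, file, 1024);

       ContentSummary cs = dfs.getContentSummary(dir);
       long expected = dfs.getFileStatus(file).getLen();   // 2048 bytes at replication 1
       System.out.println("cached=" + cs.getSpaceConsumed() + " expected=" + expected);
     } finally {
       cluster.shutdown();
     }
   }
 }
 {code}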



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6222) Remove background token renewer from webhdfs

2014-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039779#comment-14039779
 ] 

Hudson commented on HDFS-6222:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #590 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/590/])
HDFS-6222. Remove background token renewer from webhdfs. Contributed by Rushabh 
Shah and Daryn Sharp. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604300)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/SWebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java


 Remove background token renewer from webhdfs
 

 Key: HDFS-6222
 URL: https://issues.apache.org/jira/browse/HDFS-6222
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6222.branch-2-v2.patch, 
 HDFS-6222.branch-2-v3.patch, HDFS-6222.branch-2.patch, 
 HDFS-6222.branch-2.patch, HDFS-6222.trunk-v2.patch, HDFS-6222.trunk-v2.patch, 
 HDFS-6222.trunk-v3.patch, HDFS-6222.trunk.patch, HDFS-6222.trunk.patch


 The background token renewer is a source of problems for long-running 
 daemons.  Webhdfs should lazily fetch a new token when it receives an 
 InvalidToken exception.
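 A hypothetical sketch of the lazy-fetch idea (not the committed 
 WebHdfsFileSystem change; runOp() and fetchNewDelegationToken() are assumed 
 stand-ins):
 {code}
 import java.io.IOException;
 import org.apache.hadoop.security.token.SecretManager.InvalidToken;

 // On InvalidToken, drop the cached delegation token, obtain a fresh one, and
 // retry the operation once instead of keeping a background renewer thread.
 abstract class LazyTokenSketch<T> {
   private volatile Object cachedToken;

   T runWithLazyToken() throws IOException {
     try {
       return runOp(cachedToken);
     } catch (InvalidToken it) {
       cachedToken = fetchNewDelegationToken();   // lazily replace the expired token
       return runOp(cachedToken);                 // single retry with the fresh token
     }
   }

   abstract T runOp(Object token) throws IOException;            // assumed webhdfs call
   abstract Object fetchNewDelegationToken() throws IOException; // assumed token fetch
 }
 {code}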



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6557) Move the reference of fsimage to FSNamesystem

2014-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039782#comment-14039782
 ] 

Hudson commented on HDFS-6557:
--

SUCCESS: Integrated in Hadoop-Yarn-trunk #590 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/590/])
HDFS-6557. Move the reference of fsimage to FSNamesystem. Contributed by Haohui 
Mai. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604242)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java


 Move the reference of fsimage to FSNamesystem
 -

 Key: HDFS-6557
 URL: https://issues.apache.org/jira/browse/HDFS-6557
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.5.0

 Attachments: HDFS-6557.000.patch, HDFS-6557.001.patch


 Per the suggestion from HDFS-6480, {{FSDirectory}} becomes an in-memory data 
 structure, so the reference to the fsimage should be moved to 
 {{FSNamesystem}}.
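 A rough sketch of the direction of the refactoring (class and field names here 
 are placeholders, not the actual patch):
 {code}
 // Placeholder classes; the point is only where the fsimage reference ends up.
 class FsImageRef {}
 class InMemoryDir {}            // FSDirectory stays purely in-memory

 class NamesystemSketch {
   private final FsImageRef fsImage;   // the fsimage reference now lives here
   private final InMemoryDir dir = new InMemoryDir();

   NamesystemSketch(FsImageRef fsImage) {
     this.fsImage = fsImage;
   }

   FsImageRef getFSImage() {
     return fsImage;
   }
 }
 {code}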



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6222) Remove background token renewer from webhdfs

2014-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039839#comment-14039839
 ] 

Hudson commented on HDFS-6222:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1781 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1781/])
HDFS-6222. Remove background token renewer from webhdfs. Contributed by Rushabh 
Shah and Daryn Sharp. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604300)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/SWebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java


 Remove background token renewer from webhdfs
 

 Key: HDFS-6222
 URL: https://issues.apache.org/jira/browse/HDFS-6222
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6222.branch-2-v2.patch, 
 HDFS-6222.branch-2-v3.patch, HDFS-6222.branch-2.patch, 
 HDFS-6222.branch-2.patch, HDFS-6222.trunk-v2.patch, HDFS-6222.trunk-v2.patch, 
 HDFS-6222.trunk-v3.patch, HDFS-6222.trunk.patch, HDFS-6222.trunk.patch


 The background token renewer is a source of problems for long-running 
 daemons.  Webhdfs should lazily fetch a new token when it receives an 
 InvalidToken exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6557) Move the reference of fsimage to FSNamesystem

2014-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039842#comment-14039842
 ] 

Hudson commented on HDFS-6557:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1781 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1781/])
HDFS-6557. Move the reference of fsimage to FSNamesystem. Contributed by Haohui 
Mai. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604242)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java


 Move the reference of fsimage to FSNamesystem
 -

 Key: HDFS-6557
 URL: https://issues.apache.org/jira/browse/HDFS-6557
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.5.0

 Attachments: HDFS-6557.000.patch, HDFS-6557.001.patch


 Per the suggestion from HDFS-6480, {{FSDirectory}} becomes an in-memory data 
 structure, so the reference to the fsimage should be moved to 
 {{FSNamesystem}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6535) HDFS quota update is wrong when file is appended

2014-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039837#comment-14039837
 ] 

Hudson commented on HDFS-6535:
--

FAILURE: Integrated in Hadoop-Hdfs-trunk #1781 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1781/])
HDFS-6535. HDFS quota update is wrong when file is appended. Contributed by 
George Wong. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604226)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDiskspaceQuotaUpdate.java


 HDFS quota update is wrong when file is appended
 

 Key: HDFS-6535
 URL: https://issues.apache.org/jira/browse/HDFS-6535
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0
Reporter: George Wong
Assignee: George Wong
 Fix For: 2.5.0

 Attachments: HDFS-6535.patch, HDFS-6535_v1.patch, TestHDFSQuota.java


 When a file in a directory with the quota feature enabled is appended, the 
 cached disk consumption should be updated. 
 But currently, the update is wrong.
 Use the uploaded unit test to reproduce this bug.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6222) Remove background token renewer from webhdfs

2014-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039863#comment-14039863
 ] 

Hudson commented on HDFS-6222:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1808 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1808/])
HDFS-6222. Remove background token renewer from webhdfs. Contributed by Rushabh 
Shah and Daryn Sharp. (cnauroth: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604300)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/security/token/delegation/DelegationTokenIdentifier.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/SWebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTokens.java


 Remove background token renewer from webhdfs
 

 Key: HDFS-6222
 URL: https://issues.apache.org/jira/browse/HDFS-6222
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: webhdfs
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 3.0.0, 2.5.0

 Attachments: HDFS-6222.branch-2-v2.patch, 
 HDFS-6222.branch-2-v3.patch, HDFS-6222.branch-2.patch, 
 HDFS-6222.branch-2.patch, HDFS-6222.trunk-v2.patch, HDFS-6222.trunk-v2.patch, 
 HDFS-6222.trunk-v3.patch, HDFS-6222.trunk.patch, HDFS-6222.trunk.patch


 The background token renewer is a source of problems for long-running 
 daemons.  Webhdfs should lazily fetch a new token when it receives an 
 InvalidToken exception.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6535) HDFS quota update is wrong when file is appended

2014-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039861#comment-14039861
 ] 

Hudson commented on HDFS-6535:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1808 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1808/])
HDFS-6535. HDFS quota update is wrong when file is appended. Contributed by 
George Wong. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604226)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestDiskspaceQuotaUpdate.java


 HDFS quota update is wrong when file is appended
 

 Key: HDFS-6535
 URL: https://issues.apache.org/jira/browse/HDFS-6535
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.4.0
Reporter: George Wong
Assignee: George Wong
 Fix For: 2.5.0

 Attachments: HDFS-6535.patch, HDFS-6535_v1.patch, TestHDFSQuota.java


 When a file in a directory with the quota feature enabled is appended, the 
 cached disk consumption should be updated. 
 But currently, the update is wrong.
 Use the uploaded unit test to reproduce this bug.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6557) Move the reference of fsimage to FSNamesystem

2014-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039866#comment-14039866
 ] 

Hudson commented on HDFS-6557:
--

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1808 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1808/])
HDFS-6557. Move the reference of fsimage to FSNamesystem. Contributed by Haohui 
Mai. (wheat9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604242)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImage.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/NameNode.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NameNodeAdapter.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestFSPermissionChecker.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java


 Move the reference of fsimage to FSNamesystem
 -

 Key: HDFS-6557
 URL: https://issues.apache.org/jira/browse/HDFS-6557
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Haohui Mai
Assignee: Haohui Mai
 Fix For: 2.5.0

 Attachments: HDFS-6557.000.patch, HDFS-6557.001.patch


 Per the suggestion from HDFS-6480, {{FSDirectory}} becomes an in-memory data 
 structure, so the reference to the fsimage should be moved to 
 {{FSNamesystem}}.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6578) add toString method to DatanodeStorage etc for easier debugging

2014-06-21 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039930#comment-14039930
 ] 

Arpit Agarwal commented on HDFS-6578:
-

Hi [~yzhangal],

bq. _BlockManager.processReport accumulates information of prior calls_
processReport does not accumulate results. Can we just say _The result of the 
last BlockManager.processReport call is accurate_?

 add toString method to DatanodeStorage etc for easier debugging
 ---

 Key: HDFS-6578
 URL: https://issues.apache.org/jira/browse/HDFS-6578
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HDFS-6578.001.patch


 It would be nice to add a toString() method to the DatanodeStorage class, so 
 we can print out its key info more easily while debugging.
 Another thing is, at the end of BlockManager#processReport, there is the 
 following log message:
 {code}
 blockLog.info("BLOCK* processReport: from storage " + storage.getStorageID()
     + " node " + nodeID + ", blocks: " + newReport.getNumberOfBlocks()
     + ", processing time: " + (endTime - startTime) + " msecs");
 return !node.hasStaleStorages();
 {code}
 We could add node.hasStaleStorages() to the log, and possibly replace 
 storage.getStorageID() with the suggested storage.toString().
 Any comments? Thanks.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-4667) Capture renamed files/directories in snapshot diff report

2014-06-21 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-4667:


   Resolution: Fixed
Fix Version/s: 2.5.0
   Status: Resolved  (was: Patch Available)

Thanks again, Binglin and Nicholas! I've committed this to trunk and branch-2.

 Capture renamed files/directories in snapshot diff report
 -

 Key: HDFS-4667
 URL: https://issues.apache.org/jira/browse/HDFS-4667
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.5.0

 Attachments: HDFS-4667.002.patch, HDFS-4667.002.patch, 
 HDFS-4667.003.patch, HDFS-4667.004.patch, HDFS-4667.demo.patch, 
 HDFS-4667.v1.patch, getfullname-snapshot-support.patch


 Currently, the diff report only shows file/dir creation, deletion and 
 modification. Now that rename with snapshots is supported, renamed files/dirs 
 should also be captured in the diff report.
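 As an illustration of how a client could inspect the diff report once renames 
 are captured, here is a small sketch ("/data", "s1" and "s2" are placeholder 
 names for an existing snapshottable directory and its snapshots):
 {code}
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hdfs.DistributedFileSystem;
 import org.apache.hadoop.hdfs.protocol.SnapshotDiffReport;

 public class SnapshotDiffExample {
   public static void main(String[] args) throws Exception {
     Configuration conf = new Configuration();
     Path dir = new Path("/data");   // an existing snapshottable directory on HDFS
     DistributedFileSystem dfs = (DistributedFileSystem) dir.getFileSystem(conf);
     try {
       SnapshotDiffReport report = dfs.getSnapshotDiffReport(dir, "s1", "s2");
       // With this change, renamed files/dirs show up in the report alongside
       // the create/delete/modify entries.
       System.out.println(report);
     } finally {
       dfs.close();
     }
   }
 }
 {code}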



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-4667) Capture renamed files/directories in snapshot diff report

2014-06-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-4667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039985#comment-14039985
 ] 

Hudson commented on HDFS-4667:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #5750 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5750/])
HDFS-4667. Capture renamed files/directories in snapshot diff report. 
Contributed by Jing Zhao and Binglin Chang. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1604488)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocol/SnapshotDiffReport.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/protocolPB/PBHelper.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectoryAttributes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFileAttributes.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeReference.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectoryWithSnapshotFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileWithSnapshotFeature.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/INodeDirectorySnapshottable.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/proto/hdfs.proto
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/site/xdoc/HdfsSnapshots.xml
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestFullPathNameWithSnapshot.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestRenameWithSnapshots.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDiffReport.java


 Capture renamed files/directories in snapshot diff report
 -

 Key: HDFS-4667
 URL: https://issues.apache.org/jira/browse/HDFS-4667
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
 Fix For: 2.5.0

 Attachments: HDFS-4667.002.patch, HDFS-4667.002.patch, 
 HDFS-4667.003.patch, HDFS-4667.004.patch, HDFS-4667.demo.patch, 
 HDFS-4667.v1.patch, getfullname-snapshot-support.patch


 Currently, the diff report only shows file/dir creation, deletion and 
 modification. Now that rename with snapshots is supported, renamed files/dirs 
 should also be captured in the diff report.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HDFS-6585) INodesInPath.resolve is called multiple times in FSNamesystem.setPermission

2014-06-21 Thread Zhilei Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhilei Xu updated HDFS-6585:


Attachment: patch_f15b7d505f12213f1ee9fb5ddb4bdaa64f9f623d.txt

 INodesInPath.resolve is called multiple times in FSNamesystem.setPermission
 ---

 Key: HDFS-6585
 URL: https://issues.apache.org/jira/browse/HDFS-6585
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Zhilei Xu
Assignee: Zhilei Xu
  Labels: patch
 Attachments: patch_ab60af58e03b323dd4b18d32c4def1f008b98822.txt, 
 patch_f15b7d505f12213f1ee9fb5ddb4bdaa64f9f623d.txt


 Most of the APIs (both internal and external) in FSNamesystem call 
 INodesInPath.resolve() to get the list of INodes corresponding to a file 
 path. Usually one API will call resolve() multiple times, which is a waste 
 of time.
 This issue particularly refers to FSNamesystem.setPermission, which calls 
 resolve() twice indirectly: once from checkOwner() and once from 
 dir.setPermission().
 We should save the result of resolve() and reuse it whenever possible 
 throughout the lifetime of an API call, instead of making new resolve() calls.
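 A hypothetical sketch of the resolve-once idea (the helper names below are 
 assumptions and do not mirror the actual FSNamesystem/FSDirectory signatures):
 {code}
 class ResolveOnceSketch {
   interface Resolver { Object resolve(String path); }  // stands in for INodesInPath.resolve()
   private final Resolver dir;

   ResolveOnceSketch(Resolver dir) { this.dir = dir; }

   void setPermission(String src, short permission) {
     Object iip = dir.resolve(src);      // resolve the path exactly once per API call
     checkOwner(iip);                    // reuse the resolved INodes for the owner check
     applyPermission(iip, permission);   // ...and for the actual update
   }

   private void checkOwner(Object iip) { /* owner check against the resolved INodes */ }
   private void applyPermission(Object iip, short permission) { /* update the inode */ }
 }
 {code}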



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6584) Support archival storage

2014-06-21 Thread Mark Paget (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040032#comment-14040032
 ] 

Mark Paget commented on HDFS-6584:
--

Perhaps rack topology could allow for tagging low-priority storage. Then data 
could be moved there via a manual tagging mechanism, or via automation based 
on least-frequently-used data.

 Support archival storage
 

 Key: HDFS-6584
 URL: https://issues.apache.org/jira/browse/HDFS-6584
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: datanode, namenode
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze

 In most Hadoop clusters, as more and more data is stored for longer periods 
 of time, the demand for storage is outstripping the demand for compute. Hadoop 
 needs a cost-effective and easy-to-manage solution to meet this demand for 
 storage. The current solutions are:
 - Delete old unused data. This comes at the operational cost of identifying 
 unnecessary data and deleting it manually.
 - Add more nodes to the clusters. This adds unnecessary compute capacity to 
 the cluster along with the storage capacity.
 Hadoop needs a solution that decouples growing storage capacity from compute 
 capacity. Nodes with higher-density, less expensive storage and low compute 
 power are becoming available and can be used as cold storage in the clusters. 
 Based on policy, data can be moved from hot storage to cold storage. Adding 
 more nodes to the cold storage can grow the storage independently of the 
 compute capacity in the cluster.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HDFS-6585) INodesInPath.resolve is called multiple times in FSNamesystem.setPermission

2014-06-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-6585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14040035#comment-14040035
 ] 

Hadoop QA commented on HDFS-6585:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12651837/patch_f15b7d505f12213f1ee9fb5ddb4bdaa64f9f623d.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/7200//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/7200//console

This message is automatically generated.

 INodesInPath.resolve is called multiple times in FSNamesystem.setPermission
 ---

 Key: HDFS-6585
 URL: https://issues.apache.org/jira/browse/HDFS-6585
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Zhilei Xu
Assignee: Zhilei Xu
  Labels: patch
 Attachments: patch_ab60af58e03b323dd4b18d32c4def1f008b98822.txt, 
 patch_f15b7d505f12213f1ee9fb5ddb4bdaa64f9f623d.txt


 Most of the APIs (both internal and external) in FSNamesystem call 
 INodesInPath.resolve() to get the list of INodes corresponding to a file 
 path. Usually one API will call resolve() multiple times, which is a waste 
 of time.
 This issue particularly refers to FSNamesystem.setPermission, which calls 
 resolve() twice indirectly: once from checkOwner() and once from 
 dir.setPermission().
 We should save the result of resolve() and reuse it whenever possible 
 throughout the lifetime of an API call, instead of making new resolve() calls.



--
This message was sent by Atlassian JIRA
(v6.2#6252)