[jira] [Commented] (HDFS-5259) Support client which combines appended data with old data before sending it to NFS server

2013-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13785924#comment-13785924
 ] 

Hadoop QA commented on HDFS-5259:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606733/HDFS-5259.003.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5093//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5093//console

This message is automatically generated.

 Support client which combines appended data with old data before sending it 
 to NFS server
 ---

 Key: HDFS-5259
 URL: https://issues.apache.org/jira/browse/HDFS-5259
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Reporter: Yesha Vora
Assignee: Brandon Li
 Attachments: HDFS-5259.000.patch, HDFS-5259.001.patch, 
 HDFS-5259.003.patch


 Append does not work with some Linux clients. The client gets an 
 Input/output error when it tries to append, and the NFS server treats the 
 write as a random write and fails the request.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5299) DFS client hangs in updatePipeline RPC when failover happened

2013-10-04 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5299:


Status: Patch Available  (was: Open)

 DFS client hangs in updatePipeline RPC when failover happened
 -

 Key: HDFS-5299
 URL: https://issues.apache.org/jira/browse/HDFS-5299
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.1.0-beta, 3.0.0
Reporter: Vinay
Assignee: Vinay
Priority: Blocker
 Attachments: HDFS-5299.patch


 DFSClient hangs in the updatePipeline call to the NameNode when a failover 
 happens at exactly the same time.
 When we dug in, the issue was found to be in the handling of the RetryCache 
 in updatePipeline.
 Here are the steps:
 1. The client was writing slowly.
 2. One of the datanodes went down and updatePipeline was called on the ANN.
 3. The call reached the ANN, but while processing the updatePipeline call 
 the ANN got shut down.
 4. The client retried (since the API is marked AtMostOnce) against the other 
 NameNode. At that time that NN was still in STANDBY, so the client got a 
 StandbyException.
 5. One more client failover happened.
 6. The SNN became Active.
 7. The client called the current ANN again for updatePipeline.
 Now the client call hangs in the NN, waiting for the cached call with the 
 same call id to finish. But that cached call already completed the last time 
 with a StandbyException.
 Conclusion:
 Whenever a new entry is added to the cache, we must record the result of the 
 call before returning from the call or throwing an exception.
 I can see a similar issue in multiple RPCs in FSNamesystem.
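A minimal sketch of the ordering the conclusion calls for, assuming the 
org.apache.hadoop.ipc.RetryCache API (waitForCompletion/setState); the handler 
body and the cache constructor arguments are illustrative:

{code:java}
import java.io.IOException;
import org.apache.hadoop.ipc.RetryCache;
import org.apache.hadoop.ipc.RetryCache.CacheEntry;

class AtMostOnceHandler {
  // Illustrative cache: name, share of heap, entry expiry in ms.
  private final RetryCache retryCache =
      new RetryCache("updatePipeline", 0.03, 600000);

  void updatePipeline() throws IOException {
    CacheEntry cacheEntry = RetryCache.waitForCompletion(retryCache);
    if (cacheEntry != null && cacheEntry.isSuccess()) {
      return; // a previous attempt already succeeded; this call is a retry
    }
    boolean success = false;
    try {
      doUpdatePipeline(); // the real work; may throw, e.g. StandbyException
      success = true;
    } finally {
      // Record the outcome before returning or throwing, so a retried call
      // with the same call id never blocks on a stale cache entry.
      RetryCache.setState(cacheEntry, success);
    }
  }

  private void doUpdatePipeline() throws IOException { /* elided */ }
}
{code}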



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5299) DFS client hangs in updatePipeline RPC when failover happened

2013-10-04 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5299:


Attachment: HDFS-5299.patch

Attaching the patch. Please review

 DFS client hangs in updatePipeline RPC when failover happened
 -

 Key: HDFS-5299
 URL: https://issues.apache.org/jira/browse/HDFS-5299
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
Priority: Blocker
 Attachments: HDFS-5299.patch


 DFSClient hangs in the updatePipeline call to the NameNode when a failover 
 happens at exactly the same time.
 When we dug in, the issue was found to be in the handling of the RetryCache 
 in updatePipeline.
 Here are the steps:
 1. The client was writing slowly.
 2. One of the datanodes went down and updatePipeline was called on the ANN.
 3. The call reached the ANN, but while processing the updatePipeline call 
 the ANN got shut down.
 4. The client retried (since the API is marked AtMostOnce) against the other 
 NameNode. At that time that NN was still in STANDBY, so the client got a 
 StandbyException.
 5. One more client failover happened.
 6. The SNN became Active.
 7. The client called the current ANN again for updatePipeline.
 Now the client call hangs in the NN, waiting for the cached call with the 
 same call id to finish. But that cached call already completed the last time 
 with a StandbyException.
 Conclusion:
 Whenever a new entry is added to the cache, we must record the result of the 
 call before returning from the call or throwing an exception.
 I can see a similar issue in multiple RPCs in FSNamesystem.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Vinay (JIRA)
Vinay created HDFS-5300:
---

 Summary: FSNameSystem#deleteSnapshot() should not check owner in 
case of permissions disabled
 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinay
Assignee: Vinay


FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
disabled

{code:java}
checkOperation(OperationCategory.WRITE);
if (isInSafeMode()) {
  throw new SafeModeException(
      "Cannot delete snapshot for " + snapshotRoot, safeMode);
}
FSPermissionChecker pc = getPermissionChecker();
checkOwner(pc, snapshotRoot);

BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
List<INode> removedINodes = new ChunkedArrayList<INode>();
dir.writeLock();
{code}

The owner should be checked only when permissions are enabled, as is done for 
all other operations.
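
A hedged sketch of the proposed change, assuming the existing FSNamesystem 
isPermissionEnabled flag guards the check the same way it does for the other 
operations:

{code:java}
checkOperation(OperationCategory.WRITE);
if (isInSafeMode()) {
  throw new SafeModeException(
      "Cannot delete snapshot for " + snapshotRoot, safeMode);
}
if (isPermissionEnabled) {
  // Enforce snapshot-root ownership only when dfs.permissions.enabled is
  // true, matching the pattern used by the other FSNamesystem operations.
  FSPermissionChecker pc = getPermissionChecker();
  checkOwner(pc, snapshotRoot);
}
{code}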



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5300:


Attachment: HDFS-5300.patch

Attached the patch. Please review

 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 

 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5300.patch


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 {code:java}
 checkOperation(OperationCategory.WRITE);
 if (isInSafeMode()) {
   throw new SafeModeException(
       "Cannot delete snapshot for " + snapshotRoot, safeMode);
 }
 FSPermissionChecker pc = getPermissionChecker();
 checkOwner(pc, snapshotRoot);
 BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
 List<INode> removedINodes = new ChunkedArrayList<INode>();
 dir.writeLock();
 {code}
 The owner should be checked only when permissions are enabled, as is done 
 for all other operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5300:


Affects Version/s: 3.0.0
   2.1.0-beta
   Status: Patch Available  (was: Open)

 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 

 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.1.0-beta, 3.0.0
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5300.patch


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 {code:java}
 checkOperation(OperationCategory.WRITE);
 if (isInSafeMode()) {
   throw new SafeModeException(
       "Cannot delete snapshot for " + snapshotRoot, safeMode);
 }
 FSPermissionChecker pc = getPermissionChecker();
 checkOwner(pc, snapshotRoot);
 BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
 List<INode> removedINodes = new ChunkedArrayList<INode>();
 dir.writeLock();
 {code}
 The owner should be checked only when permissions are enabled, as is done 
 for all other operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13785954#comment-13785954
 ] 

Jing Zhao commented on HDFS-5300:
-

Thanks for the fix Vinay! So for the new unit test, I think maybe it's better 
to put it in TestSnapshotDeletion.java?

 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 

 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5300.patch


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 {code:java}
 checkOperation(OperationCategory.WRITE);
 if (isInSafeMode()) {
   throw new SafeModeException(
       "Cannot delete snapshot for " + snapshotRoot, safeMode);
 }
 FSPermissionChecker pc = getPermissionChecker();
 checkOwner(pc, snapshotRoot);
 BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
 List<INode> removedINodes = new ChunkedArrayList<INode>();
 dir.writeLock();
 {code}
 The owner should be checked only when permissions are enabled, as is done 
 for all other operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13785957#comment-13785957
 ] 

Vinay commented on HDFS-5300:
-

Oh.. Thanks Jing. I will move the unit test to TestSnapshotDeletion.java

 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 

 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5300.patch


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 {code:java}
 checkOperation(OperationCategory.WRITE);
 if (isInSafeMode()) {
   throw new SafeModeException(
       "Cannot delete snapshot for " + snapshotRoot, safeMode);
 }
 FSPermissionChecker pc = getPermissionChecker();
 checkOwner(pc, snapshotRoot);
 BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
 List<INode> removedINodes = new ChunkedArrayList<INode>();
 dir.writeLock();
 {code}
 The owner should be checked only when permissions are enabled, as is done 
 for all other operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Vinay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinay updated HDFS-5300:


Attachment: HDFS-5300.patch

Here is the updated patch

 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 

 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5300.patch, HDFS-5300.patch


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 {code:java}
 checkOperation(OperationCategory.WRITE);
 if (isInSafeMode()) {
   throw new SafeModeException(
       "Cannot delete snapshot for " + snapshotRoot, safeMode);
 }
 FSPermissionChecker pc = getPermissionChecker();
 checkOwner(pc, snapshotRoot);
 BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
 List<INode> removedINodes = new ChunkedArrayList<INode>();
 dir.writeLock();
 {code}
 The owner should be checked only when permissions are enabled, as is done 
 for all other operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5299) DFS client hangs in updatePipeline RPC when failover happened

2013-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13785987#comment-13785987
 ] 

Hadoop QA commented on HDFS-5299:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606736/HDFS-5299.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5094//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5094//console

This message is automatically generated.

 DFS client hangs in updatePipeline RPC when failover happened
 -

 Key: HDFS-5299
 URL: https://issues.apache.org/jira/browse/HDFS-5299
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
Priority: Blocker
 Attachments: HDFS-5299.patch


 DFSClient hangs in the updatePipeline call to the NameNode when a failover 
 happens at exactly the same time.
 When we dug in, the issue was found to be in the handling of the RetryCache 
 in updatePipeline.
 Here are the steps:
 1. The client was writing slowly.
 2. One of the datanodes went down and updatePipeline was called on the ANN.
 3. The call reached the ANN, but while processing the updatePipeline call 
 the ANN got shut down.
 4. The client retried (since the API is marked AtMostOnce) against the other 
 NameNode. At that time that NN was still in STANDBY, so the client got a 
 StandbyException.
 5. One more client failover happened.
 6. The SNN became Active.
 7. The client called the current ANN again for updatePipeline.
 Now the client call hangs in the NN, waiting for the cached call with the 
 same call id to finish. But that cached call already completed the last time 
 with a StandbyException.
 Conclusion:
 Whenever a new entry is added to the cache, we must record the result of the 
 call before returning from the call or throwing an exception.
 I can see a similar issue in multiple RPCs in FSNamesystem.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5283) NN not coming out of startup safemode due to under construction blocks only inside snapshots also counted in safemode threshold

2013-10-04 Thread Tsz Wo (Nicholas), SZE (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786021#comment-13786021
 ] 

Tsz Wo (Nicholas), SZE commented on HDFS-5283:
--

 ... My only doubt is, why are there different behaviors between file delete 
 and directory delete? Was not changing INodes recursively intentional, or is 
 it an issue? 

It is intentional.  Otherwise, the running time of recordModification(..) 
becomes O(subtree size).  For a non-WithSnapshot INode, the state (in current 
state or in some snapshot state) is determined by its parent.

 NN not coming out of startup safemode due to under construction blocks only 
 inside snapshots also counted in safemode threshold
 

 Key: HDFS-5283
 URL: https://issues.apache.org/jira/browse/HDFS-5283
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Vinay
Assignee: Vinay
Priority: Blocker
 Attachments: HDFS-5283.000.patch, HDFS-5283.patch, HDFS-5283.patch


 This was observed in one of our environments:
 1. An MR job was running which had created some temporary files and was 
 writing to them.
 2. A snapshot was taken.
 3. The job was killed and the temporary files were deleted.
 4. The Namenode was restarted.
 5. After the restart, the Namenode stayed in safemode waiting for blocks.
 Analysis
 -
 1. The snapshot taken also includes the temporary files which were open, and 
 the original files were later deleted.
 2. The under-construction block count was taken from the leases, which does 
 not consider UC blocks that exist only inside snapshots.
 3. So the safemode threshold count was too high and the NN did not come out 
 of safemode.
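For context, a hedged sketch of the counting the analysis refers to (loosely 
modeled on FSNamesystem#getCompleteBlocksTotal; the helper names are 
illustrative):

{code:java}
// Safe mode waits until blockSafe >= threshold * blockTotal, and blockTotal
// is supposed to count only COMPLETE blocks. UC blocks are found by walking
// the lease manager's open files:
long getCompleteBlocksTotal() {
  long numUCBlocks = 0;
  for (Lease lease : leaseManager.getSortedLeases()) {
    numUCBlocks += countLastUCBlocks(lease); // illustrative helper
  }
  // The bug described above: a file that stays open only inside a snapshot
  // no longer holds a lease, so its UC block is not subtracted here and is
  // wrongly counted as a complete block the DataNodes must report.
  return getBlocksTotal() - numUCBlocks;
}
{code}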



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786045#comment-13786045
 ] 

Hadoop QA commented on HDFS-5300:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606745/HDFS-5300.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 eclipse:eclipse{color}.  The patch failed to build with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5095//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5095//console

This message is automatically generated.

 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 

 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5300.patch, HDFS-5300.patch


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 {code:java}
 checkOperation(OperationCategory.WRITE);
 if (isInSafeMode()) {
   throw new SafeModeException(
       "Cannot delete snapshot for " + snapshotRoot, safeMode);
 }
 FSPermissionChecker pc = getPermissionChecker();
 checkOwner(pc, snapshotRoot);
 BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
 List<INode> removedINodes = new ChunkedArrayList<INode>();
 dir.writeLock();
 {code}
 The owner should be checked only when permissions are enabled, as is done 
 for all other operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5283) NN not coming out of startup safemode due to under construction blocks only inside snapshots also counted in safemode threshold

2013-10-04 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786051#comment-13786051
 ] 

Vinay commented on HDFS-5283:
-

bq. It is intentional. Otherwise, the running time of recordModification(..) 
becomes O(subtree size). For a non-WithSnapshot INode, the state (in current 
state or in some snapshot state) is determined by its parent.
Thanks for the explanation Nicholas. 

 NN not coming out of startup safemode due to under construction blocks only 
 inside snapshots also counted in safemode threshold
 

 Key: HDFS-5283
 URL: https://issues.apache.org/jira/browse/HDFS-5283
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Vinay
Assignee: Vinay
Priority: Blocker
 Attachments: HDFS-5283.000.patch, HDFS-5283.patch, HDFS-5283.patch


 This was observed in one of our environments:
 1. An MR job was running which had created some temporary files and was 
 writing to them.
 2. A snapshot was taken.
 3. The job was killed and the temporary files were deleted.
 4. The Namenode was restarted.
 5. After the restart, the Namenode stayed in safemode waiting for blocks.
 Analysis
 -
 1. The snapshot taken also includes the temporary files which were open, and 
 the original files were later deleted.
 2. The under-construction block count was taken from the leases, which does 
 not consider UC blocks that exist only inside snapshots.
 3. So the safemode threshold count was too high and the NN did not come out 
 of safemode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5178) DataTransferProtocol changes for supporting multiple storages per DN

2013-10-04 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786169#comment-13786169
 ] 

Junping Du commented on HDFS-5178:
--

The test failure in TestDataTransferProtocol seems unrelated, as the same 
failure already happens on the HDFS-2832 branch (related to chooseTarget()). 
Hi [~szetszwo] and [~arpitagarwal], do you think we can fix the unit test 
later?

 DataTransferProtocol changes for supporting multiple storages per DN
 ---

 Key: HDFS-5178
 URL: https://issues.apache.org/jira/browse/HDFS-5178
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode
Affects Versions: Heterogeneous Storage (HDFS-2832)
Reporter: Junping Du
Assignee: Junping Du
 Attachments: HDFS-5178-v1.patch, HDFS-5178-v2.patch


 Per the discussion in HDFS-5157, DataTransferProtocol should be updated to 
 add StorageID info to some methods, e.g. writeBlock(), replaceBlock(), etc.
 After that, BlockReceiver (and the sender) and receiveBlock can operate on 
 the target storage with the new parameter.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5294) DistributedFileSystem#getFileLinkStatus should not fully qualify the link target

2013-10-04 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786311#comment-13786311
 ] 

Daryn Sharp commented on HDFS-5294:
---

It's a dup in concept, but addressing the issue described will require a 
specific change to {{DistributedFileSystem}}'s {{getFileLinkStatus}} and 
{{getLinkTarget}}, so I think it should stay open in addition to the common 
jira.

bq. [...] It will also affect getFileStatus, listStatus, getLocatedFileStatus, 
resolvePath, listCorruptFileBlocks, globStatus, createSnapshot, etc. [...] I 
really don't think we should do this unless we also do symlink resolution 
server-side to avoid doing N symlink resolution RPCs every time we use a path 
with N symlinks in it.

I think there's some misunderstanding.  The issue of whether the client is able 
to obtain the exact symlink target is orthogonal to whether the other methods 
return a {{FileStatus}} with a (un)qualified path.

 DistributedFileSystem#getFileLinkStatus should not fully qualify the link 
 target
 

 Key: HDFS-5294
 URL: https://issues.apache.org/jira/browse/HDFS-5294
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp

 The NN returns a {{FileStatus}} containing the exact link target as specified 
 by the user at creation.  However, 
 {{DistributedFileSystem#getFileLinkStatus}} explicitly overwrites the target 
 with the fully scheme-qualified resolved path.  This causes multiple issues 
 such as:
 # Prevents clients from discerning if the target is relative or absolute
 # Mangles a target that is not intended to be a path
 # Causes incorrect resolution with multi-layered filesystems, i.e. the link 
 should be resolved relative to a higher-level fs (e.g. viewfs, chroot, 
 filtered, etc.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5119) Persist CacheManager state in the edit log

2013-10-04 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HDFS-5119:


Hadoop Flags: Reviewed

+1 for the patch, after addressing one minor thing in the JavaDoc shown below.  
{{unprotectedAddCachePool}} is an add method, but the JavaDoc says remove.  
Please feel free to commit after changing that.

Thank you for addressing all of the feedback.

{code}
  /**
   * Internal unchecked method used to remove a CachePool. Called directly when
   * reloading CacheManager state from the FSImage or edit log.
   * 
   * @param pool to be added
   */
  void unprotectedAddCachePool(CachePool pool) {
{code}


 Persist CacheManager state in the edit log
 --

 Key: HDFS-5119
 URL: https://issues.apache.org/jira/browse/HDFS-5119
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS-4949
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
 Attachments: hdfs-5119-1.patch, hdfs-5119-2.patch, hdfs-5119-3.patch


 CacheManager state should be persisted in the edit log.  At the moment, this 
 state consists of information about cache pools and cache directives.  It's 
 not necessary to persist any information about what is cached on the 
 DataNodes at any particular moment, since this changes all the time.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5299) DFS client hangs in updatePipeline RPC when failover happened

2013-10-04 Thread Uma Maheswara Rao G (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786385#comment-13786385
 ] 

Uma Maheswara Rao G commented on HDFS-5299:
---

Nit:
{code}
cluster = new MiniDFSCluster.Builder(new Configuration())
+.nnTopology(MiniDFSNNTopology.simpleHATopology()).numDataNodes(1)
+.build();
{code}
Please reuse the existing conf object; you need not create a new one for it.

Also please add a small javadoc for the test.

{code}
 CacheEntryWithPayload cacheEntry = RetryCache.waitForCompletion(retryCache,
null);
if (cacheEntry != null && cacheEntry.isSuccess()) {
  return (String) cacheEntry.getPayload();
}
final FSPermissionChecker pc = getPermissionChecker();
{code}
Here, if getPermissionChecker throws an exception, can a similar situation 
occur for that call? I think we will not retry on this exception, but waiting 
on the retry cache and setting its state should happen in the proper order to 
avoid situations like this.
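
A hedged sketch of the ordering being suggested (illustrative; the point is 
that anything that can throw after waitForCompletion must run where its 
failure still finalizes the cache entry):

{code:java}
CacheEntryWithPayload cacheEntry = RetryCache.waitForCompletion(retryCache,
    null);
if (cacheEntry != null && cacheEntry.isSuccess()) {
  return (String) cacheEntry.getPayload();
}
String result = null;
boolean success = false;
try {
  // getPermissionChecker() can throw (e.g. on a standby NN), so it belongs
  // inside the try block so that the finally still records the failure.
  final FSPermissionChecker pc = getPermissionChecker();
  // ... rest of the operation, setting result ...
  success = true;
} finally {
  RetryCache.setState(cacheEntry, success, result);
}
{code}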

 DFS client hangs in updatePipeline RPC when failover happened
 -

 Key: HDFS-5299
 URL: https://issues.apache.org/jira/browse/HDFS-5299
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
Priority: Blocker
 Attachments: HDFS-5299.patch


 DFSClient hangs in the updatePipeline call to the NameNode when a failover 
 happens at exactly the same time.
 When we dug in, the issue was found to be in the handling of the RetryCache 
 in updatePipeline.
 Here are the steps:
 1. The client was writing slowly.
 2. One of the datanodes went down and updatePipeline was called on the ANN.
 3. The call reached the ANN, but while processing the updatePipeline call 
 the ANN got shut down.
 4. The client retried (since the API is marked AtMostOnce) against the other 
 NameNode. At that time that NN was still in STANDBY, so the client got a 
 StandbyException.
 5. One more client failover happened.
 6. The SNN became Active.
 7. The client called the current ANN again for updatePipeline.
 Now the client call hangs in the NN, waiting for the cached call with the 
 same call id to finish. But that cached call already completed the last time 
 with a StandbyException.
 Conclusion:
 Whenever a new entry is added to the cache, we must record the result of the 
 call before returning from the call or throwing an exception.
 I can see a similar issue in multiple RPCs in FSNamesystem.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786397#comment-13786397
 ] 

Jing Zhao commented on HDFS-5300:
-

The patch looks great. Only one nit: we can rename the new test from 
testDeleteSnapShotWithPermissionsDisabled to 
testDeleteSnapshotWithPermissionsDisabled to be consistent with all other 
test cases.

 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 

 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5300.patch, HDFS-5300.patch


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 {code:java}
 checkOperation(OperationCategory.WRITE);
 if (isInSafeMode()) {
   throw new SafeModeException(
       "Cannot delete snapshot for " + snapshotRoot, safeMode);
 }
 FSPermissionChecker pc = getPermissionChecker();
 checkOwner(pc, snapshotRoot);
 BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
 List<INode> removedINodes = new ChunkedArrayList<INode>();
 dir.writeLock();
 {code}
 The owner should be checked only when permissions are enabled, as is done 
 for all other operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5119) Persist CacheManager state in the edit log

2013-10-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5119:
--

Attachment: hdfs-5119-4.patch

Good catch, thanks Chris. v4 patch attached with the typo fixed, will commit 
shortly.

 Persist CacheManager state in the edit log
 --

 Key: HDFS-5119
 URL: https://issues.apache.org/jira/browse/HDFS-5119
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS-4949
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
 Attachments: hdfs-5119-1.patch, hdfs-5119-2.patch, hdfs-5119-3.patch, 
 hdfs-5119-4.patch


 CacheManager state should be persisted in the edit log.  At the moment, this 
 state consists of information about cache pools and cache directives.  It's 
 not necessary to persist any information about what is cached on the 
 DataNodes at any particular moment, since this changes all the time.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5300:


Attachment: HDFS-5300.patch

Uploading a patch with the nit fixed, to save time.

 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 

 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5300.patch, HDFS-5300.patch, HDFS-5300.patch


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 {code:java}
 checkOperation(OperationCategory.WRITE);
 if (isInSafeMode()) {
   throw new SafeModeException(
       "Cannot delete snapshot for " + snapshotRoot, safeMode);
 }
 FSPermissionChecker pc = getPermissionChecker();
 checkOwner(pc, snapshotRoot);
 BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
 List<INode> removedINodes = new ChunkedArrayList<INode>();
 dir.writeLock();
 {code}
 The owner should be checked only when permissions are enabled, as is done 
 for all other operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5119) Persist CacheManager state in the edit log

2013-10-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-5119.
---

  Resolution: Fixed
   Fix Version/s: HDFS-4949
Target Version/s: HDFS-4949

Committed to branch, thanks again Chris.

 Persist CacheManager state in the edit log
 --

 Key: HDFS-5119
 URL: https://issues.apache.org/jira/browse/HDFS-5119
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS-4949
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
 Fix For: HDFS-4949

 Attachments: hdfs-5119-1.patch, hdfs-5119-2.patch, hdfs-5119-3.patch, 
 hdfs-5119-4.patch


 CacheManager state should be persisted in the edit log.  At the moment, this 
 state consists of information about cache pools and cache directives.  It's 
 not necessary to persist any information about what is cached on the 
 DataNodes at any particular moment, since this changes all the time.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5301) adding block pool % for each namespace on federated namenode webUI

2013-10-04 Thread Siqi Li (JIRA)
Siqi Li created HDFS-5301:
-

 Summary: adding block pool % for each namespace on federated 
namenode webUI
 Key: HDFS-5301
 URL: https://issues.apache.org/jira/browse/HDFS-5301
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Siqi Li
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786418#comment-13786418
 ] 

Jing Zhao commented on HDFS-5300:
-

Besides the nit, +1 for the patch.

 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 

 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5300.patch, HDFS-5300.patch, HDFS-5300.patch


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 {code:java}
 checkOperation(OperationCategory.WRITE);
 if (isInSafeMode()) {
   throw new SafeModeException(
       "Cannot delete snapshot for " + snapshotRoot, safeMode);
 }
 FSPermissionChecker pc = getPermissionChecker();
 checkOwner(pc, snapshotRoot);
 BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
 List<INode> removedINodes = new ChunkedArrayList<INode>();
 dir.writeLock();
 {code}
 The owner should be checked only when permissions are enabled, as is done 
 for all other operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5301) adding block pool % for each namespace on federated namenode webUI

2013-10-04 Thread Siqi Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siqi Li updated HDFS-5301:
--

Status: Patch Available  (was: Open)

 adding block pool % for each namespace on federated namenode webUI
 --

 Key: HDFS-5301
 URL: https://issues.apache.org/jira/browse/HDFS-5301
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Siqi Li
Priority: Minor
 Attachments: HDFS-5301-v1.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5301) adding block pool % for each namespace on federated namenode webUI

2013-10-04 Thread Siqi Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siqi Li updated HDFS-5301:
--

Attachment: HDFS-5301-v1.patch

 adding block pool % for each namespace on federated namenode webUI
 --

 Key: HDFS-5301
 URL: https://issues.apache.org/jira/browse/HDFS-5301
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Siqi Li
Priority: Minor
 Attachments: HDFS-5301-v1.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5259) Support client which combines appended data with old data before sending it to NFS server

2013-10-04 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HDFS-5259:
-

Attachment: HDFS-5259.004.patch

HDFS-5259.004.patch extends HDFS-5259.001.patch with a unit test and bug 
fixes to the OffsetRange class.

 Support client which combines appended data with old data before sending it 
 to NFS server
 ---

 Key: HDFS-5259
 URL: https://issues.apache.org/jira/browse/HDFS-5259
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Reporter: Yesha Vora
Assignee: Brandon Li
 Attachments: HDFS-5259.000.patch, HDFS-5259.001.patch, 
 HDFS-5259.003.patch, HDFS-5259.004.patch


 Append does not work with some Linux clients. The client gets an 
 Input/output error when it tries to append, and the NFS server treats the 
 write as a random write and fails the request.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5283) NN not coming out of startup safemode due to under construction blocks only inside snapshots also counted in safemode threshold

2013-10-04 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786427#comment-13786427
 ] 

Jing Zhao commented on HDFS-5283:
-

bq. instead used ((BlockInfoUnderConstruction) 
storedBlock).getNumExpectedLocations(), so test was passing. 

This should also work actually. I failed to get the actual behavior when 
generating the new patch. I think your fix there should be better. 

 NN not coming out of startup safemode due to under construction blocks only 
 inside snapshots also counted in safemode threshold
 

 Key: HDFS-5283
 URL: https://issues.apache.org/jira/browse/HDFS-5283
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Vinay
Assignee: Vinay
Priority: Blocker
 Attachments: HDFS-5283.000.patch, HDFS-5283.patch, HDFS-5283.patch


 This was observed in one of our environments:
 1. An MR job was running which had created some temporary files and was 
 writing to them.
 2. A snapshot was taken.
 3. The job was killed and the temporary files were deleted.
 4. The Namenode was restarted.
 5. After the restart, the Namenode stayed in safemode waiting for blocks.
 Analysis
 -
 1. The snapshot taken also includes the temporary files which were open, and 
 the original files were later deleted.
 2. The under-construction block count was taken from the leases, which does 
 not consider UC blocks that exist only inside snapshots.
 3. So the safemode threshold count was too high and the NN did not come out 
 of safemode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5283) NN not coming out of startup safemode due to under construction blocks only inside snapshots also counted in safemode threshold

2013-10-04 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786429#comment-13786429
 ] 

Jing Zhao commented on HDFS-5283:
-

bq. "get the actual behavior"

I mean, "get the actual result of the getNumExpectedLocations call".

 NN not coming out of startup safemode due to under construction blocks only 
 inside snapshots also counted in safemode threshold
 

 Key: HDFS-5283
 URL: https://issues.apache.org/jira/browse/HDFS-5283
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: snapshots
Affects Versions: 3.0.0, 2.1.1-beta
Reporter: Vinay
Assignee: Vinay
Priority: Blocker
 Attachments: HDFS-5283.000.patch, HDFS-5283.patch, HDFS-5283.patch


 This was observed in one of our environments:
 1. An MR job was running which had created some temporary files and was 
 writing to them.
 2. A snapshot was taken.
 3. The job was killed and the temporary files were deleted.
 4. The Namenode was restarted.
 5. After the restart, the Namenode stayed in safemode waiting for blocks.
 Analysis
 -
 1. The snapshot taken also includes the temporary files which were open, and 
 the original files were later deleted.
 2. The under-construction block count was taken from the leases, which does 
 not consider UC blocks that exist only inside snapshots.
 3. So the safemode threshold count was too high and the NN did not come out 
 of safemode.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5259) Support client which combines appended data with old data before sending it to NFS server

2013-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786460#comment-13786460
 ] 

Hadoop QA commented on HDFS-5259:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606849/HDFS-5259.004.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5098//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5098//console

This message is automatically generated.

 Support client which combines appended data with old data before sending it 
 to NFS server
 ---

 Key: HDFS-5259
 URL: https://issues.apache.org/jira/browse/HDFS-5259
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: nfs
Reporter: Yesha Vora
Assignee: Brandon Li
 Attachments: HDFS-5259.000.patch, HDFS-5259.001.patch, 
 HDFS-5259.003.patch, HDFS-5259.004.patch


 Append does not work with some Linux clients. The client gets an 
 Input/output error when it tries to append, and the NFS server treats the 
 write as a random write and fails the request.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5293) Symlink resolution requires unnecessary RPCs

2013-10-04 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786533#comment-13786533
 ] 

Daryn Sharp commented on HDFS-5293:
---

That doesn't work in stacked/proxy filesystems or anything that wants to do 
custom filtering.  At a minimum, you'd need to make the NN chroot-aware by 
defining "/" to be another path, and treat ".." out of the root as an 
unresolved link.  Then the NN has to deal with cyclic links, etc.  I'm not 
advocating any of that.

Resolving symlinks is purely a client side operation.  Think of unix: a mounted 
fs does not resolve symlinks.  Libc functions on the client do all the 
resolution.

 Symlink resolution requires unnecessary RPCs
 

 Key: HDFS-5293
 URL: https://issues.apache.org/jira/browse/HDFS-5293
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Priority: Critical

 When the NN encounters a symlink, it throws an {{UnresolvedLinkException}}.  
 This exception contains only the path that is a symlink.  The client issues 
 another RPC to obtain the link target, followed by another RPC with the link 
 target + remainder of the original path.
 {{UnresolvedLinkException}} should be returning both the link and the target 
 to avoid a costly and unnecessary intermediate RPC to obtain the link target.
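
A hedged sketch of the round trips described above. 'Namenode' stands in for 
the real ClientProtocol proxy, the link path is passed in rather than pulled 
out of the exception, and remainderAfter() is an illustrative helper:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.UnresolvedLinkException;

class SymlinkResolvingClient {
  interface Namenode {
    FileStatus getFileInfo(String src) throws IOException;
    String getLinkTarget(String path) throws IOException;
  }

  FileStatus getFileInfoResolving(Namenode nn, String path, String linkPath)
      throws IOException {
    try {
      return nn.getFileInfo(path);                // RPC 1: hits the symlink
    } catch (UnresolvedLinkException e) {
      // The exception carries only the link path, so the target costs an
      // extra RPC; returning the target in the exception would remove it.
      String target = nn.getLinkTarget(linkPath); // RPC 2: fetch the target
      String newPath = target + remainderAfter(path, linkPath);
      return nn.getFileInfo(newPath);             // RPC 3: retry
    }
  }

  private String remainderAfter(String path, String link) {
    // Illustrative: re-append the unresolved tail of the original path.
    return path.substring(Math.min(link.length(), path.length()));
  }
}
{code}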



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786557#comment-13786557
 ] 

Hadoop QA commented on HDFS-5300:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606843/HDFS-5300.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5096//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5096//console

This message is automatically generated.

 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 

 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Attachments: HDFS-5300.patch, HDFS-5300.patch, HDFS-5300.patch


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 {code:java}
 checkOperation(OperationCategory.WRITE);
 if (isInSafeMode()) {
   throw new SafeModeException(
       "Cannot delete snapshot for " + snapshotRoot, safeMode);
 }
 FSPermissionChecker pc = getPermissionChecker();
 checkOwner(pc, snapshotRoot);
 BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
 List<INode> removedINodes = new ChunkedArrayList<INode>();
 dir.writeLock();
 {code}
 The owner should be checked only when permissions are enabled, as is done 
 for all other operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5301) adding block pool % for each namespace on federated namenode webUI

2013-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786570#comment-13786570
 ] 

Hadoop QA commented on HDFS-5301:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606846/HDFS-5301-v1.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5097//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5097//console

This message is automatically generated.

 adding block pool % for each namespace on federated namenode webUI
 --

 Key: HDFS-5301
 URL: https://issues.apache.org/jira/browse/HDFS-5301
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Siqi Li
Priority: Minor
 Attachments: HDFS-5301-v1.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5301) adding block pool % for each namespace on federated namenode webUI

2013-10-04 Thread Siqi Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786599#comment-13786599
 ] 

Siqi Li commented on HDFS-5301:
---

It simply divides the block pool used space by the total capacity and displays 
the result on the webUI. It is straightforward; therefore no test case is 
needed.
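
For the record, a hedged sketch of the computation (names are illustrative):

{code:java}
// Percentage of total cluster capacity consumed by one namespace's block pool.
static float blockPoolUsedPercent(long blockPoolUsedBytes, long capacityBytes) {
  // Guard against a zero/uninitialized capacity report.
  return capacityBytes <= 0 ? 0.0f : (100.0f * blockPoolUsedBytes) / capacityBytes;
}
{code}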

 adding block pool % for each namespace on federated namenode webUI
 --

 Key: HDFS-5301
 URL: https://issues.apache.org/jira/browse/HDFS-5301
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.0.5-alpha
Reporter: Siqi Li
Priority: Minor
 Attachments: HDFS-5301-v1.patch






--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5302) Use implicit schemes in jsp

2013-10-04 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-5302:


 Summary: Use implicit schemes in jsp
 Key: HDFS-5302
 URL: https://issues.apache.org/jira/browse/HDFS-5302
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai


The JSPs in HDFS call HttpConfig.getScheme() to determine which scheme (http / 
https) to put into each link. This can be done by using scheme-relative (//) 
links in the JSP instead, so picking a scheme no longer relies on 
HttpConfig.getScheme().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5302) Use implicit schemes in jsp

2013-10-04 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5302:
-

Status: Patch Available  (was: Open)

 Use implicit schemes in jsp
 ---

 Key: HDFS-5302
 URL: https://issues.apache.org/jira/browse/HDFS-5302
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5302.000.patch


 The JSPs in HDFS call HttpConfig.getScheme() to determine which scheme (http 
 / https) to put into each link. This can be done by using scheme-relative 
 (//) links in the JSP instead, so picking a scheme no longer relies on 
 HttpConfig.getScheme().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5302) Use implicit schemes in jsp

2013-10-04 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5302:
-

Attachment: HDFS-5302.000.patch

 Use implicit schemes in jsp
 ---

 Key: HDFS-5302
 URL: https://issues.apache.org/jira/browse/HDFS-5302
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5302.000.patch


 The JSPs in HDFS call HttpConfig.getScheme() to determine which scheme (http 
 / https) to put into each link. This can be done by using scheme-relative 
 (//) links in the JSP instead, so picking a scheme no longer relies on 
 HttpConfig.getScheme().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5303) A symlink within a snapshot pointing to a target outside the snapshot root can cause the snapshot contents to appear to change.

2013-10-04 Thread Chris Nauroth (JIRA)
Chris Nauroth created HDFS-5303:
---

 Summary: A symlink within a snapshot pointing to a target outside 
the snapshot root can cause the snapshot contents to appear to change.
 Key: HDFS-5303
 URL: https://issues.apache.org/jira/browse/HDFS-5303
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Chris Nauroth


A snapshot is supposed to represent the point-in-time state of a directory.  
However, if the directory contains a symlink that targets a file outside the 
snapshot root, then the snapshot contents will appear to change if someone 
changes the target file (i.e. delete or append).
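
A hedged repro sketch of that scenario (the paths and the MiniDFSCluster setup 
are illustrative; symlink creation goes through FileContext):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class SnapshotSymlinkRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
      DistributedFileSystem dfs = cluster.getFileSystem();
      FileContext fc = FileContext.getFileContext(conf);

      Path snapDir = new Path("/snapDir");
      Path outsideTarget = new Path("/outside/target");
      dfs.mkdirs(snapDir);
      dfs.create(outsideTarget).close();

      // Symlink lives inside the snapshottable directory; its target does not.
      fc.createSymlink(outsideTarget, new Path(snapDir, "link"), false);

      dfs.allowSnapshot(snapDir);
      dfs.createSnapshot(snapDir, "s1");

      // The snapshot path /snapDir/.snapshot/s1/link still resolves through
      // the *current* target, so deleting (or appending to) the target makes
      // the snapshot's apparent contents change after the fact.
      dfs.delete(outsideTarget, false);
    } finally {
      cluster.shutdown();
    }
  }
}
{code}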



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5277) hadoop fs -expunge does not work for federated namespace

2013-10-04 Thread Joep Rottinghuis (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13786626#comment-13786626
 ] 

Joep Rottinghuis commented on HDFS-5277:


Interestingly enough, there is a workaround for the expunge: passing the URL 
to the fs command.
The help (both the docs and the output when typing hdfs dfs) does not seem to 
show that additional optional argument.
Without looking at the code, users won't know about this workaround.

 hadoop fs -expunge does not work for federated namespace 
 -

 Key: HDFS-5277
 URL: https://issues.apache.org/jira/browse/HDFS-5277
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.5-alpha
Reporter: Vrushali C

 We noticed that the hadoop fs -expunge command does not work across federated 
 namespaces. It seems to look only at /user/username/.Trash instead of 
 traversing all available namespaces and expunging from each one.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5300:


   Resolution: Fixed
Fix Version/s: 2.1.2-beta
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Vinay! I've committed this to trunk, branch-2 and branch-2.1-beta.

 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 

 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Fix For: 2.1.2-beta

 Attachments: HDFS-5300.patch, HDFS-5300.patch, HDFS-5300.patch


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 {code:java}
   checkOperation(OperationCategory.WRITE);
   if (isInSafeMode()) {
     throw new SafeModeException(
         "Cannot delete snapshot for " + snapshotRoot, safeMode);
   }
   FSPermissionChecker pc = getPermissionChecker();
   checkOwner(pc, snapshotRoot);
   BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
   List<INode> removedINodes = new ChunkedArrayList<INode>();
   dir.writeLock();
 {code}
 It should check the owner only in case of permissions enabled, as it's done 
 for all other operations.
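
A minimal sketch of the proposed guard (an assumption based on the 
description; isPermissionEnabled is the flag FSNamesystem consults for other 
operations, and this is not the committed patch):
{code:java}
// Sketch: enforce snapshot-root ownership only when permissions are
// enabled, as other FSNamesystem operations do.
if (isPermissionEnabled) {
  FSPermissionChecker pc = getPermissionChecker();
  checkOwner(pc, snapshotRoot);
}
{code}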



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13786654#comment-13786654
 ] 

Hudson commented on HDFS-5300:
--

SUCCESS: Integrated in Hadoop-trunk-Commit #4540 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/4540/])
HDFS-5300. FSNameSystem#deleteSnapshot() should not check owner in case of 
permissions disabled. Contributed by Vinay. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1529294)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 

 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Fix For: 2.1.2-beta

 Attachments: HDFS-5300.patch, HDFS-5300.patch, HDFS-5300.patch


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 {code:java}
   checkOperation(OperationCategory.WRITE);
   if (isInSafeMode()) {
     throw new SafeModeException(
         "Cannot delete snapshot for " + snapshotRoot, safeMode);
   }
   FSPermissionChecker pc = getPermissionChecker();
   checkOwner(pc, snapshotRoot);
   BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
   List<INode> removedINodes = new ChunkedArrayList<INode>();
   dir.writeLock();
 {code}
 It should check the owner only in case of permissions enabled, as it's done 
 for all other operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5190) move cache pool manipulation commands to dfsadmin, add to TestHDFSCLI

2013-10-04 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13786691#comment-13786691
 ] 

Colin Patrick McCabe commented on HDFS-5190:


+1; thanks, Andrew.

 move cache pool manipulation commands to dfsadmin, add to TestHDFSCLI
 -

 Key: HDFS-5190
 URL: https://issues.apache.org/jira/browse/HDFS-5190
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
 Attachments: hdfs-5190-1.patch, hdfs-5190-2.patch


 As per the discussion in HDFS-5158, we should move the cache pool add, 
 remove, list commands into cacheadmin.  We also should write a unit test in 
 TestHDFSCLI for these commands.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5304) Expose if a block replica is cached in getFileBlockLocations

2013-10-04 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-5304:
-

 Summary: Expose if a block replica is cached in 
getFileBlockLocations
 Key: HDFS-5304
 URL: https://issues.apache.org/jira/browse/HDFS-5304
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS-4949
Reporter: Andrew Wang
Assignee: Andrew Wang


We need to expose which replicas of a block are cached so applications can 
place their tasks for memory-locality.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5190) Move cache pool related CLI commands to CacheAdmin

2013-10-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5190:
--

Summary: Move cache pool related CLI commands to CacheAdmin  (was: move 
cache pool manipulation commands to dfsadmin, add to TestHDFSCLI)

 Move cache pool related CLI commands to CacheAdmin
 --

 Key: HDFS-5190
 URL: https://issues.apache.org/jira/browse/HDFS-5190
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
 Attachments: hdfs-5190-1.patch, hdfs-5190-2.patch


 As per the discussion in HDFS-5158, we should move the cache pool add, 
 remove, list commands into cacheadmin.  We also should write a unit test in 
 TestHDFSCLI for these commands.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Resolved] (HDFS-5190) Move cache pool related CLI commands to CacheAdmin

2013-10-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-5190.
---

Resolution: Fixed

Committed to branch, thanks for the reviews, Colin.

 Move cache pool related CLI commands to CacheAdmin
 --

 Key: HDFS-5190
 URL: https://issues.apache.org/jira/browse/HDFS-5190
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
 Attachments: hdfs-5190-1.patch, hdfs-5190-2.patch


 As per the discussion in HDFS-5158, we should move the cache pool add, 
 remove, list commands into cacheadmin.  We also should write a unit test in 
 TestHDFSCLI for these commands.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5190) Move cache pool related CLI commands to CacheAdmin

2013-10-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5190:
--

Fix Version/s: HDFS-4949

 Move cache pool related CLI commands to CacheAdmin
 --

 Key: HDFS-5190
 URL: https://issues.apache.org/jira/browse/HDFS-5190
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Colin Patrick McCabe
Assignee: Andrew Wang
 Fix For: HDFS-4949

 Attachments: hdfs-5190-1.patch, hdfs-5190-2.patch


 As per the discussion in HDFS-5158, we should move the cache pool add, 
 remove, list commands into cacheadmin.  We also should write a unit test in 
 TestHDFSCLI for these commands.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5297) Fix broken hyperlinks in HDFS document

2013-10-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5297:


Attachment: HDFS-5297.patch

Attaching a patch.

 Fix broken hyperlinks in HDFS document
 --

 Key: HDFS-5297
 URL: https://issues.apache.org/jira/browse/HDFS-5297
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Akira AJISAKA
Priority: Minor
 Fix For: 3.0.0, 2.1.2-beta

 Attachments: HDFS-5297.patch


 I found a lot of broken hyperlinks in the HDFS documentation to be fixed.
 Ex.)
 In HdfsUserGuide.apt.vm, there is a broken hyperlink as below:
 {noformat}
For command usage, see {{{dfsadmin}}}.
 {noformat}
 It should be fixed to:
 {noformat}
For command usage, see 
 {{{../hadoop-common/CommandsManual.html#dfsadmin}dfsadmin}}.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5297) Fix broken hyperlinks in HDFS document

2013-10-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HDFS-5297:


   Fix Version/s: (was: 2.1.2-beta)
  (was: 3.0.0)
Assignee: Akira AJISAKA
Target Version/s: 3.0.0, 2.1.2-beta
  Status: Patch Available  (was: Open)

 Fix broken hyperlinks in HDFS document
 --

 Key: HDFS-5297
 URL: https://issues.apache.org/jira/browse/HDFS-5297
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.1.0-beta, 3.0.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HDFS-5297.patch


 I found a lot of broken hyperlinks in the HDFS documentation to be fixed.
 Ex.)
 In HdfsUserGuide.apt.vm, there is a broken hyperlink as below:
 {noformat}
For command usage, see {{{dfsadmin}}}.
 {noformat}
 It should be fixed to:
 {noformat}
For command usage, see 
 {{{../hadoop-common/CommandsManual.html#dfsadmin}dfsadmin}}.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5302) Use implicit schemes in jsp

2013-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13786719#comment-13786719
 ] 

Hadoop QA commented on HDFS-5302:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606888/HDFS-5302.000.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.server.datanode.TestDatanodeJsp

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5099//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5099//console

This message is automatically generated.

 Use implicit schemes in jsp
 ---

 Key: HDFS-5302
 URL: https://issues.apache.org/jira/browse/HDFS-5302
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5302.000.patch


 The JSPs in HDFS call HttpConfig.getScheme() to determine which scheme (http 
 / https) to put into each link. This can be done by putting "///" in the 
 JSP instead, so that picking a scheme no longer relies on 
 HttpConfig.getScheme().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5302) Use implicit schemes in jsp

2013-10-04 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5302:
-

Attachment: HDFS-5302.001.patch

 Use implicit schemes in jsp
 ---

 Key: HDFS-5302
 URL: https://issues.apache.org/jira/browse/HDFS-5302
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5302.000.patch, HDFS-5302.001.patch


 The JSPs in HDFS call HttpConfig.getScheme() to determine which scheme (http 
 / https) to put into each link. This can be done by putting "///" in the 
 JSP instead, so that picking a scheme no longer relies on 
 HttpConfig.getScheme().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Moved] (HDFS-5305) Add https support in HDFS

2013-10-04 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas moved HADOOP-10023 to HDFS-5305:


 Target Version/s:   (was: 2.1.2-beta)
Affects Version/s: (was: 2.0.2-alpha)
   2.0.2-alpha
  Key: HDFS-5305  (was: HADOOP-10023)
  Project: Hadoop HDFS  (was: Hadoop Common)

 Add https support in HDFS
 -

 Key: HDFS-5305
 URL: https://issues.apache.org/jira/browse/HDFS-5305
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.0.2-alpha
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas

 This is the HDFS part of HADOOP-10022. This will serve as the umbrella jira 
 for all the https related cleanup in HDFS.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5180) Add time taken to process the command to audit log

2013-10-04 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5180:
-

Attachment: HDFS-5180.patch

I attach a patch file.
A request whose processing time is longer than a threshold is output to the 
log. This way, we can identify requests that may behave abnormally.
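
A sketch of that approach (auditLog, cmd, ugi, thresholdMs and 
processCommand() are placeholder names, not the attached patch; 
Time.monotonicNow() is org.apache.hadoop.util.Time):
{code:java}
// Sketch: time the command and write an audit entry only when it runs
// longer than the configured threshold.
long start = Time.monotonicNow();
try {
  processCommand();  // the operation being measured (placeholder)
} finally {
  long elapsedMs = Time.monotonicNow() - start;
  if (elapsedMs > thresholdMs) {
    auditLog.info("cmd=" + cmd + " ugi=" + ugi + " elapsed=" + elapsedMs + "ms");
  }
}
{code}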

 Add time taken to process the command to audit log
 --

 Key: HDFS-5180
 URL: https://issues.apache.org/jira/browse/HDFS-5180
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
 Attachments: HDFS-5180.patch


 The command and ugi are currently output to the NameNode audit log, but the 
 processing time of the command is not.
 For example, when trouble such as a slowdown occurs in the NameNode, we must 
 check which command is the problem.
 The processing time should be added to the audit log so that abnormal signs 
 can be detected.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5302) Use implicit schemes in jsp

2013-10-04 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HDFS-5302:
-

Attachment: HDFS-5302.002.patch

Updated the unit tests.

Changed URLs in HDFS:

* Go back to DFS home in dfshealth.jsp / browsedirectory.jsp / browseblock.jsp
* Tail the file in browseblock.jsp
* Directory links in browsedirectory.jsp

 Use implicit schemes in jsp
 ---

 Key: HDFS-5302
 URL: https://issues.apache.org/jira/browse/HDFS-5302
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HDFS-5302.000.patch, HDFS-5302.001.patch, 
 HDFS-5302.002.patch


 The JSPs in HDFS call HttpConfig.getScheme() to determine which scheme (http 
 / https) to put into each link. This can be done by putting "///" in the 
 JSP instead, so that picking a scheme no longer relies on 
 HttpConfig.getScheme().



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5180) Add time taken to process the command to audit log

2013-10-04 Thread Shinichi Yamashita (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shinichi Yamashita updated HDFS-5180:
-

Target Version/s: 3.0.0
  Status: Patch Available  (was: Open)

 Add time taken to process the command to audit log
 --

 Key: HDFS-5180
 URL: https://issues.apache.org/jira/browse/HDFS-5180
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
 Attachments: HDFS-5180.patch


 The command and ugi are currently output to the NameNode audit log, but the 
 processing time of the command is not.
 For example, when trouble such as a slowdown occurs in the NameNode, we must 
 check which command is the problem.
 The processing time should be added to the audit log so that abnormal signs 
 can be detected.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Created] (HDFS-5306) Datanode https port is not available in the namenode

2013-10-04 Thread Suresh Srinivas (JIRA)
Suresh Srinivas created HDFS-5306:
-

 Summary: Datanode https port is not available in the namenode
 Key: HDFS-5306
 URL: https://issues.apache.org/jira/browse/HDFS-5306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas


To enable https access, the datanode http server https port is needed in 
namenode web pages and redirects from the namenode. This jira adds an 
additional optional field to DatanodeIDProto in hdfs.proto and the 
corresponding DatanodeID java class.
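
A sketch of the kind of protobuf change described (the field name and tag 
number below are assumptions, not the attached patch):
{noformat}
message DatanodeIDProto {
  ...existing fields...
  optional uint32 infoSecurePort = 7 [default = 0];
}
{noformat}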



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5306) Datanode https port is not available at the namenode

2013-10-04 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-5306:
--

Summary: Datanode https port is not available at the namenode  (was: 
Datanode https port is not available in the namenode)

 Datanode https port is not available at the namenode
 

 Key: HDFS-5306
 URL: https://issues.apache.org/jira/browse/HDFS-5306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-5306.patch


 To enable https access, the datanode http server https port is needed in 
 namenode web pages and redirects from the namenode. This jira adds an 
 additional optional field to DatanodeIDProto in hdfs.proto and the 
 corresponding DatanodeID java class.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5306) Datanode https port is not available at the namenode

2013-10-04 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HDFS-5306:
--

Attachment: HDFS-5306.patch

Attached patch makes the changes proposed in the description.

 Datanode https port is not available at the namenode
 

 Key: HDFS-5306
 URL: https://issues.apache.org/jira/browse/HDFS-5306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-5306.patch


 To enable https access, the datanode http server https port is needed in 
 namenode web pages and redirects from the namenode. This jira adds an 
 additional optional field to DatanodeIDProto in hdfs.proto and the 
 corresponding DatanodeID java class.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5303) A symlink within a snapshot pointing to a target outside the snapshot root can cause the snapshot contents to appear to change.

2013-10-04 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13786811#comment-13786811
 ] 

Colin Patrick McCabe commented on HDFS-5303:


I don't think this is a problem.  Symlinks are just references to things, not 
the things themselves.

Every other snapshotting filesystem (btrfs, ZFS, etc) allows snapshots to 
contain symlinks to paths outside the snapshot.  We currently do the same, and 
that should be fine.

 A symlink within a snapshot pointing to a target outside the snapshot root 
 can cause the snapshot contents to appear to change.
 ---

 Key: HDFS-5303
 URL: https://issues.apache.org/jira/browse/HDFS-5303
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Reporter: Chris Nauroth

 A snapshot is supposed to represent the point-in-time state of a directory.  
 However, if the directory contains a symlink that targets a file outside the 
 snapshot root, then the snapshot contents will appear to change if someone 
 changes the target file (e.g. deletes it or appends to it).



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5306) Datanode https port is not available at the namenode

2013-10-04 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HDFS-5306:


Status: Patch Available  (was: Open)

 Datanode https port is not available at the namenode
 

 Key: HDFS-5306
 URL: https://issues.apache.org/jira/browse/HDFS-5306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-5306.patch


 To enable https access, the datanode http server https port is needed in 
 namenode web pages and redirects from the namenode. This jira adds an 
 additional optional field to DatanodeIDProto in hdfs.proto and the 
 corresponding DatanodeID java class.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5294) DistributedFileSystem#getFileLinkStatus should not fully qualify the link target

2013-10-04 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13786825#comment-13786825
 ] 

Colin Patrick McCabe commented on HDFS-5294:


OK.  If I understand correctly, it sounds like you're proposing that 
{{getFileLinkStatus}} return symlinks with relative links as relative, rather 
than trying to qualify them.  I am +1 on this idea, but if we do it, we need to 
do it for all Filesystems, not just HDFS.
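
A made-up illustration of the mangling under discussion (paths and authority 
are hypothetical):
{noformat}
target stored at creation:            ../data/current
target returned by getFileLinkStatus: hdfs://nn.example.com:8020/user/alice/data/current
{noformat}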

 DistributedFileSystem#getFileLinkStatus should not fully qualify the link 
 target
 

 Key: HDFS-5294
 URL: https://issues.apache.org/jira/browse/HDFS-5294
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp

 The NN returns a {{FileStatus}} containing the exact link target as specified 
 by the user at creation.  However, 
 {{DistributedFileSystem#getFileLinkStatus}} explicitly overwrites the target 
 with the fully scheme-qualified path lookup.  This causes multiple issues, 
 such as:
 # Prevents clients from discerning if the target is relative or absolute
 # Mangles a target that is not intended to be a path
 # Causes incorrect resolution with multi-layered filesystems - i.e. the link 
 should be resolved relative to a higher-level fs (e.g. viewfs, chroot, 
 filtered, etc.)



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5180) Add time taken to process the command to audit log

2013-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13786828#comment-13786828
 ] 

Hadoop QA commented on HDFS-5180:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606938/HDFS-5180.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5103//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5103//console

This message is automatically generated.

 Add time taken to process the command to audit log
 --

 Key: HDFS-5180
 URL: https://issues.apache.org/jira/browse/HDFS-5180
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Affects Versions: 3.0.0
Reporter: Shinichi Yamashita
 Attachments: HDFS-5180.patch


 The command and ugi are currently output to the NameNode audit log, but the 
 processing time of the command is not.
 For example, when trouble such as a slowdown occurs in the NameNode, we must 
 check which command is the problem.
 The processing time should be added to the audit log so that abnormal signs 
 can be detected.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5297) Fix broken hyperlinks in HDFS document

2013-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13786830#comment-13786830
 ] 

Hadoop QA commented on HDFS-5297:
-

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606919/HDFS-5297.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5100//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5100//console

This message is automatically generated.

 Fix broken hyperlinks in HDFS document
 --

 Key: HDFS-5297
 URL: https://issues.apache.org/jira/browse/HDFS-5297
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HDFS-5297.patch


 I found a lot of broken hyperlinks in the HDFS documentation to be fixed.
 Ex.)
 In HdfsUserGuide.apt.vm, there is a broken hyperlink as below:
 {noformat}
For command usage, see {{{dfsadmin}}}.
 {noformat}
 It should be fixed to:
 {noformat}
For command usage, see 
 {{{../hadoop-common/CommandsManual.html#dfsadmin}dfsadmin}}.
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5267) Modification field in LightWeightHashSet and LightWeightLinkedSet shouldn't be volatile, which causes wrong expectations on thread-safe visibility

2013-10-04 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HDFS-5267:
-

Attachment: HDFS-5267.patch

Renamed the patch to the correct name, HDFS-5267.patch.

 Modification field in LightWeightHashSet and LightWeightLinkedSet shouldn't 
 be volatile, which causes wrong expectations on thread-safe visibility
 --

 Key: HDFS-5267
 URL: https://issues.apache.org/jira/browse/HDFS-5267
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Junping Du
Assignee: Junping Du
 Fix For: 2.3.0

 Attachments: HADOOP-9980.patch, HDFS-5267.patch, HDFS-5276.patch


 LightWeightGSet should have a volatile modification field (like: 
 LightWeightHashSet or LightWeightLinkedSet) that is used to detect updates 
 while iterating so they can throw a ConcurrentModificationException.
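
For reference, the fail-fast pattern the summary describes, in generic form 
(an illustration, not the Hadoop source; a plain non-volatile int is enough 
for single-threaded iteration and makes no cross-thread visibility promise):
{code:java}
import java.util.ConcurrentModificationException;
import java.util.Iterator;

// Illustration only: a one-element container whose iterator fails fast
// when the container is structurally modified during iteration.
class FailFastBox<T> implements Iterable<T> {
  private T value;
  private int modification = 0;  // bumped on every structural change

  void set(T v) { value = v; modification++; }

  @Override
  public Iterator<T> iterator() {
    return new Iterator<T>() {
      private final int expected = modification;  // snapshot at creation
      private boolean done = false;

      @Override
      public boolean hasNext() { return !done && value != null; }

      @Override
      public T next() {
        if (modification != expected) {
          throw new ConcurrentModificationException();
        }
        done = true;
        return value;
      }
    };
  }
}
{code}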



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5306) Datanode https port is not available at the namenode

2013-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13786912#comment-13786912
 ] 

Hadoop QA commented on HDFS-5306:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606951/HDFS-5306.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract
  
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5104//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5104//console

This message is automatically generated.

 Datanode https port is not available at the namenode
 

 Key: HDFS-5306
 URL: https://issues.apache.org/jira/browse/HDFS-5306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-5306.patch


 To enable https access, the datanode http server https port is needed in 
 namenode web pages and redirects from the namenode. This jira adds an 
 additional optional field to DatanodeIDProto in hdfs.proto and the 
 corresponding DatanodeID java class.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5306) Datanode https port is not available at the namenode

2013-10-04 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13786924#comment-13786924
 ] 

Haohui Mai commented on HDFS-5306:
--

It works for me, except that 
org.apache.hadoop.hdfs.protocolPB.PBHelper#convert(DatanodeID dn) needs a 
one-line patch:

{noformat}
.setInfoPort(dn.getInfoPort())
+.setInfoSecurePort(dn.getInfoSecurePort())
.setIpcPort(dn.getIpcPort()).build();
{noformat}

 Datanode https port is not available at the namenode
 

 Key: HDFS-5306
 URL: https://issues.apache.org/jira/browse/HDFS-5306
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: datanode, namenode
Reporter: Suresh Srinivas
Assignee: Suresh Srinivas
 Attachments: HDFS-5306.patch


 To enable https access, the datanode http server https port is needed in 
 namenode web pages and redirects from the namenode. This jira adds an 
 additional optional field to DatanodeIDProto in hdfs.proto and the 
 corresponding DatanodeID java class.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5300) FSNameSystem#deleteSnapshot() should not check owner in case of permissions disabled

2013-10-04 Thread Vinay (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13786932#comment-13786932
 ] 

Vinay commented on HDFS-5300:
-

Thanks Jing for the review, for updating the patch, and for committing.

 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 

 Key: HDFS-5300
 URL: https://issues.apache.org/jira/browse/HDFS-5300
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0, 2.1.0-beta
Reporter: Vinay
Assignee: Vinay
 Fix For: 2.1.2-beta

 Attachments: HDFS-5300.patch, HDFS-5300.patch, HDFS-5300.patch


 FSNameSystem#deleteSnapshot() should not check owner in case of permissions 
 disabled
 {code:java}
   checkOperation(OperationCategory.WRITE);
   if (isInSafeMode()) {
     throw new SafeModeException(
         "Cannot delete snapshot for " + snapshotRoot, safeMode);
   }
   FSPermissionChecker pc = getPermissionChecker();
   checkOwner(pc, snapshotRoot);
   BlocksMapUpdateInfo collectedBlocks = new BlocksMapUpdateInfo();
   List<INode> removedINodes = new ChunkedArrayList<INode>();
   dir.writeLock();
 {code}
 It should check the owner only in case of permissions enabled, as it's done 
 for all other operations.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Commented] (HDFS-5267) Modification field in LightWeightHashSet and LightWeightLinkedSet shouldn't be volatile, which causes wrong expectations on thread-safe visibility

2013-10-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HDFS-5267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13786935#comment-13786935
 ] 

Hadoop QA commented on HDFS-5267:
-

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12606960/HDFS-5267.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HDFS-Build/5105//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5105//console

This message is automatically generated.

 Modification field in LightWeightHashSet and LightWeightLinkedSet shouldn't 
 be volatile, which causes wrong expectations on thread-safe visibility
 --

 Key: HDFS-5267
 URL: https://issues.apache.org/jira/browse/HDFS-5267
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Junping Du
Assignee: Junping Du
 Fix For: 2.3.0

 Attachments: HADOOP-9980.patch, HDFS-5267.patch, HDFS-5276.patch


 LightWeightGSet should have a volatile modification field (like: 
 LightWeightHashSet or LightWeightLinkedSet) that is used to detect updates 
 while iterating so they can throw a ConcurrentModificationException.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Work started] (HDFS-5304) Expose if a block replica is cached in getFileBlockLocations

2013-10-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDFS-5304 started by Andrew Wang.

 Expose if a block replica is cached in getFileBlockLocations
 

 Key: HDFS-5304
 URL: https://issues.apache.org/jira/browse/HDFS-5304
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS-4949
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-5304-1.patch


 We need to expose which replicas of a block are cached so applications can 
 place their tasks for memory-locality.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


[jira] [Updated] (HDFS-5304) Expose if a block replica is cached in getFileBlockLocations

2013-10-04 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-5304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HDFS-5304:
--

Attachment: hdfs-5304-1.patch

Patch attached; it wasn't that bad. Added a boolean array to {{LocatedBlock}} 
which indicates whether the location at the same index is cached. Added new 
checks to an existing test to validate.
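
The shape of that change, sketched (the names below are stand-ins, not the 
patch; DatanodeInfoStub replaces the real DatanodeInfo so the sketch is 
self-contained):
{code:java}
// Sketch: a boolean array parallel to the replica locations, so
// isCached[i] says whether the replica at locs[i] is cached in memory.
class DatanodeInfoStub { }  // stand-in for o.a.h.hdfs.protocol.DatanodeInfo

class LocatedBlockSketch {
  private final DatanodeInfoStub[] locs;
  private final boolean[] isCached;

  LocatedBlockSketch(DatanodeInfoStub[] locs, boolean[] isCached) {
    if (locs.length != isCached.length) {
      throw new IllegalArgumentException("arrays must be parallel");
    }
    this.locs = locs;
    this.isCached = isCached;
  }

  /** Whether the replica at index i is cached. */
  boolean isCachedAt(int i) { return isCached[i]; }
}
{code}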

 Expose if a block replica is cached in getFileBlockLocations
 

 Key: HDFS-5304
 URL: https://issues.apache.org/jira/browse/HDFS-5304
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Affects Versions: HDFS-4949
Reporter: Andrew Wang
Assignee: Andrew Wang
 Attachments: hdfs-5304-1.patch


 We need to expose which replicas of a block are cached so applications can 
 place their tasks for memory-locality.



--
This message was sent by Atlassian JIRA
(v6.1#6144)