[jira] [Work logged] (HDDS-1492) Generated chunk size name too long.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1492?focusedWorklogId=277270&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277270
 ]

ASF GitHub Bot logged work on HDDS-1492:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:53
Start Date: 16/Jul/19 06:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1084: HDDS-1492. 
Generated chunk size name too long.
URL: https://github.com/apache/hadoop/pull/1084#issuecomment-511690232
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:-------|
   | 0 | reexec | 35 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 75 | Maven dependency ordering for branch |
   | +1 | mvninstall | 548 | trunk passed |
   | +1 | compile | 251 | trunk passed |
   | +1 | checkstyle | 65 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 811 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 161 | trunk passed |
   | 0 | spotbugs | 316 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 519 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | +1 | mvninstall | 466 | the patch passed |
   | +1 | compile | 271 | the patch passed |
   | +1 | javac | 271 | the patch passed |
   | +1 | checkstyle | 82 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 676 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 165 | the patch passed |
   | +1 | findbugs | 531 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 298 | hadoop-hdds in the patch failed. |
   | -1 | unit | 2260 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 58 | The patch does not generate ASF License warnings. |
   | | | 7485 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1084 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2f64705f1436 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f77d54c |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/2/testReport/ |
   | Max. process+thread count | 5387 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/client hadoop-ozone/client U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1084/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277270)
Time Spent: 50m  (was: 40m)

> Generated chunk size name too long.
> ---
>
>

[jira] [Updated] (HDFS-14646) Standby NameNode should terminate the FsImage put process immediately if the peer NN is not in the appropriate state to receive an image.

2019-07-15 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14646:
--
Status: Patch Available  (was: Open)

> Standby NameNode should terminate the FsImage put process immediately if the 
> peer NN is not in the appropriate state to receive an image.
> -
>
> Key: HDFS-14646
> URL: https://issues.apache.org/jira/browse/HDFS-14646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
> Attachments: blockedInWritingSocket.png, get1.png, get2.png, 
> largeSendQ.png
>
>
> *Problem Description:*
>  In the multi-NameNode scenario, when an SNN uploads an FsImage, it puts 
> the image to all other NNs (whether the peer NN is an ANN or not). Even if 
> the peer NN immediately replies with an error (such as 
> TransferResult.NOT_ACTIVE_NAMENODE_FAILURE, 
> TransferResult.OLD_TRANSACTION_ID_FAILURE, etc.), the local SNN does not 
> terminate the put process immediately; it puts the FsImage completely to 
> the peer NN and does not read the peer NN's reply until the put is 
> completed.
> In a relatively large HDFS cluster, the size of the FsImage can often 
> reach about 30 GB. In this case, such an invalid put brings two problems:
>  # It wastes time and bandwidth.
>  # Since the ImageServlet of the peer NN no longer receives the FsImage, 
> the socket Send-Q of the local SNN grows very large, and the ImageUpload 
> thread blocks in the socket write for a long time, eventually leaving the 
> local StandbyCheckpointer thread blocked for several hours.
> *An example is as follows:*
>  In the following figures, the local NN 100.76.3.234 is an SNN, the peer 
> NN 100.76.3.170 is another SNN, and 8080 is the NN HTTP port. When the 
> local SNN starts to put the FsImage, 170 replies with a 
> NOT_ACTIVE_NAMENODE_FAILURE error immediately. In this case, the local SNN 
> should terminate the put immediately, but in fact it has to wait until the 
> image has been completely put to the peer NN, and only then can it read 
> the response.
>  # At this time, since the ImageServlet of the peer NN no longer receives 
> the FsImage, the socket Send-Q of the local SNN is very large: 
> !largeSendQ.png!
>  # Moreover, the local SNN's ImageUpload thread is blocked in the socket 
> write for a long time: !blockedInWritingSocket.png!
>  # Eventually, the StandbyCheckpointer thread of the local SNN waits for 
> the execution result of the ImageUpload thread, blocking in Future.get(), 
> and the blocking time may be as long as several hours: !get1.png! 
> !get2.png!
>
> *Solution:*
>  When the local SNN plans to put an FsImage to the peer NN, it first needs 
> to test whether it really needs to put the image at this time. The test 
> process is:
>  # Establish an HTTP connection with the peer NN, send the put request, 
> and then immediately read the response (this is the key point). If the 
> peer NN replies with any of the following errors 
> (TransferResult.AUTHENTICATION_FAILURE, 
> TransferResult.NOT_ACTIVE_NAMENODE_FAILURE, 
> TransferResult.OLD_TRANSACTION_ID_FAILURE), terminate the put process 
> immediately.
>  # If the peer NN is indeed the Active NameNode AND it is now in the 
> appropriate state to receive an image, it replies with an HTTP 410 
> response (HttpServletResponse.SC_GONE, which is 
> TransferResult.UNEXPECTED_FAILURE). At this point, the local SNN can 
> really begin to put the image.
> *Note:*
>  This problem needs to be reproduced in a large cluster (the size of the 
> FsImage in our cluster is about 30 GB), so a unit test is difficult to 
> write. In our cluster, after the modification, the problem has been solved 
> and there is no longer a large Send-Q backlog.
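
A minimal sketch of the probe described above, assuming plain java.net; the 
class and method names are illustrative, and treating 410 as the go-ahead 
follows the issue description rather than the actual patch:

{code:java}
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public final class ImageUploadProbe {

  // Send the put request and read the response code *before* streaming any
  // FsImage bytes. A peer that is not the ANN (or that already holds a newer
  // txid) answers with an error immediately, so the local SNN can terminate
  // the put without shipping ~30 GB.
  static boolean peerReadyToReceiveImage(URL putImageUrl) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) putImageUrl.openConnection();
    conn.setRequestMethod("PUT");
    conn.setConnectTimeout(60_000);
    conn.setReadTimeout(60_000);
    try {
      int status = conn.getResponseCode();
      // Per the description, a ready peer replies 410 (SC_GONE) to the probe;
      // authentication / not-active / old-txid errors mean "do not upload".
      return status == HttpURLConnection.HTTP_GONE;
    } finally {
      conn.disconnect();
    }
  }
}
{code}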



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277245&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277245
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303746991
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
 ##
 @@ -365,4 +369,30 @@ public static boolean checkIfAclBitIsSet(ACLType acl, 
BitSet bitset) {
 || bitset.get(ALL.ordinal()))
 && !bitset.get(NONE.ordinal()));
   }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in input list with
+   * scope changed to ACCESS.
+   * @param acls
 
 Review comment:
   whitespace:end of line
   
 



Issue Time Tracking
---

Worklog Id: (was: 277245)
Time Spent: 15h 10m  (was: 15h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 15h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277247&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277247
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303747003
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
 ##
 @@ -280,9 +283,10 @@ public void testCheckAccessForPrefix() throws Exception {
 .setStoreType(OZONE)
 .build();
 
-OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl);
 
 Review comment:
   whitespace:end of line
   
 



Issue Time Tracking
---

Worklog Id: (was: 277247)
Time Spent: 15.5h  (was: 15h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 15.5h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277240&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277240
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303746939
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -141,13 +153,27 @@ public static OzoneAcl parseAcl(String acl) throws 
IllegalArgumentException {
 ACLIdentityType aclType = ACLIdentityType.valueOf(parts[0].toUpperCase());
 BitSet acls = new BitSet(ACLType.getNoOfAcls());
 
-for (char ch : parts[2].toCharArray()) {
+String bits = parts[2];
+
+// Default acl scope is ACCESS.
+AclScope aclScope = AclScope.ACCESS;
+
 
 Review comment:
   whitespace:end of line
   
 



Issue Time Tracking
---

Worklog Id: (was: 277240)
Time Spent: 14h 20m  (was: 14h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 14h 20m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277251&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277251
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303747037
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -455,8 +463,9 @@ public OpenKeySession openKey(OmKeyArgs args) throws 
IOException {
 
 FileEncryptionInfo encInfo;
 metadataManager.getLock().acquireLock(BUCKET_LOCK, volumeName, bucketName);
+OmBucketInfo bucketInfo;
 
 Review comment:
   whitespace:end of line
   
 



Issue Time Tracking
---

Worklog Id: (was: 277251)
Time Spent: 16h 10m  (was: 16h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 16h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277241&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277241
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303746964
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights,
+AclScope.valueOf(protoAcl.getAclScope().name()));
+  }
+
+  /**
+   * Helper function to convert a proto message of type {@link OzoneAclInfo}
+   * to {@link OzoneAcl} with acl scope of type ACCESS.
+   * 
 
 Review comment:
   whitespace:end of line
   
 



Issue Time Tracking
---

Worklog Id: (was: 277241)
Time Spent: 14.5h  (was: 14h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 14.5h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277244&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277244
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303746986
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -116,9 +135,14 @@ public void setAcls(List<OzoneAcl> acls) throws 
OMException {
   // Add a new acl to the map
   public void removeAcl(OzoneAcl acl) throws OMException {
 Objects.requireNonNull(acl, "Acl should not be null.");
+if (acl.getAclScope().equals(OzoneAcl.AclScope.DEFAULT)) {
+  defaultAclList.remove(OzoneAcl.toProtobuf(acl));
+  return;
+}
+
 
 Review comment:
   whitespace:end of line
   
 



Issue Time Tracking
---

Worklog Id: (was: 277244)
Time Spent: 15h  (was: 14h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 15h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277246&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277246
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303746996
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
 ##
 @@ -365,4 +369,30 @@ public static boolean checkIfAclBitIsSet(ACLType acl, 
BitSet bitset) {
 || bitset.get(ALL.ordinal()))
 && !bitset.get(NONE.ordinal()));
   }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in input list with
+   * scope changed to ACCESS.
+   * @param acls
+   * 
+   * @return list of default Acls.
+   * */
+  public static Collection<OzoneAclInfo> getDefaultAclsProto(
+  List<OzoneAcl> acls) {
+return acls.stream().filter(a -> a.getAclScope() == DEFAULT)
+.map(OzoneAcl::toProtobufWithAccessType).collect(Collectors.toList());
+  }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in input list with
+   * scope changed to ACCESS.
+   * @param acls
+   *
+   * @return list of default Acls.
+   * */
+  public static Collection<OzoneAcl> getDefaultAcls(List<OzoneAcl> acls) {
+return acls.stream().filter(a -> a.getAclScope() == DEFAULT)
+.collect(Collectors.toList());
+  }
 
 Review comment:
   whitespace:end of line
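
As a rough usage sketch of the two helpers above, on a simplified stand-in 
for OzoneAcl (the real class carries a rights BitSet and an identity type):

{code:java}
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Simplified stand-in types; only the filtering pattern mirrors the patch.
class DefaultAclFilterDemo {

  enum Scope { ACCESS, DEFAULT }

  static class Acl {
    final String name;
    final Scope scope;
    Acl(String name, Scope scope) { this.name = name; this.scope = scope; }
  }

  // Mirrors getDefaultAcls: keep only the DEFAULT-scoped entries.
  static List<Acl> defaultAcls(List<Acl> acls) {
    return acls.stream()
        .filter(a -> a.scope == Scope.DEFAULT)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<Acl> acls = Arrays.asList(
        new Acl("bilbo", Scope.ACCESS), new Acl("frodo", Scope.DEFAULT));
    System.out.println(defaultAcls(acls).get(0).name);  // frodo
  }
}
{code}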
   
 



Issue Time Tracking
---

Worklog Id: (was: 277246)
Time Spent: 15h 20m  (was: 15h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 15h 20m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277250&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277250
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303747034
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -920,7 +956,7 @@ public OmMultipartInfo 
applyInitiateMultipartUpload(OmKeyArgs keyArgs,
 String keyName = keyArgs.getKeyName();
 
 metadataManager.getLock().acquireLock(BUCKET_LOCK, volumeName, bucketName);
-validateS3Bucket(volumeName, bucketName);
+OmBucketInfo bucketInfo = validateS3Bucket(volumeName, bucketName);
 
 Review comment:
   whitespace:end of line
   
 



Issue Time Tracking
---

Worklog Id: (was: 277250)
Time Spent: 16h  (was: 15h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 16h
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277239&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277239
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303746946
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -141,13 +153,27 @@ public static OzoneAcl parseAcl(String acl) throws 
IllegalArgumentException {
 ACLIdentityType aclType = ACLIdentityType.valueOf(parts[0].toUpperCase());
 BitSet acls = new BitSet(ACLType.getNoOfAcls());
 
-for (char ch : parts[2].toCharArray()) {
+String bits = parts[2];
+
+// Default acl scope is ACCESS.
+AclScope aclScope = AclScope.ACCESS;
+
+// Check if acl string contains scope info.
+if(parts[2].matches(ACL_SCOPE_REGEX)) {
+  int indexOfOpenBracket = parts[2].indexOf("[");
+  bits = parts[2].substring(0, indexOfOpenBracket);
+  aclScope = AclScope.valueOf(parts[2].substring(indexOfOpenBracket + 1,
+  parts[2].indexOf("]")));
+}
+
 
 Review comment:
   whitespace:end of line
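
For context, a self-contained sketch of the rights[SCOPE] suffix format this 
hunk parses; the demo class and sample ACL strings are illustrative, not 
part of the patch:

{code:java}
// Illustrative demo of the rights[SCOPE] suffix handled above.
public final class AclScopeParseDemo {

  enum AclScope { ACCESS, DEFAULT }

  // Rights string with a trailing scope tag, e.g. "rw[DEFAULT]".
  private static final String ACL_SCOPE_REGEX = ".*\\[(ACCESS|DEFAULT)\\]";

  public static void main(String[] args) {
    for (String acl : new String[] {"user:bilbo:rw", "user:bilbo:rw[DEFAULT]"}) {
      String bits = acl.split(":")[2];
      AclScope scope = AclScope.ACCESS;          // default scope is ACCESS
      if (bits.matches(ACL_SCOPE_REGEX)) {
        int open = bits.indexOf('[');
        scope = AclScope.valueOf(bits.substring(open + 1, bits.indexOf(']')));
        bits = bits.substring(0, open);
      }
      System.out.println(acl + " -> bits=" + bits + ", scope=" + scope);
    }
  }
}
{code}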
   
 



Issue Time Tracking
---

Worklog Id: (was: 277239)
Time Spent: 14h 10m  (was: 14h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 14h 10m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277243&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277243
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303746978
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -49,53 +52,70 @@
 @SuppressWarnings("ProtocolBufferOrdinal")
 public class OmOzoneAclMap {
   // per Acl Type user:rights map
-  private ArrayList<Map<String, BitSet>> aclMaps;
+  private ArrayList<Map<String, BitSet>> accessAclMap;
+  private List<OzoneAclInfo> defaultAclList;
 
   OmOzoneAclMap() {
-aclMaps = new ArrayList<>();
+accessAclMap = new ArrayList<>();
+defaultAclList = new ArrayList<>();
 for (OzoneAclType aclType : OzoneAclType.values()) {
-  aclMaps.add(aclType.ordinal(), new HashMap<>());
+  accessAclMap.add(aclType.ordinal(), new HashMap<>());
 }
   }
 
-  private Map<String, BitSet> getMap(OzoneAclType type) {
-return aclMaps.get(type.ordinal());
+  private Map<String, BitSet> getAccessAclMap(OzoneAclType type) {
+return accessAclMap.get(type.ordinal());
   }
 
   // For a given acl type and user, get the stored acl
   private BitSet getAcl(OzoneAclType type, String user) {
-return getMap(type).get(user);
+return getAccessAclMap(type).get(user);
   }
 
  public List<OzoneAcl> getAcl() {
List<OzoneAcl> acls = new ArrayList<>();
 
+acls.addAll(getAccessAcls());
+acls.addAll(defaultAclList.stream().map(a ->
+OzoneAcl.fromProtobuf(a)).collect(Collectors.toList()));
+return acls;
+  }
+
+  private Collection<OzoneAcl> getAccessAcls() {
+List<OzoneAcl> acls = new ArrayList<>();
 for (OzoneAclType type : OzoneAclType.values()) {
-  aclMaps.get(type.ordinal()).entrySet().stream().
+  accessAclMap.get(type.ordinal()).entrySet().stream().
   forEach(entry -> acls.add(new OzoneAcl(ACLIdentityType.
-  valueOf(type.name()), entry.getKey(), entry.getValue())));
+  valueOf(type.name()), entry.getKey(), entry.getValue(),
+  OzoneAcl.AclScope.ACCESS)));
 }
 return acls;
   }
 
   // Add a new acl to the map
   public void addAcl(OzoneAcl acl) throws OMException {
 Objects.requireNonNull(acl, "Acl should not be null.");
+if (acl.getAclScope().equals(OzoneAcl.AclScope.DEFAULT)) {
+  defaultAclList.add(OzoneAcl.toProtobuf(acl));
+  return;
+}
+
 
 Review comment:
   whitespace:end of line
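
A hedged sketch of the design choice in this hunk, on simplified stand-in 
types rather than the actual OmOzoneAclMap: ACCESS acls stay in the 
per-identity-type map, while DEFAULT acls go to a separate flat list so they 
can later be handed down to newly created children.

{code:java}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-in for the split storage in the diff; names and types
// are illustrative only.
class AclMapSketch {

  enum Scope { ACCESS, DEFAULT }

  private final Map<String, String> accessAcls = new HashMap<>(); // name -> rights
  private final List<String> defaultAcls = new ArrayList<>();     // "name:rights"

  void addAcl(String name, String rights, Scope scope) {
    if (scope == Scope.DEFAULT) {
      defaultAcls.add(name + ":" + rights);  // kept apart for inheritance
      return;
    }
    accessAcls.put(name, rights);
  }

  // Like getAcl() in the diff, the merged view spans both stores.
  List<String> allAcls() {
    List<String> out = new ArrayList<>(defaultAcls);
    accessAcls.forEach((n, r) -> out.add(n + ":" + r));
    return out;
  }
}
{code}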
   
 



Issue Time Tracking
---

Worklog Id: (was: 277243)
Time Spent: 14h 50m  (was: 14h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 14h 50m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277242&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277242
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303746970
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights,
+AclScope.valueOf(protoAcl.getAclScope().name()));
+  }
+
+  /**
+   * Helper function to convert a proto message of type {@link OzoneAclInfo}
+   * to {@link OzoneAcl} with acl scope of type ACCESS.
+   * 
+   * @param protoAcl
+   * @return OzoneAcl
+   * */
+  public static OzoneAcl fromProtobufWithAccessType(OzoneAclInfo protoAcl) {
+BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
+return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
+protoAcl.getName(), aclRights, AclScope.ACCESS);
   }
 
+  /**
+   * Helper function to convert an {@link OzoneAcl} to proto message of type
+   * {@link OzoneAclInfo} with acl scope of type ACCESS.
+   *
+   * @param acl
+   * @return OzoneAclInfo
+   * */
+  public static OzoneAclInfo toProtobufWithAccessType(OzoneAcl acl) {
+OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
+.setName(acl.getName())
+.setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.ACCESS)
+.setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
+return builder.build();
+  }
+
+  public AclScope getAclScope() {
+return aclScope;
+  }
+  
 
 Review comment:
   whitespace:end of line
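
A hedged illustration of what the *WithAccessType variants above enable: a 
child object can inherit its parent's DEFAULT acls as its own ACCESS acls 
(simplified types; inheritForChild is an illustrative helper, not patch 
code):

{code:java}
import java.util.List;
import java.util.stream.Collectors;

// Illustrative only: the DEFAULT -> ACCESS rescoping that
// toProtobufWithAccessType supports, on simplified stand-in types.
class AclInheritDemo {

  enum Scope { ACCESS, DEFAULT }

  static class Acl {
    final String name;
    final Scope scope;
    Acl(String name, Scope scope) { this.name = name; this.scope = scope; }
  }

  static List<Acl> inheritForChild(List<Acl> parentAcls) {
    return parentAcls.stream()
        .filter(a -> a.scope == Scope.DEFAULT)    // only DEFAULT acls flow down
        .map(a -> new Acl(a.name, Scope.ACCESS))  // ...rescoped to ACCESS
        .collect(Collectors.toList());
  }
}
{code}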
   
 



Issue Time Tracking
---

Worklog Id: (was: 277242)
Time Spent: 14h 40m  (was: 14.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 14h 40m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277248&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277248
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303747009
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
 ##
 @@ -242,9 +243,10 @@ public void testCheckAccessForVolume() throws Exception {
   @Test
   public void testCheckAccessForBucket() throws Exception {
 
-OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl);
+OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl, 
+ACCESS);
 
 Review comment:
   whitespace:end of line
   
 



Issue Time Tracking
---

Worklog Id: (was: 277248)
Time Spent: 15h 40m  (was: 15.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 15h 40m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277249&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277249
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303747027
 
 

 ##
 File path: 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/storage/DistributedStorageHandler.java
 ##
 @@ -71,6 +70,8 @@
 import java.util.Objects;
 import java.util.concurrent.TimeUnit;
 
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
 
 Review comment:
   whitespace:end of line
   
 



Issue Time Tracking
---

Worklog Id: (was: 277249)
Time Spent: 15h 50m  (was: 15h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 15h 50m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277252&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277252
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 06:37
Start Date: 16/Jul/19 06:37
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303747047
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -472,7 +481,8 @@ public OpenKeySession openKey(OmKeyArgs args) throws 
IOException {
 if (keyInfo == null) {
   // the key does not exist, create a new object, the new blocks are the
   // version 0
-  keyInfo = createKeyInfo(args, locations, factor, type, size, encInfo);
+  keyInfo = createKeyInfo(args, locations, factor, type, size, 
 
 Review comment:
   whitespace:end of line
   
 



Issue Time Tracking
---

Worklog Id: (was: 277252)
Time Spent: 16h 20m  (was: 16h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 16h 20m
>  Remaining Estimate: 0h
>
> Add dAcls for volume, bucket, keys and prefix






[jira] [Updated] (HDFS-14646) Standby NameNode should terminate the FsImage put process immediately if the peer NN is not in the appropriate state to receive an image.

2019-07-15 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14646:
--
Status: Open  (was: Patch Available)

> Standby NameNode should terminate the FsImage put process immediately if the 
> peer NN is not in the appropriate state to receive an image.
> -
>
> Key: HDFS-14646
> URL: https://issues.apache.org/jira/browse/HDFS-14646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
> Attachments: blockedInWritingSocket.png, get1.png, get2.png, 
> largeSendQ.png
>
>






[jira] [Commented] (HDFS-14646) Standby NameNode should terminate the FsImage put process immediately if the peer NN is not in the appropriate state to receive an image.

2019-07-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885871#comment-16885871
 ] 

Hadoop QA commented on HDFS-14646:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} HDFS-14646 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-14646 |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27234/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Standby NameNode should terminate the FsImage put process immediately if the 
> peer NN is not in the appropriate state to receive an image.
> -
>
> Key: HDFS-14646
> URL: https://issues.apache.org/jira/browse/HDFS-14646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
> Attachments: blockedInWritingSocket.png, get1.png, get2.png, 
> largeSendQ.png
>
>




[jira] [Updated] (HDFS-14646) Standby NameNode should terminate the FsImage put process immediately if the peer NN is not in the appropriate state to receive an image.

2019-07-15 Thread Xudong Cao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14646?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xudong Cao updated HDFS-14646:
--
Status: Patch Available  (was: Open)

> Standby NameNode should terminate the FsImage put process immediately if the 
> peer NN is not in the appropriate state to receive an image.
> -
>
> Key: HDFS-14646
> URL: https://issues.apache.org/jira/browse/HDFS-14646
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.1.2
>Reporter: Xudong Cao
>Assignee: Xudong Cao
>Priority: Major
> Attachments: blockedInWritingSocket.png, get1.png, get2.png, 
> largeSendQ.png
>
>






[jira] [Created] (HDDS-1808) TestRatisPipelineCreateAndDestory#testPipelineCreationOnNodeRestart times out

2019-07-15 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1808:
-

 Summary: 
TestRatisPipelineCreateAndDestory#testPipelineCreationOnNodeRestart times out
 Key: HDDS-1808
 URL: https://issues.apache.org/jira/browse/HDDS-1808
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


{code:java}
Error Message
test timed out after 3 milliseconds
Stacktrace
java.lang.Exception: test timed out after 3 milliseconds
at java.lang.Thread.sleep(Native Method)
at 
org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:382)
at 
org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory.waitForPipelines(TestRatisPipelineCreateAndDestory.java:126)
at 
org.apache.hadoop.hdds.scm.pipeline.TestRatisPipelineCreateAndDestory.testPipelineCreationOnNodeRestart(TestRatisPipelineCreateAndDestory.java:121)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}
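The timeout fires inside GenericTestUtils.waitFor, which polls a condition at a 
fixed interval until a deadline and then throws TimeoutException. A hedged 
illustration of that polling shape, assuming the 
waitFor(check, checkEveryMillis, waitForMillis) overload from the stack trace; 
the condition body and the numbers are stand-ins, not the real test:

{code:java}
import java.util.concurrent.TimeoutException;
import org.apache.hadoop.test.GenericTestUtils;

final class WaitForSketch {
  static int currentPipelineCount() { return 3; } // hypothetical stand-in

  static void waitForPipelines() throws TimeoutException, InterruptedException {
    // Re-check every 100 ms, give up after 30 s. The failing test blocks in
    // exactly this kind of wait while the restarted datanode's pipelines
    // come back, and the JUnit-level timeout fires first.
    GenericTestUtils.waitFor(() -> currentPipelineCount() >= 3, 100, 30_000);
  }
}
{code}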






[jira] [Created] (HDDS-1807) TestWatchForCommit#testWatchForCommitForRetryfailure fails as a result of no leader election for an extended period of time

2019-07-15 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1807:
-

 Summary: TestWatchForCommit#testWatchForCommitForRetryfailure 
fails as a result of no leader election for an extended period of time 
 Key: HDDS-1807
 URL: https://issues.apache.org/jira/browse/HDDS-1807
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


{code:java}
org.apache.ratis.protocol.RaftRetryFailureException: Failed 
RaftClientRequest:client-6C83DC527A4C->73bdd98d-b003-44ff-a45b-bd12dfd50509@group-75C642DF7AE9,
 cid=55, seq=1*, RW, 
org.apache.hadoop.hdds.scm.XceiverClientRatis$$Lambda$407/213850519@1a8843a2 
for 10 attempts with RetryLimited(maxAttempts=10, sleepTime=1000ms)
Stacktrace
java.util.concurrent.ExecutionException: 
org.apache.ratis.protocol.RaftRetryFailureException: Failed 
RaftClientRequest:client-6C83DC527A4C->73bdd98d-b003-44ff-a45b-bd12dfd50509@group-75C642DF7AE9,
 cid=55, seq=1*, RW, 
org.apache.hadoop.hdds.scm.XceiverClientRatis$$Lambda$407/213850519@1a8843a2 
for 10 attempts with RetryLimited(maxAttempts=10, sleepTime=1000ms)
at 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
at 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
at 
org.apache.hadoop.ozone.client.rpc.TestWatchForCommit.testWatchForCommitForRetryfailure(TestWatchForCommit.java:345)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}
The client here retries 10 times with a delay of 1 sec between each retry, but 
leader election could not complete within that window.
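A hedged sketch of that RetryLimited policy, assuming the usual Ratis 
client-builder API shape (RetryPolicies, TimeDuration); illustrative only, not 
code from this test. The log excerpt below it shows the repeated 
NotLeaderException replies.

{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.ratis.client.RaftClient;
import org.apache.ratis.conf.RaftProperties;
import org.apache.ratis.protocol.RaftGroup;
import org.apache.ratis.retry.RetryPolicies;
import org.apache.ratis.retry.RetryPolicy;
import org.apache.ratis.util.TimeDuration;

final class RetryPolicySketch {
  static RaftClient newClient(RaftGroup group) {
    // 10 attempts, 1 s apart: ~10 s of patience in total, so an election
    // that stays leaderless longer surfaces as RaftRetryFailureException.
    RetryPolicy tenTries = RetryPolicies.retryUpToMaximumCountWithFixedSleep(
        10, TimeDuration.valueOf(1000, TimeUnit.MILLISECONDS));
    return RaftClient.newBuilder()
        .setProperties(new RaftProperties())
        .setRaftGroup(group)
        .setRetryPolicy(tenTries)
        .build();
  }
}
{code}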
{code:java}
2019-07-12 19:30:46,451 INFO  client.GrpcClientProtocolClient 
(GrpcClientProtocolClient.java:onNext(255)) - 
client-6C83DC527A4C->5931fd83-b899-480e-b15a-ecb8e7f7dd46: receive 
RaftClientReply:client-6C83DC527A4C->5931fd83-b899-480e-b15a-ecb8e7f7dd46@group-75C642DF7AE9,
 cid=55, FAILED org.apache.ratis.protocol.NotLeaderException: Server 
5931fd83-b899-480e-b15a-ecb8e7f7dd46 is not the leader (null). Request must be 
sent to leader., logIndex=0, commits[5931fd83-b899-480e-b15a-ecb8e7f7dd46:c-1]
2019-07-12 19:30:47,469 INFO  client.GrpcClientProtocolClient 
(GrpcClientProtocolClient.java:onNext(255)) - 
client-6C83DC527A4C->d83929f1-c4db-499d-b67f-ad7f10dd7dde: receive 
RaftClientReply:client-6C83DC527A4C->d83929f1-c4db-499d-b67f-ad7f10dd7dde@group-75C642DF7AE9,
 cid=55, FAILED org.apache.ratis.protocol.NotLeaderException: Server 
d83929f1-c4db-499d-b67f-ad7f10dd7dde is not the leader (null). Request must be 
sent to leader., logIndex=0, commits[d83929f1-c4db-499d-b67f-ad7f10dd7dde:c-1]
2019-07-12 19:30:48,504 INFO  client.GrpcClientProtocolClient 
(GrpcClientProtocolClient.java:onNext(255)) - 
clien

[jira] [Created] (HDDS-1806) TestDataValidateWithSafeByteOperations tests are failing

2019-07-15 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1806:
-

 Summary: TestDataValidateWithSafeByteOperations tests are failing
 Key: HDDS-1806
 URL: https://issues.apache.org/jira/browse/HDDS-1806
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


 
{code:java}
Unexpected Storage Container Exception: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: ContainerID 3 does not exist

Stacktrace
java.io.IOException: Unexpected Storage Container Exception: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: ContainerID 3 does not exist
    at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.setIoException(BlockOutputStream.java:549)
    at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:540)
    at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$2(BlockOutputStream.java:615)
    at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:602)
    at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:577)
    at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:442)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException: ContainerID 3 does not exist
    at org.apache.hadoop.hdds.scm.storage.ContainerProtocolCalls.validateContainerResponse(ContainerProtocolCalls.java:536)
    at org.apache.hadoop.hdds.scm.storage.BlockOutputStream.validateResponse(BlockOutputStream.java:537)
    ... 7 more
{code}
The error propagated to the client is erroneous. The container creation failed 
as a result of a disk-full condition, but that cause was never propagated to 
the client.

 






[jira] [Assigned] (HDDS-1493) Download and Import Container replicator fails.

2019-07-15 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1493:
-

Assignee: Hrishikesh Gadre  (was: Nanda kumar)

[~hgadre] Would you be able to take a look at this?

> Download and Import Container replicator fails.
> ---
>
> Key: HDDS-1493
> URL: https://issues.apache.org/jira/browse/HDDS-1493
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Hrishikesh Gadre
>Priority: Blocker
> Attachments: ozone.log
>
>
> While running batch jobs (16 threads writing a lot of 10MB+ files), the 
> following error is seen in the SCM logs.
> {code}
> ERROR  - Can't import the downloaded container data id=317
> {code}
> It is unclear from the logs why this happens. Needs more investigation to 
> find the root cause.






[jira] [Assigned] (HDDS-1615) ManagedChannel references are being leaked in ReplicationSupervisor.java

2019-07-15 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1615:
-

Assignee: Hrishikesh Gadre

> ManagedChannel references are being leaked in ReplicationSupervisor.java
> 
>
> Key: HDDS-1615
> URL: https://issues.apache.org/jira/browse/HDDS-1615
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Hrishikesh Gadre
>Priority: Major
>  Labels: MiniOzoneChaosCluster
>
> ManagedChannel references are being leaked in ReplicationSupervisor.java
> {code}
> May 30, 2019 8:10:56 AM 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference
>  cleanQueue
> SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=1495, 
> target=192.168.0.3:49868} was not shutdown properly!!! ~*~*~*
> Make sure to call shutdown()/shutdownNow() and wait until 
> awaitTermination() returns true.
> java.lang.RuntimeException: ManagedChannel allocation site
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.(ManagedChannelOrphanWrapper.java:103)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.(ManagedChannelOrphanWrapper.java:53)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.ManagedChannelOrphanWrapper.(ManagedChannelOrphanWrapper.java:44)
> at 
> org.apache.ratis.thirdparty.io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:411)
> at 
> org.apache.hadoop.ozone.container.replication.GrpcReplicationClient.(GrpcReplicationClient.java:65)
> at 
> org.apache.hadoop.ozone.container.replication.SimpleContainerDownloader.getContainerDataFromReplicas(SimpleContainerDownloader.java:87)
> at 
> org.apache.hadoop.ozone.container.replication.DownloadAndImportReplicator.replicate(DownloadAndImportReplicator.java:118)
> at 
> org.apache.hadoop.ozone.container.replication.ReplicationSupervisor$TaskRunner.run(ReplicationSupervisor.java:115)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> {code}
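The SEVERE message above spells out the required discipline: every 
ManagedChannel must be shut down and awaited when the replication task 
finishes. A minimal sketch of that lifecycle, assuming the unshaded io.grpc 
API (Ozone actually uses the ratis-thirdparty shaded variant); this is 
illustrative, not the actual GrpcReplicationClient fix:

{code:java}
import io.grpc.ManagedChannel;
import io.grpc.netty.NettyChannelBuilder;
import java.util.concurrent.TimeUnit;

final class ChannelLifecycleSketch {
  static void downloadOverChannel(String host, int port)
      throws InterruptedException {
    ManagedChannel channel =
        NettyChannelBuilder.forAddress(host, port).usePlaintext().build();
    try {
      // ... issue the container-download RPCs over the channel here ...
    } finally {
      channel.shutdown();                            // begin graceful shutdown
      if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
        channel.shutdownNow();                       // force-close stragglers
      }
    }
  }
}
{code}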






[jira] [Updated] (HDFS-14655) SBN : Namenode crashes if one of The JN is down

2019-07-15 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14655:

Summary: SBN : Namenode crashes if one of The JN is down  (was: SBN : 
Namenode crashes if one of The jN is down)

> SBN : Namenode crashes if one of The JN is down
> ---
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Harshakiran Reddy
>Priority: Major
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}






[jira] [Assigned] (HDDS-1749) Ozone Client should randomize the list of nodes in pipeline for reads

2019-07-15 Thread Siddharth Wagle (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siddharth Wagle reassigned HDDS-1749:
-

Assignee: Aravindan Vijayan

> Ozone Client should randomize the list of nodes in pipeline for reads
> -
>
> Key: HDDS-1749
> URL: https://issues.apache.org/jira/browse/HDDS-1749
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Affects Versions: 0.4.0
>Reporter: Mukul Kumar Singh
>Assignee: Aravindan Vijayan
>Priority: Major
>
> Currently the list of nodes returned by SCM is static and is returned in 
> the same order to all clients. Ideally these should be sorted by the 
> network topology and then returned to the client.
> However, even when network topology is not available, the SCM/client should 
> randomly shuffle the nodes before choosing the replicas to connect to.
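A minimal illustration of that shuffle in plain Java (String stands in for 
DatanodeDetails here; this is a sketch, not the actual Ozone client change):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

final class ReadNodeShuffleSketch {
  // Shuffle a copy of the pipeline's node list per read so the replicas
  // share the read load instead of all clients hitting the first node.
  static List<String> shuffledForRead(List<String> pipelineNodes) {
    List<String> nodes = new ArrayList<>(pipelineNodes);
    Collections.shuffle(nodes);  // random order instead of SCM's fixed order
    return nodes;                // connect to nodes.get(0) first
  }
}
{code}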






[jira] [Commented] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885851#comment-16885851
 ] 

Hudson commented on HDDS-1736:
--

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16923 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16923/])
HDDS-1736. Cleanup 2phase old HA code for Key requests. (#1038) (github: rev 
395cb3cfd703320c96855325dadb37a19fbcfc54)
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerRequestHandler.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/audit/OMAction.java
* (edit) 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerHAProtocol.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManager.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
* (edit) hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OmUtils.java
* (edit) hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/ratis/OzoneManagerStateMachine.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMAllocateBlockRequest.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OMMetrics.java
* (edit) 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java


> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, createKey, 
> etc. This cleans up the old code changes that were added as part of HDDS-1250 
> and HDDS-1262 for allocateBlock and openKey.






[jira] [Commented] (HDFS-14655) SBN : Namenode crashes if one of The jN is down

2019-07-15 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885850#comment-16885850
 ] 

Ayush Saxena commented on HDFS-14655:
-

For Observer reads, the edit log tail period (dfs.ha.tail-edits.period) is set 
to 0. There are three JournalNodes; the tailing process succeeds and returns as 
soon as it fetches responses from a majority of the JNs, i.e. from 2 of them. 
The thread for the third (down) JN keeps retrying 10 times on ConnectException. 
Since the tailing period is so low, new tail rounds start faster than a stuck 
thread can finish, so similar retrying stuck threads pile up, leading to the 
OOM.
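A hedged sketch of that failure mode in plain java.util.concurrent, with a 
hypothetical fetchJournaledEdits stand-in; not the actual QuorumJournalManager 
code:

{code:java}
import java.util.List;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class TailPileUpSketch {
  // Unbounded pool, like one logger thread per JN call; stuck tasks pin threads.
  static final ExecutorService POOL = Executors.newCachedThreadPool();

  // One tail round: ask all three JNs, return as soon as any two answer.
  static void tailOnce(List<String> jns) throws Exception {
    CompletionService<Long> quorum = new ExecutorCompletionService<>(POOL);
    for (String jn : jns) {
      quorum.submit(() -> fetchJournaledEdits(jn)); // down JN blocks ~10 s retrying
    }
    for (int done = 0; done < 2; done++) {
      quorum.take().get(); // returns once the 2 healthy JNs have replied
    }
    // The third task is never cancelled. With dfs.ha.tail-edits.period=0 the
    // next round starts immediately, so blocked threads accumulate until
    // Thread.start0 fails with OutOfMemoryError.
  }

  static long fetchJournaledEdits(String jn) { /* hypothetical blocking RPC */ return 0L; }
}
{code}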

> SBN : Namenode crashes if one of The jN is down
> ---
>
> Key: HDFS-14655
> URL: https://issues.apache.org/jira/browse/HDFS-14655
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Harshakiran Reddy
>Priority: Major
>
> {noformat}
> 2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
> XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 
> 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
> sleepTime=1000 MILLISECONDS) | Client.java:975
> 2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
> while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
> java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Thread.java:717)
>   at 
> java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
>   at 
> java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
>   at 
> com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
>   at 
> com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
>   at 
> org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
>   at 
> org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
>   at 
> org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
> 2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
> java.lang.OutOfMemoryError: unable to create new native thread | 
> ExitUtil.java:210
> {noformat}






[jira] [Created] (HDFS-14655) SBN : Namenode crashes if one of The jN is down

2019-07-15 Thread Harshakiran Reddy (JIRA)
Harshakiran Reddy created HDFS-14655:


 Summary: SBN : Namenode crashes if one of The jN is down
 Key: HDFS-14655
 URL: https://issues.apache.org/jira/browse/HDFS-14655
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Harshakiran Reddy



{noformat}
2019-07-04 17:35:54,064 | INFO  | Logger channel (from parallel executor) to 
XXX/XXX | Retrying connect to server: XXX/XXX. Already tried 9 
time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, 
sleepTime=1000 MILLISECONDS) | Client.java:975
2019-07-04 17:35:54,087 | FATAL | Edit log tailer | Unknown error encountered 
while tailing edits. Shutting down standby NN. | EditLogTailer.java:474
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:717)
at 
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:957)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1378)
at 
com.google.common.util.concurrent.MoreExecutors$ListeningDecorator.execute(MoreExecutors.java:440)
at 
com.google.common.util.concurrent.AbstractListeningExecutorService.submit(AbstractListeningExecutorService.java:56)
at 
org.apache.hadoop.hdfs.qjournal.client.IPCLoggerChannel.getJournaledEdits(IPCLoggerChannel.java:565)
at 
org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.getJournaledEdits(AsyncLoggerSet.java:272)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectRpcInputStreams(QuorumJournalManager.java:533)
at 
org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:508)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:275)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1681)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1714)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:307)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:460)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$300(EditLogTailer.java:410)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:483)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
2019-07-04 17:35:54,112 | INFO  | Edit log tailer | Exiting with status 1: 
java.lang.OutOfMemoryError: unable to create new native thread | 
ExitUtil.java:210
{noformat}







[jira] [Updated] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1805:
-
Description: 
Implement S3 Initiate MPU request to use OM Cache, double buffer.

 

This Jira will add the changes to implement S3 bucket operations. HA and 
non-HA will have different code paths for now, but once all requests are 
implemented there will be a single code path.

  was:
Implement S3 Bucket write requests to use OM Cache, double buffer.

 

In this Jira will add the changes to implement S3 bucket operations, and 
HA/Non-HA will have a different code path, but once all requests are 
implemented will have a single code path.


> Implement S3 Initiate MPU request to use Cache and DoubleBuffer
> ---
>
> Key: HDDS-1805
> URL: https://issues.apache.org/jira/browse/HDDS-1805
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Implement S3 Initiate MPU request to use OM Cache, double buffer.
>  
> This Jira will add the changes to implement S3 bucket operations. HA and 
> non-HA will have different code paths for now, but once all requests are 
> implemented there will be a single code path.






[jira] [Created] (HDDS-1805) Implement S3 Initiate MPU request to use Cache and DoubleBuffer

2019-07-15 Thread Bharat Viswanadham (JIRA)
Bharat Viswanadham created HDDS-1805:


 Summary: Implement S3 Initiate MPU request to use Cache and 
DoubleBuffer
 Key: HDDS-1805
 URL: https://issues.apache.org/jira/browse/HDDS-1805
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: Ozone Manager
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Implement S3 Bucket write requests to use OM Cache, double buffer.

 

This Jira will add the changes to implement S3 bucket operations. HA and 
non-HA will have different code paths for now, but once all requests are 
implemented there will be a single code path.






[jira] [Work logged] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?focusedWorklogId=277209&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277209
 ]

ASF GitHub Bot logged work on HDDS-1736:


Author: ASF GitHub Bot
Created on: 16/Jul/19 04:54
Start Date: 16/Jul/19 04:54
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1038: HDDS-1736. 
Cleanup 2phase old HA code for Key requests.
URL: https://github.com/apache/hadoop/pull/1038#issuecomment-511663998
 
 
   Thank You @arp7 for the review.
   I will commit this to the trunk. Ran S3 secure acceptance test suite 
locally, tests are passing. Test failures are not related to this patch.
 



Issue Time Tracking
---

Worklog Id: (was: 277209)
Time Spent: 1.5h  (was: 1h 20m)

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, createKey, 
> etc. This cleans up the old code changes that were added as part of HDDS-1250 
> and HDDS-1262 for allocateBlock and openKey.






[jira] [Updated] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread Bharat Viswanadham (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-1736:
-
   Resolution: Fixed
Fix Version/s: 0.5.0
   Status: Resolved  (was: Patch Available)

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, createKey, 
> etc. This cleans up the old code changes that were added as part of HDDS-1250 
> and HDDS-1262 for allocateBlock and openKey.






[jira] [Work logged] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?focusedWorklogId=277208&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277208
 ]

ASF GitHub Bot logged work on HDDS-1736:


Author: ASF GitHub Bot
Created on: 16/Jul/19 04:52
Start Date: 16/Jul/19 04:52
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #1038: 
HDDS-1736. Cleanup 2phase old HA code for Key requests.
URL: https://github.com/apache/hadoop/pull/1038
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 277208)
Time Spent: 1h 20m  (was: 1h 10m)

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, createKey, 
> etc. This cleans up the old code changes that were added as part of HDDS-1250 
> and HDDS-1262 for allocateBlock and openKey.






[jira] [Work logged] (HDDS-1492) Generated chunk size name too long.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1492?focusedWorklogId=277206&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277206
 ]

ASF GitHub Bot logged work on HDDS-1492:


Author: ASF GitHub Bot
Created on: 16/Jul/19 04:51
Start Date: 16/Jul/19 04:51
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #1084: HDDS-1492. 
Generated chunk size name too long.
URL: https://github.com/apache/hadoop/pull/1084#issuecomment-511663920
 
 
   The updated patch addresses the checkstyle issues. I have also verified the 
name of the chunk file in the datanode.
 



Issue Time Tracking
---

Worklog Id: (was: 277206)
Time Spent: 40m  (was: 0.5h)

> Generated chunk size name too long.
> ---
>
> Key: HDDS-1492
> URL: https://issues.apache.org/jira/browse/HDDS-1492
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Shashikant Banerjee
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The following exception is seen in the SCM logs intermittently. 
> {code}
> java.lang.RuntimeException: file name 
> 'chunks/2a54b2a153f4a9c5da5f44e2c6f97c60_stream_9c6ac565-e2d4-469c-bd5c-47922a35e798_chunk_10.tmp.2.23115'
>  is too long ( > 100 bytes)
> {code}
> We may have to limit the name of the chunk file to 100 bytes.
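A rough sketch of one way to cap the generated name (illustrative only, not 
the actual HDDS-1492 patch): keep the unique trailing part of the name and 
replace the oversized prefix with a short digest. Chunk names are ASCII, so 
the byte count equals the character count in practice.

{code:java}
import java.nio.charset.StandardCharsets;

final class ChunkNameCapSketch {
  static String capAt100Bytes(String name) {
    if (name.getBytes(StandardCharsets.UTF_8).length <= 100) {
      return name;  // already short enough
    }
    // Keep the unique tail (stream id, chunk index, tmp suffix) and prepend
    // a short digest of the full name: 8 hex chars + "_" + an 80-char tail
    // stays well under 100 bytes.
    String tail = name.substring(Math.max(0, name.length() - 80));
    String digest = Integer.toHexString(name.hashCode());
    return digest + "_" + tail;
  }
}
{code}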






[jira] [Work logged] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?focusedWorklogId=277207&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277207
 ]

ASF GitHub Bot logged work on HDDS-1736:


Author: ASF GitHub Bot
Created on: 16/Jul/19 04:51
Start Date: 16/Jul/19 04:51
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on issue #1038: HDDS-1736. 
Cleanup 2phase old HA code for Key requests.
URL: https://github.com/apache/hadoop/pull/1038#issuecomment-511663998
 
 
   Thank You @arp7 for the review.
   I will commit this to the trunk.
 



Issue Time Tracking
---

Worklog Id: (was: 277207)
Time Spent: 1h 10m  (was: 1h)

> Cleanup 2phase old HA code for Key requests.
> 
>
> Key: HDDS-1736
> URL: https://issues.apache.org/jira/browse/HDDS-1736
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> HDDS-1638 brought in HA code for key operations like allocateBlock, createKey, 
> etc. This cleans up the old code changes that were added as part of HDDS-1250 
> and HDDS-1262 for allocateBlock and openKey.






[jira] [Work logged] (HDDS-1492) Generated chunk size name too long.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1492?focusedWorklogId=277204&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277204
 ]

ASF GitHub Bot logged work on HDDS-1492:


Author: ASF GitHub Bot
Created on: 16/Jul/19 04:49
Start Date: 16/Jul/19 04:49
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on issue #1084: HDDS-1492. 
Generated chunk size name too long.
URL: https://github.com/apache/hadoop/pull/1084#issuecomment-511663557
 
 
   The updated patch addresses the checkstyle issues. I have also verified the 
name of the chunk file in the datanode.
 



Issue Time Tracking
---

Worklog Id: (was: 277204)
Time Spent: 0.5h  (was: 20m)

> Generated chunk size name too long.
> ---
>
> Key: HDDS-1492
> URL: https://issues.apache.org/jira/browse/HDDS-1492
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Aravindan Vijayan
>Assignee: Shashikant Banerjee
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The following exception is seen in the SCM logs intermittently. 
> {code}
> java.lang.RuntimeException: file name 
> 'chunks/2a54b2a153f4a9c5da5f44e2c6f97c60_stream_9c6ac565-e2d4-469c-bd5c-47922a35e798_chunk_10.tmp.2.23115'
>  is too long ( > 100 bytes)
> {code}
> We may have to limit the name of the chunk to 100 bytes.






[jira] [Commented] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-07-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885841#comment-16885841
 ] 

Hadoop QA commented on HDFS-14652:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 12s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 16m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m  2s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 3 new + 107 unchanged - 1 fixed = 110 total (was 108) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 16s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 58s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:bdbca0e |
| JIRA Issue | HDFS-14652 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974790/HDFS-14652-002.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux e265cdf169da 4.4.0-139-generic #165-Ubuntu SMP Wed Oct 24 
10:58:50 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f77d54c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27233/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27233/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/27233/testReport/ |
| Max. process+thread count | 1464 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hado

[jira] [Created] (HDDS-1804) TestCloseContainerHandlingByClient#testBlockWrites fails intermittently

2019-07-15 Thread Shashikant Banerjee (JIRA)
Shashikant Banerjee created HDDS-1804:
-

 Summary: TestCloseContainerHandlingByClient#testBlockWrites fails 
intermittently
 Key: HDDS-1804
 URL: https://issues.apache.org/jira/browse/HDDS-1804
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Affects Versions: 0.5.0
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee
 Fix For: 0.5.0


The test fails intermittently as reported here:

[https://builds.apache.org/job/hadoop-multibranch/job/PR-1082/1/testReport/org.apache.hadoop.ozone.client.rpc/TestCloseContainerHandlingByClient/testBlockWrites/]
{code:java}
java.lang.IllegalArgumentException
at 
com.google.common.base.Preconditions.checkArgument(Preconditions.java:72)
at 
org.apache.hadoop.hdds.scm.XceiverClientManager.acquireClient(XceiverClientManager.java:150)
at 
org.apache.hadoop.hdds.scm.XceiverClientManager.acquireClientForReadData(XceiverClientManager.java:143)
at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.getChunkInfos(BlockInputStream.java:154)
at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.initialize(BlockInputStream.java:118)
at 
org.apache.hadoop.hdds.scm.storage.BlockInputStream.read(BlockInputStream.java:222)
at 
org.apache.hadoop.ozone.client.io.KeyInputStream.read(KeyInputStream.java:171)
at 
org.apache.hadoop.ozone.client.io.OzoneInputStream.read(OzoneInputStream.java:47)
at java.io.InputStream.read(InputStream.java:101)
at 
org.apache.hadoop.ozone.container.ContainerTestHelper.validateData(ContainerTestHelper.java:709)
at 
org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient.validateData(TestCloseContainerHandlingByClient.java:401)
at 
org.apache.hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient.testBlockWrites(TestCloseContainerHandlingByClient.java:471)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}






[jira] [Work logged] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?focusedWorklogId=277186&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277186
 ]

ASF GitHub Bot logged work on HDDS-1803:


Author: ASF GitHub Bot
Created on: 16/Jul/19 04:12
Start Date: 16/Jul/19 04:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1102: HDDS-1803. 
shellcheck.sh does not work on Mac
URL: https://github.com/apache/hadoop/pull/1102#issuecomment-511657290
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 71 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 492 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 843 | branch has no errors when building and testing 
our client artifacts. |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 444 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | shellcheck | 1 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 739 | patch has no errors when building and testing 
our client artifacts. |
   ||| _ Other Tests _ |
   | +1 | unit | 115 | hadoop-hdds in the patch passed. |
   | +1 | unit | 193 | hadoop-ozone in the patch passed. |
   | +1 | asflicense | 44 | The patch does not generate ASF License warnings. |
   | | | 3141 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1102/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1102 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux 773ae44f6140 4.15.0-48-generic #51-Ubuntu SMP Wed Apr 3 
08:28:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / f77d54c |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1102/1/testReport/ |
   | Max. process+thread count | 307 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1102/1/console |
   | versions | git=2.7.4 maven=3.3.9 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 



Issue Time Tracking
---

Worklog Id: (was: 277186)
Time Spent: 0.5h  (was: 20m)

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}






[jira] [Updated] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1803:

Affects Version/s: 0.4.1

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.4.1
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}






[jira] [Updated] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Doroszlai, Attila updated HDDS-1803:

Status: Patch Available  (was: In Progress)

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}






[jira] [Commented] (HDFS-14547) DirectoryWithQuotaFeature.quota costs additional memory even the storage type quota is not set.

2019-07-15 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885822#comment-16885822
 ] 

Hadoop QA commented on HDFS-14547:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
42s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.9 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
19s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} branch-2.9 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
49s{color} | {color:green} branch-2.9 passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
0s{color} | {color:green} branch-2.9 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
14s{color} | {color:green} branch-2.9 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} branch-2.9 passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
9s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_212 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 75m 23s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 20s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.qjournal.server.TestJournalNodeRespectsBindHostKeys |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.7 Server=18.09.7 Image:yetus/hadoop:c3439fff6be |
| JIRA Issue | HDFS-14547 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12974786/HDFS-14547-branch-2.9.003.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux bfa19505701b 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2.9 / 330e5c0 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_212 |
| Multi-JDK versions |  /us

[jira] [Work logged] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?focusedWorklogId=277170&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277170
 ]

ASF GitHub Bot logged work on HDDS-1803:


Author: ASF GitHub Bot
Created on: 16/Jul/19 03:20
Start Date: 16/Jul/19 03:20
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on issue #1102: HDDS-1803. 
shellcheck.sh does not work on Mac
URL: https://github.com/apache/hadoop/pull/1102#issuecomment-511648883
 
 
   /label ozone
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277170)
Time Spent: 20m  (was: 10m)

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?focusedWorklogId=277169&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277169
 ]

ASF GitHub Bot logged work on HDDS-1803:


Author: ASF GitHub Bot
Created on: 16/Jul/19 03:18
Start Date: 16/Jul/19 03:18
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1102: HDDS-1803. 
shellcheck.sh does not work on Mac
URL: https://github.com/apache/hadoop/pull/1102
 
 
   ## What changes were proposed in this pull request?
   
* Filter for file permission on Mac.
* Merge two separate `find` calls to avoid overwriting output (and
eliminate code duplication). A sketch of the combined call follows the
test output below.
   
   https://issues.apache.org/jira/browse/HDDS-1803
   
   ## How was this patch tested?
   
   ```
   $ hadoop-ozone/dev-support/checks/shellcheck.sh | wc
     133     600    6065
   
   $ wc target/shell-problems.txt
     133     600    6065 target/shell-problems.txt
   ```
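   For illustration, a minimal portable sketch of the approach described
   above, assuming the script gathers shellcheck findings for both source
   trees into $OUTPUT_FILE; paths and flags here are illustrative, not the
   exact script:
   
   ```
   #!/usr/bin/env bash
   # BSD find on macOS rejects -executable; the symbolic -perm -u+x test is
   # accepted by both BSD and GNU find. A single find call over both trees
   # also keeps the second tree's results from overwriting the first's.
   OUTPUT_FILE=target/shell-problems.txt
   mkdir -p "$(dirname "$OUTPUT_FILE")"
   find hadoop-hdds hadoop-ozone -type f -name '*.sh' -perm -u+x \
     -exec shellcheck {} + > "$OUTPUT_FILE"
   ```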
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277169)
Time Spent: 10m
Remaining Estimate: 0h

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-1803:
-
Labels: pull-request-available  (was: )

> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>  Labels: pull-request-available
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work started] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread Doroszlai, Attila (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HDDS-1803 started by Doroszlai, Attila.
---
> shellcheck.sh does not work on Mac
> --
>
> Key: HDDS-1803
> URL: https://issues.apache.org/jira/browse/HDDS-1803
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Doroszlai, Attila
>Assignee: Doroszlai, Attila
>Priority: Minor
>
> # {{shellcheck.sh}} does not work on Mac
> {code}
> find: -executable: unknown primary or operator
> {code}
> # {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
> {{hadoop-hdds}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1736) Cleanup 2phase old HA code for Key requests.

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1736?focusedWorklogId=277168&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277168
 ]

ASF GitHub Bot logged work on HDDS-1736:


Author: ASF GitHub Bot
Created on: 16/Jul/19 03:11
Start Date: 16/Jul/19 03:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1038: HDDS-1736. 
Cleanup 2phase old HA code for Key requests.
URL: https://github.com/apache/hadoop/pull/1038#issuecomment-511647235
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 40 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -1 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 49 | Maven dependency ordering for branch |
   | +1 | mvninstall | 499 | trunk passed |
   | +1 | compile | 267 | trunk passed |
   | +1 | checkstyle | 79 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 883 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 167 | trunk passed |
   | 0 | spotbugs | 312 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 507 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 26 | Maven dependency ordering for patch |
   | +1 | mvninstall | 444 | the patch passed |
   | +1 | compile | 273 | the patch passed |
   | +1 | cc | 273 | the patch passed |
   | +1 | javac | 273 | the patch passed |
   | +1 | checkstyle | 85 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 677 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 166 | the patch passed |
   | +1 | findbugs | 531 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 290 | hadoop-hdds in the patch failed. |
   | -1 | unit | 1641 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 6859 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdds.scm.container.TestReplicationManager |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestCommitWatcher |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1038 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux 9c00d285b1db 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ef66e49 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/2/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/2/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/2/testReport/ |
   | Max. process+thread count | 5341 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/ozone-manager U: 
hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1038/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklo

[jira] [Updated] (HDFS-14652) HealthMonitor connection retry times should be configurable

2019-07-15 Thread Chen Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Zhang updated HDFS-14652:
--
Attachment: HDFS-14652-002.patch

> HealthMonitor connection retry times should be configurable
> ---
>
> Key: HDFS-14652
> URL: https://issues.apache.org/jira/browse/HDFS-14652
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Chen Zhang
>Assignee: Chen Zhang
>Priority: Major
> Attachments: HDFS-14652-001.patch, HDFS-14652-002.patch
>
>
> On our production HDFS cluster, a burst of client requests filled the TCP 
> kernel queue on the NameNode's host. Since "net.ipv4.tcp_syn_retries" is set 
> to 1 in our environment, after 3 seconds the ZooKeeper HealthMonitor got a 
> connection error like this:
> {code:java}
> WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to 
> monitor health of NameNode at nn_host_name/ip_address:port: Call From 
> zkfc_host_name/ip to nn_host_name:port failed on connection exception: 
> java.net.ConnectException: Connection timed out; For more details see: 
> http://wiki.apache.org/hadoop/ConnectionRefused
> {code}
> This error caused a failover and affected the availability of that cluster; 
> we fixed the issue by raising the kernel parameter net.ipv4.tcp_syn_retries 
> to 6.
> But while working on this issue, we found that the connection retry count 
> (ipc.client.connect.max.retries) of the health monitor is hard-coded as 1. 
> It should be configurable: if we don't want the health monitor to be so 
> sensitive, we could then tune its behavior through this setting.
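A minimal sketch of what the configurable retry could look like, assuming the
health monitor derives the Configuration it hands to the RPC proxy; the
ha.health-monitor.connect.max.retries key name is a hypothetical placeholder,
while ipc.client.connect.max.retries is the existing IPC key named above:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.CommonConfigurationKeysPublic;

// Hedged sketch, not the actual patch: derive the configuration used for
// the health-monitor RPC proxy with a configurable retry count instead of
// the hard-coded 1.
class HealthMonitorConfSketch {
  static Configuration monitorConf(Configuration conf) {
    Configuration copy = new Configuration(conf);
    // Hypothetical key; defaults to the current hard-coded value of 1.
    int retries = conf.getInt("ha.health-monitor.connect.max.retries", 1);
    copy.setInt(
        CommonConfigurationKeysPublic.IPC_CLIENT_CONNECT_MAX_RETRIES_KEY,
        retries);
    return copy;
  }
}
{code}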



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1803) shellcheck.sh does not work on Mac

2019-07-15 Thread Doroszlai, Attila (JIRA)
Doroszlai, Attila created HDDS-1803:
---

 Summary: shellcheck.sh does not work on Mac
 Key: HDDS-1803
 URL: https://issues.apache.org/jira/browse/HDDS-1803
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Doroszlai, Attila
Assignee: Doroszlai, Attila


# {{shellcheck.sh}} does not work on Mac
{code}
find: -executable: unknown primary or operator
{code}
# {{$OUTPUT_FILE}} only contains problems from {{hadoop-ozone}}, not from 
{{hadoop-hdds}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14642) processMisReplicatedBlocks does not return correct processed count

2019-07-15 Thread Ayush Saxena (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885798#comment-16885798
 ] 

Ayush Saxena commented on HDFS-14642:
-

Committed to trunk.
Thanks Everyone!!!

> processMisReplicatedBlocks does not return correct processed count
> --
>
> Key: HDFS-14642
> URL: https://issues.apache.org/jira/browse/HDFS-14642
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14642.001.patch
>
>
> HDFS-14053 introduced a method "processMisReplicatedBlocks" to the 
> blockManager, and it is used by fsck to schedule mis-replicated blocks for 
> replication.
> The method should return the number of blocks it processed, but it always 
> returns zero, as "processed" is never incremented in the method.
> It should also drop and re-take the write lock every "numBlocksPerIteration" 
> blocks, but since processed is never incremented, it never releases the 
> write lock, so the write lock can end up held for a long time.
> {code:java}
> public int processMisReplicatedBlocks(List<BlockInfo> blocks) {
>   int processed = 0;
>   Iterator<BlockInfo> iter = blocks.iterator();
>   try {
> while (isPopulatingReplQueues() && namesystem.isRunning()
> && !Thread.currentThread().isInterrupted()
> && iter.hasNext()) {
>   int limit = processed + numBlocksPerIteration;
>   namesystem.writeLockInterruptibly();
>   try {
> while (iter.hasNext() && processed < limit) {
>   BlockInfo blk = iter.next();
>   MisReplicationResult r = processMisReplicatedBlock(blk);
>   LOG.debug("BLOCK* processMisReplicatedBlocks: " +
>   "Re-scanned block {}, result is {}", blk, r);
> }
>   } finally {
> namesystem.writeUnlock();
>   }
> }
>   } catch (InterruptedException ex) {
> LOG.info("Caught InterruptedException while scheduling replication work" +
> " for mis-replicated blocks");
> Thread.currentThread().interrupt();
>   }
>   return processed;
> }{code}
> Due to this, fsck causes a warning to be logged in the NN for every 
> mis-replicated file it schedules replication for, as it checks the processed 
> count:
> {code:java}
> 2019-07-10 15:46:14,790 WARN namenode.NameNode: Fsck: Block manager is able 
> to process only 0 mis-replicated blocks (Total count : 1 ) for path /...{code}
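A minimal sketch of the fix this description implies (not necessarily the
committed patch): incrementing processed inside the inner loop both makes the
returned count meaningful and lets the outer loop drop and re-take the write
lock every numBlocksPerIteration blocks.

{code:java}
while (iter.hasNext() && processed < limit) {
  BlockInfo blk = iter.next();
  MisReplicationResult r = processMisReplicatedBlock(blk);
  // Previously missing: without this, the count stays 0 and the write
  // lock is never released between batches.
  processed++;
  LOG.debug("BLOCK* processMisReplicatedBlocks: " +
      "Re-scanned block {}, result is {}", blk, r);
}
{code}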



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14642) processMisReplicatedBlocks does not return correct processed count

2019-07-15 Thread Ayush Saxena (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14642:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

> processMisReplicatedBlocks does not return correct processed count
> --
>
> Key: HDFS-14642
> URL: https://issues.apache.org/jira/browse/HDFS-14642
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14642.001.patch
>
>
> HDFS-14053 introduced a method "processMisReplicatedBlocks" to the 
> blockManager, and it is used by fsck to schedule mis-replicated blocks for 
> replication.
> The method should return the number of blocks it processed, but it always 
> returns zero, as "processed" is never incremented in the method.
> It should also drop and re-take the write lock every "numBlocksPerIteration" 
> blocks, but since processed is never incremented, it never releases the 
> write lock, so the write lock can end up held for a long time.
> {code:java}
> public int processMisReplicatedBlocks(List<BlockInfo> blocks) {
>   int processed = 0;
>   Iterator<BlockInfo> iter = blocks.iterator();
>   try {
> while (isPopulatingReplQueues() && namesystem.isRunning()
> && !Thread.currentThread().isInterrupted()
> && iter.hasNext()) {
>   int limit = processed + numBlocksPerIteration;
>   namesystem.writeLockInterruptibly();
>   try {
> while (iter.hasNext() && processed < limit) {
>   BlockInfo blk = iter.next();
>   MisReplicationResult r = processMisReplicatedBlock(blk);
>   LOG.debug("BLOCK* processMisReplicatedBlocks: " +
>   "Re-scanned block {}, result is {}", blk, r);
> }
>   } finally {
> namesystem.writeUnlock();
>   }
> }
>   } catch (InterruptedException ex) {
> LOG.info("Caught InterruptedException while scheduling replication work" +
> " for mis-replicated blocks");
> Thread.currentThread().interrupt();
>   }
>   return processed;
> }{code}
> Due to this, fsck causes a warning to be logged in the NN for every 
> mis-replicated file it schedules replication for, as it checks the processed 
> count:
> {code:java}
> 2019-07-10 15:46:14,790 WARN namenode.NameNode: Fsck: Block manager is able 
> to process only 0 mis-replicated blocks (Total count : 1 ) for path /...{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14642) processMisReplicatedBlocks does not return correct processed count

2019-07-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885793#comment-16885793
 ] 

Hudson commented on HDFS-14642:
---

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #16922 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/16922/])
HDFS-14642. processMisReplicatedBlocks does not return correct processed 
(ayushsaxena: rev f77d54c24343e6ca7c438d9db431cef14c3ae77b)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestBlocksWithNotEnoughRacks.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java


> processMisReplicatedBlocks does not return correct processed count
> --
>
> Key: HDFS-14642
> URL: https://issues.apache.org/jira/browse/HDFS-14642
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.2.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-14642.001.patch
>
>
> HDFS-14053 introduced a method "processMisReplicatedBlocks" to the 
> blockManager, and it is used by fsck to schedule mis-replicated blocks for 
> replication.
> The method should return the number of blocks it processed, but it always 
> returns zero, as "processed" is never incremented in the method.
> It should also drop and re-take the write lock every "numBlocksPerIteration" 
> blocks, but since processed is never incremented, it never releases the 
> write lock, so the write lock can end up held for a long time.
> {code:java}
> public int processMisReplicatedBlocks(List<BlockInfo> blocks) {
>   int processed = 0;
>   Iterator<BlockInfo> iter = blocks.iterator();
>   try {
> while (isPopulatingReplQueues() && namesystem.isRunning()
> && !Thread.currentThread().isInterrupted()
> && iter.hasNext()) {
>   int limit = processed + numBlocksPerIteration;
>   namesystem.writeLockInterruptibly();
>   try {
> while (iter.hasNext() && processed < limit) {
>   BlockInfo blk = iter.next();
>   MisReplicationResult r = processMisReplicatedBlock(blk);
>   LOG.debug("BLOCK* processMisReplicatedBlocks: " +
>   "Re-scanned block {}, result is {}", blk, r);
> }
>   } finally {
> namesystem.writeUnlock();
>   }
> }
>   } catch (InterruptedException ex) {
> LOG.info("Caught InterruptedException while scheduling replication work" +
> " for mis-replicated blocks");
> Thread.currentThread().interrupt();
>   }
>   return processed;
> }{code}
> Due to this, fsck causes a warning to be logged in the NN for every 
> mis-replicated file it schedules replication for, as it checks the processed 
> count:
> {code:java}
> 2019-07-10 15:46:14,790 WARN namenode.NameNode: Fsck: Block manager is able 
> to process only 0 mis-replicated blocks (Total count : 1 ) for path /...{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277156&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277156
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:49
Start Date: 16/Jul/19 02:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709384
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2279,7 +2332,42 @@ public void testNativeAclsForKey() throws Exception {
 .setStoreType(OzoneObj.StoreType.OZONE)
 .build();
 
-validateOzoneAcl(ozObj);
+// Validates access acls.
+validateOzoneAccessAcl(ozObj);
+
+// Check default acls inherited from bucket.
+OzoneObj buckObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setBucketName(bucketName)
+.setKeyName(key1)
+.setResType(OzoneObj.ResourceType.BUCKET)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+
+validateDefaultAcls(buckObj, ozObj, null, bucket);
+
+// Check default acls inherited from prefix.
+OzoneObj prefixObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setBucketName(bucketName)
+.setKeyName(key1)
+.setPrefixName("dir1/")
+.setResType(OzoneObj.ResourceType.PREFIX)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+store.setAcl(prefixObj, getAclList(new OzoneConfiguration()));
+// Prefix should inherit DEFAULT acl from bucket.
+
+List<OzoneAcl> acls = store.getAcl(prefixObj);
+assertTrue("Current acls:" + StringUtils.join(",", acls),
+acls.contains(inheritedUserAcl));
+assertTrue("Current acls:" + StringUtils.join(",", acls),
+acls.contains(inheritedGroupAcl));
+// Remove inherited acls from prefix.
+assertTrue(store.removeAcl(prefixObj, inheritedUserAcl));
+assertTrue(store.removeAcl(prefixObj, inheritedGroupAcl));
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277156)
Time Spent: 14h  (was: 13h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 14h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix
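For illustration, a minimal sketch of the scoped-acl model the HDDS-1544
review diffs rely on; the four-argument OzoneAcl constructor and the
ACCESS/DEFAULT scopes come from the diffs, while the class and the
user/group values here are hypothetical:

{code:java}
import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
import static org.apache.hadoop.ozone.OzoneAcl.AclScope.DEFAULT;

import org.apache.hadoop.ozone.OzoneAcl;
import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLIdentityType;
import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer.ACLType;

class DefaultAclSketch {
  // ACCESS scope: the acl applies only to the object it is set on.
  static OzoneAcl accessAcl() {
    return new OzoneAcl(ACLIdentityType.USER, "testuser", ACLType.ALL, ACCESS);
  }

  // DEFAULT scope: set on a bucket or prefix and copied onto keys created
  // under it; the nearest prefix's DEFAULT acls win over the bucket's.
  static OzoneAcl inheritableAcl() {
    return new OzoneAcl(ACLIdentityType.GROUP, "testgroup", ACLType.READ,
        DEFAULT);
  }
}
{code}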



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277146&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277146
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709405
 
 

 ##
 File path: 
hadoop-ozone/objectstore-service/src/main/java/org/apache/hadoop/ozone/web/storage/DistributedStorageHandler.java
 ##
 @@ -71,6 +70,8 @@
 import java.util.Objects;
 import java.util.concurrent.TimeUnit;
 
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277146)
Time Spent: 12h 20m  (was: 12h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 12h 20m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277150&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277150
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709426
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -983,6 +1019,44 @@ public OmMultipartInfo 
applyInitiateMultipartUpload(OmKeyArgs keyArgs,
 }
   }
 
+  private List<OzoneAclInfo> getAclsForKey(OmKeyArgs keyArgs, 
+  OmVolumeArgs volArgs, OmBucketInfo bucketInfo) {
+List<OzoneAclInfo> acls = new ArrayList<>(keyArgs.getAcls().size());
+
+keyArgs.getAcls().stream().map(OzoneAcl::toProtobuf).
+collect(Collectors.toList());
+
+// Inherit DEFAULT acls from prefix.
+boolean prefixParentFound = false;
+if(prefixManager != null) {
+  List<OmPrefixInfo> prefixList = prefixManager.getLongestPrefixPath(
+  OZONE_URI_DELIMITER +
+  keyArgs.getVolumeName() + OZONE_URI_DELIMITER +
+  keyArgs.getBucketName() + OZONE_URI_DELIMITER +
+  keyArgs.getKeyName());
+
+  if(prefixList.size() > 0) {
+// Add all acls from direct parent to key.
+OmPrefixInfo prefixInfo = prefixList.get(prefixList.size() - 1);
+if(prefixInfo  != null) {
+  acls.addAll(OzoneUtils.getDefaultAclsProto(prefixInfo.getAcls()));
+  prefixParentFound = true;
+}
+  }
+}
+
+// Inherit DEFAULT acls from bucket only if DEFAULT acls for 
+// prefix are not set.
+if (!prefixParentFound && bucketInfo != null) {
+  acls.addAll(bucketInfo.getAcls().stream().filter(a -> a.getAclScope()
+  .equals(OzoneAcl.AclScope.DEFAULT))
+  .map(OzoneAcl::toProtobufWithAccessType)
+  .collect(Collectors.toList()));
+}
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277150)
Time Spent: 13h  (was: 12h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 13h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277152&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277152
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709437
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -472,7 +481,8 @@ public OpenKeySession openKey(OmKeyArgs args) throws 
IOException {
 if (keyInfo == null) {
   // the key does not exist, create a new object, the new blocks are the
   // version 0
-  keyInfo = createKeyInfo(args, locations, factor, type, size, encInfo);
+  keyInfo = createKeyInfo(args, locations, factor, type, size, 
+  encInfo, bucketInfo);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277152)
Time Spent: 13h 20m  (was: 13h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 13h 20m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277149&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277149
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709421
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -983,6 +1019,44 @@ public OmMultipartInfo 
applyInitiateMultipartUpload(OmKeyArgs keyArgs,
 }
   }
 
+  private List<OzoneAclInfo> getAclsForKey(OmKeyArgs keyArgs, 
+  OmVolumeArgs volArgs, OmBucketInfo bucketInfo) {
+List<OzoneAclInfo> acls = new ArrayList<>(keyArgs.getAcls().size());
+
+keyArgs.getAcls().stream().map(OzoneAcl::toProtobuf).
+collect(Collectors.toList());
+
+// Inherit DEFAULT acls from prefix.
+boolean prefixParentFound = false;
+if(prefixManager != null) {
+  List<OmPrefixInfo> prefixList = prefixManager.getLongestPrefixPath(
+  OZONE_URI_DELIMITER +
+  keyArgs.getVolumeName() + OZONE_URI_DELIMITER +
+  keyArgs.getBucketName() + OZONE_URI_DELIMITER +
+  keyArgs.getKeyName());
+
+  if(prefixList.size() > 0) {
+// Add all acls from direct parent to key.
+OmPrefixInfo prefixInfo = prefixList.get(prefixList.size() - 1);
+if(prefixInfo  != null) {
+  acls.addAll(OzoneUtils.getDefaultAclsProto(prefixInfo.getAcls()));
+  prefixParentFound = true;
+}
+  }
+}
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277149)
Time Spent: 12h 50m  (was: 12h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 12h 50m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277151&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277151
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709432
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -455,8 +463,9 @@ public OpenKeySession openKey(OmKeyArgs args) throws 
IOException {
 
 FileEncryptionInfo encInfo;
 metadataManager.getLock().acquireLock(BUCKET_LOCK, volumeName, bucketName);
+OmBucketInfo bucketInfo;
 try {
-  OmBucketInfo bucketInfo = getBucketInfo(volumeName, bucketName);
+  bucketInfo = getBucketInfo(volumeName, bucketName);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277151)
Time Spent: 13h 10m  (was: 13h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 13h 10m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277145&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277145
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709403
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
 ##
 @@ -256,9 +258,10 @@ public void testCheckAccessForBucket() throws Exception {
 
   @Test
   public void testCheckAccessForKey() throws Exception {
-OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl);
+OzoneAcl userAcl = new OzoneAcl(USER, ugi.getUserName(), parentDirUserAcl, 
+ACCESS);
 OzoneAcl groupAcl = new OzoneAcl(GROUP, ugi.getGroups().size() > 0 ?
-ugi.getGroups().get(0) : "", parentDirGroupAcl);
+ugi.getGroups().get(0) : "", parentDirGroupAcl, ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277145)
Time Spent: 12h 10m  (was: 12h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 12h 10m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277153&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277153
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709445
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -617,10 +629,35 @@ private OmKeyInfo createKeyInfo(OmKeyArgs keyArgs,
 .setReplicationType(type)
 .setReplicationFactor(factor)
 .setFileEncryptionInfo(encInfo);
+List<OzoneAclInfo> acls = new ArrayList<>();
 if(keyArgs.getAcls() != null) {
-  builder.setAcls(keyArgs.getAcls().stream().map(a ->
+  acls.addAll(keyArgs.getAcls().stream().map(a ->
   OzoneAcl.toProtobuf(a)).collect(Collectors.toList()));
 }
+
+// Inherit DEFAULT acls from prefix.
+boolean prefixParentFound = false;
+if(prefixManager != null) {
+  List<OmPrefixInfo> prefixList = prefixManager.getLongestPrefixPath(
+  OZONE_URI_DELIMITER +
+  keyArgs.getVolumeName() + OZONE_URI_DELIMITER +
+  keyArgs.getBucketName() + OZONE_URI_DELIMITER +
+  keyArgs.getKeyName());
+
+  if(prefixList.size() > 0) {
+// Add all acls from direct parent to key.
+OmPrefixInfo prefixInfo = prefixList.get(prefixList.size() - 1);
+if(prefixInfo  != null) {
+  acls.addAll(OzoneUtils.getDefaultAclsProto(prefixInfo.getAcls()));
+  prefixParentFound = true;
+}
+  }
+}
+if(!prefixParentFound && omBucketInfo != null) {
+  acls.addAll(OzoneUtils.getDefaultAclsProto(omBucketInfo.getAcls()));
+}
+builder.setAcls(acls);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277153)
Time Spent: 13.5h  (was: 13h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 13.5h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277155&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277155
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1101: HDDS-1544. 
Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#issuecomment-511642985
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 8 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 41 | Maven dependency ordering for branch |
   | +1 | mvninstall | 492 | trunk passed |
   | +1 | compile | 263 | trunk passed |
   | +1 | checkstyle | 76 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 868 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 143 | trunk passed |
   | 0 | spotbugs | 314 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 504 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 25 | Maven dependency ordering for patch |
   | +1 | mvninstall | 456 | the patch passed |
   | +1 | compile | 367 | the patch passed |
   | +1 | cc | 367 | the patch passed |
   | +1 | javac | 367 | the patch passed |
   | -0 | checkstyle | 37 | hadoop-ozone: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) |
   | +1 | mvnsite | 0 | the patch passed |
   | -1 | whitespace | 0 | The patch has 39 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply |
   | +1 | shadedclient | 639 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 163 | the patch passed |
   | +1 | findbugs | 526 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 308 | hadoop-hdds in the patch passed. |
   | -1 | unit | 2043 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 40 | The patch does not generate ASF License warnings. |
   | | | 7238 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ozone.TestStorageContainerManager |
   |   | hadoop.ozone.client.rpc.TestBCSID |
   |   | hadoop.ozone.client.rpc.TestOzoneAtRestEncryption |
   |   | hadoop.ozone.client.rpc.TestFailureHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStream |
   |   | hadoop.ozone.client.rpc.TestBlockOutputStreamWithFailures |
   |   | hadoop.ozone.web.client.TestKeys |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClientWithRatis |
   |   | hadoop.ozone.client.rpc.TestOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestOzoneClientRetriesOnException |
   |   | hadoop.ozone.client.rpc.TestCloseContainerHandlingByClient |
   |   | hadoop.ozone.client.rpc.TestSecureOzoneRpcClient |
   |   | hadoop.ozone.client.rpc.TestWatchForCommit |
   |   | hadoop.ozone.TestContainerOperations |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1101 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle cc |
   | uname | Linux f5e6b6b990c9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1411513 |
   | Default Java | 1.8.0_212 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/1/artifact/out/diff-checkstyle-hadoop-ozone.txt
 |
   | whitespace | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/1/artifact/out/whitespace-eol.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/1/testReport/ |
   | Max. process+thread count | 4826 (vs. ulimit of 5500) |
   | modules | C: hadoop-ozone/common hadoop-ozone/client 
hadoop-ozone/ozone-manager hadoop-ozone/objectstore-service hadoop-ozone/dist 
hadoop-ozone/integration-test U: hadoop-ozone |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1101/1/console |
   | versions | git=2.7.4 maven=3.3

[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277142&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277142
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709388
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2357,28 +2467,28 @@ public void testNativeAclsForPrefix() throws Exception 
{
 ACLType userRights = aclConfig.getUserDefaultRights();
 ACLType groupRights = aclConfig.getGroupDefaultRights();
 
-listOfAcls.add(new OzoneAcl(ACLIdentityType.USER,
-ugi.getUserName(), userRights));
+listOfAcls.add(new OzoneAcl(USER,
+ugi.getUserName(), userRights, ACCESS));
 //Group ACLs of the User
 List userGroups = Arrays.asList(ugi.getGroupNames());
 userGroups.stream().forEach((group) -> listOfAcls.add(
-new OzoneAcl(ACLIdentityType.GROUP, group, groupRights)));
+new OzoneAcl(GROUP, group, groupRights, ACCESS)));
 return listOfAcls;
   }
 
   /**
* Helper function to validate ozone Acl for given object.
* @param ozObj
* */
-  private void validateOzoneAcl(OzoneObj ozObj) throws IOException {
+  private void validateOzoneAccessAcl(OzoneObj ozObj) throws IOException {
 // Get acls for volume.
 List expectedAcls = getAclList(new OzoneConfiguration());
 
 // Case:1 Add new acl permission to existing acl.
 if(expectedAcls.size()>0) {
   OzoneAcl oldAcl = expectedAcls.get(0);
   OzoneAcl newAcl = new OzoneAcl(oldAcl.getType(), oldAcl.getName(),
-  ACLType.READ_ACL);
+  ACLType.READ_ACL, ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277142)
Time Spent: 11h 40m  (was: 11.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 11h 40m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277144&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277144
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709396
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/security/acl/TestOzoneNativeAuthorizer.java
 ##
 @@ -57,6 +57,7 @@
 import java.util.stream.Collectors;
 
 import static org.apache.hadoop.hdds.HddsConfigKeys.OZONE_METADATA_DIRS;
+import static org.apache.hadoop.ozone.OzoneAcl.AclScope.ACCESS;
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277144)
Time Spent: 12h  (was: 11h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 12h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277148&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277148
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709417
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
 ##
 @@ -957,8 +994,7 @@ public OmMultipartInfo 
applyInitiateMultipartUpload(OmKeyArgs keyArgs,
   .setReplicationFactor(keyArgs.getFactor())
   .setOmKeyLocationInfos(Collections.singletonList(
   new OmKeyLocationInfoGroup(0, locations)))
-  .setAcls(keyArgs.getAcls().stream().map(a ->
-  OzoneAcl.toProtobuf(a)).collect(Collectors.toList()))
+  .setAcls(getAclsForKey(keyArgs, null, bucketInfo))
 
 Review comment:
   whitespace:end of line
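   For context: routing acl construction through getAclsForKey means a key
   no longer gets only the acls the client sent; the bucket's DEFAULT acls
   are folded in as well. A standalone sketch of that shape (the helper
   name comes from the diff; every other name here is a local stand-in):

       import java.util.ArrayList;
       import java.util.List;

       public class KeyAclSketch {
         enum Scope { ACCESS, DEFAULT }
         record Acl(String subject, String rights, Scope scope) { }

         // Combine caller-requested acls with the parent bucket's DEFAULT
         // acls; inherited entries now apply to the key, hence ACCESS.
         static List<Acl> aclsForKey(List<Acl> requested, List<Acl> bucketAcls) {
           List<Acl> result = new ArrayList<>(requested);
           for (Acl a : bucketAcls) {
             if (a.scope() == Scope.DEFAULT) {
               result.add(new Acl(a.subject(), a.rights(), Scope.ACCESS));
             }
           }
           return result;
         }

         public static void main(String[] args) {
           List<Acl> requested = List.of(new Acl("anu", "rw", Scope.ACCESS));
           List<Acl> bucket = List.of(new Acl("remoteUser", "r", Scope.DEFAULT));
           System.out.println(aclsForKey(requested, bucket));
         }
       }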
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277148)
Time Spent: 12h 40m  (was: 12.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 12h 40m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277154&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277154
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709450
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -429,18 +430,15 @@ private OzoneManager(OzoneConfiguration conf) throws 
IOException,
 } else {
   accessAuthorizer = null;
 }
-ozAdmins = conf.getTrimmedStringCollection(OzoneConfigKeys
-.OZONE_ADMINISTRATORS);
+ozAdmins = conf.getTrimmedStringCollection(OZONE_ADMINISTRATORS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277154)
Time Spent: 13h 40m  (was: 13.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 13h 40m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277143&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277143
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709392
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2433,8 +2543,10 @@ private void validateOzoneAcl(OzoneObj ozObj) throws 
IOException {
 expectedAcls.forEach(a -> assertTrue(finalNewAcls.contains(a)));
 
 // Reset acl's.
-OzoneAcl ua = new OzoneAcl(ACLIdentityType.USER, "userx", 
ACLType.READ_ACL);
-OzoneAcl ug = new OzoneAcl(ACLIdentityType.GROUP, "userx", ACLType.ALL);
+OzoneAcl ua = new OzoneAcl(USER, "userx", 
+ACLType.READ_ACL, ACCESS);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277143)
Time Spent: 11h 50m  (was: 11h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 11h 50m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277147&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277147
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:47
Start Date: 16/Jul/19 02:47
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709413
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/BucketManagerImpl.java
 ##
 @@ -165,10 +169,15 @@ public void createBucket(OmBucketInfo bucketInfo) throws 
IOException {
 .setVersion(CryptoProtocolVersion.ENCRYPTION_ZONES)
 .setSuite(CipherSuite.convert(metadata.getCipher()));
   }
+  List<OzoneAcl> acls = new ArrayList<>();
+  acls.addAll(bucketInfo.getAcls());
+  volumeArgs.getAclMap().getDefaultAclList().forEach(
+  a -> acls.add(OzoneAcl.fromProtobufWithAccessType(a)));
 
 Review comment:
   whitespace:end of line
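   For context: the same inheritance happens one level up at bucket create
   time. The volume keeps its DEFAULT acls in stored (protobuf) form, so
   the hunk decodes each one with fromProtobufWithAccessType and appends
   it to the bucket's own list. A small stand-in for that decode-and-append
   step (local types, not the Ozone classes):

       import java.util.ArrayList;
       import java.util.List;

       public class BucketAclSketch {
         enum Scope { ACCESS, DEFAULT }
         record StoredAcl(String subject, String rights) { }  // stand-in for the protobuf form
         record Acl(String subject, String rights, Scope scope) { }

         // Mirror of the fromProtobufWithAccessType idea: decode a stored
         // acl and force its scope to ACCESS, since it now applies directly
         // to the freshly created bucket.
         static Acl fromStoredAsAccess(StoredAcl s) {
           return new Acl(s.subject(), s.rights(), Scope.ACCESS);
         }

         public static void main(String[] args) {
           List<Acl> bucketAcls = new ArrayList<>(
               List.of(new Acl("owner", "rwx", Scope.ACCESS)));
           List<StoredAcl> volumeDefaults = List.of(new StoredAcl("remoteUser", "r"));
           volumeDefaults.forEach(s -> bucketAcls.add(fromStoredAsAccess(s)));
           System.out.println(bucketAcls);
         }
       }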
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277147)
Time Spent: 12.5h  (was: 12h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 12.5h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277130&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277130
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709318
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights, 
+AclScope.valueOf(protoAcl.getAclScope().name()));
+  }
+
+  /**
+   * Helper function to convert a proto message of type {@link OzoneAclInfo}
+   * to {@link OzoneAcl} with acl scope of type ACCESS.
+   * 
+   * @param protoAcl
+   * @return OzoneAcl
+   * */
+  public static OzoneAcl fromProtobufWithAccessType(OzoneAclInfo protoAcl) {
+BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
+return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
+protoAcl.getName(), aclRights, AclScope.ACCESS);
+  }
+
+  /**
+   * Helper function to convert an {@link OzoneAcl} to proto message of type
+   * {@link OzoneAclInfo} with acl scope of type ACCESS.
+   *
+   * @param acl
+   * @return OzoneAclInfo
+   * */
+  public static OzoneAclInfo toProtobufWithAccessType(OzoneAcl acl) {
+OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
+.setName(acl.getName())
+.setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.ACCESS)
+.setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
+return builder.build();
   }
 
+  public AclScope getAclScope() {
+return aclScope;
+  }
+  
   @Override
   public String toString() {
-return type + ":" + name + ":" + ACLType.getACLString(aclBitSet);
+return type + ":" + name + ":" + ACLType.getACLString(aclBitSet) 
 
 Review comment:
   whitespace:end of line
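   For context: two conversion paths coexist after this hunk. The plain
   toProtobuf/fromProtobuf pair round-trips the scope unchanged, while the
   *WithAccessType helpers deliberately coerce the scope to ACCESS for call
   sites that materialize inherited acls. A compact standalone illustration
   (local types, not OzoneAclInfo):

       public class ScopeRoundTripSketch {
         enum Scope { ACCESS, DEFAULT }
         record Acl(String subject, Scope scope) { }
         record Proto(String subject, Scope scope) { }  // stand-in for the wire form

         static Proto toProto(Acl a) { return new Proto(a.subject(), a.scope()); }

         // Round-trip: the scope is preserved.
         static Acl fromProto(Proto p) { return new Acl(p.subject(), p.scope()); }

         // "WithAccessType" flavor: scope forced to ACCESS, used when a
         // stored DEFAULT acl is being applied to a child object.
         static Acl fromProtoWithAccessType(Proto p) {
           return new Acl(p.subject(), Scope.ACCESS);
         }

         public static void main(String[] args) {
           Proto stored = toProto(new Acl("anu", Scope.DEFAULT));
           System.out.println(fromProto(stored));                // keeps DEFAULT
           System.out.println(fromProtoWithAccessType(stored));  // coerced to ACCESS
         }
       }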
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277130)
Time Spent: 9h 40m  (was: 9.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277124&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277124
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709283
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -120,16 +129,19 @@ public OzoneAcl(ACLIdentityType type, String name, 
BitSet acls) {
 && (name.length() == 0)) {
   throw new IllegalArgumentException("User or group name is required");
 }
+aclScope = scope;
   }
 
   /**
-   * Parses an ACL string and returns the ACL object.
+   * Parses an ACL string and returns the ACL object. If acl scope is not 
+   * passed in input string then scope is set to ACCESS.
*
* @param acl - Acl String , Ex. user:anu:rw
*
* @return - Ozone ACLs
*/
-  public static OzoneAcl parseAcl(String acl) throws IllegalArgumentException {
+  public static OzoneAcl parseAcl(String acl) 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277124)
Time Spent: 8h 40m  (was: 8.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277126&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277126
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709298
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -141,13 +153,27 @@ public static OzoneAcl parseAcl(String acl) throws 
IllegalArgumentException {
 ACLIdentityType aclType = ACLIdentityType.valueOf(parts[0].toUpperCase());
 BitSet acls = new BitSet(ACLType.getNoOfAcls());
 
-for (char ch : parts[2].toCharArray()) {
+String bits = parts[2];
+
+// Default acl scope is ACCESS.
+AclScope aclScope = AclScope.ACCESS;
+
+// Check if acl string contains scope info.
+if(parts[2].matches(ACL_SCOPE_REGEX)) {
+  int indexOfOpenBracket = parts[2].indexOf("[");
+  bits = parts[2].substring(0, indexOfOpenBracket);
+  aclScope = AclScope.valueOf(parts[2].substring(indexOfOpenBracket + 1,
+  parts[2].indexOf("]")));
+}
+
 
 Review comment:
   whitespace:end of line
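   For context: the parsing above accepts an optional scope suffix on the
   rights segment, e.g. user:anu:rw[DEFAULT], and falls back to ACCESS when
   the suffix is absent. A self-contained sketch of the same logic; the
   regex and names below are stand-ins, not the patch's ACL_SCOPE_REGEX:

       import java.util.regex.Matcher;
       import java.util.regex.Pattern;

       public class AclStringSketch {
         enum Scope { ACCESS, DEFAULT }

         // Rights string with an optional [SCOPE] suffix, e.g. "rw" or
         // "rw[DEFAULT]".
         private static final Pattern RIGHTS =
             Pattern.compile("([a-z]+)(?:\\[(ACCESS|DEFAULT)\\])?");

         static String describe(String acl) {
           String[] parts = acl.split(":");  // type : name : rights[scope]
           Matcher m = RIGHTS.matcher(parts[2]);
           if (!m.matches()) {
             throw new IllegalArgumentException("Bad rights segment: " + parts[2]);
           }
           Scope scope = m.group(2) == null ? Scope.ACCESS : Scope.valueOf(m.group(2));
           return parts[0] + "/" + parts[1] + " rights=" + m.group(1) + " scope=" + scope;
         }

         public static void main(String[] args) {
           System.out.println(describe("user:anu:rw"));           // scope=ACCESS
           System.out.println(describe("user:anu:rw[DEFAULT]"));  // scope=DEFAULT
         }
       }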
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277126)
Time Spent: 9h  (was: 8h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 9h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277133&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277133
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709339
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -49,53 +52,71 @@
 @SuppressWarnings("ProtocolBufferOrdinal")
 public class OmOzoneAclMap {
   // per Acl Type user:rights map
-  private ArrayList<Map<String, BitSet>> aclMaps;
+  private ArrayList<Map<String, BitSet>> accessAclMap;
+  private List<OzoneAclInfo> defaultAclList;
 
   OmOzoneAclMap() {
-aclMaps = new ArrayList<>();
+accessAclMap = new ArrayList<>();
+defaultAclList = new ArrayList<>();
 for (OzoneAclType aclType : OzoneAclType.values()) {
-  aclMaps.add(aclType.ordinal(), new HashMap<>());
+  accessAclMap.add(aclType.ordinal(), new HashMap<>());
 }
   }
 
-  private Map<String, BitSet> getMap(OzoneAclType type) {
-return aclMaps.get(type.ordinal());
+  private Map<String, BitSet> getAccessAclMap(OzoneAclType type) {
+return accessAclMap.get(type.ordinal());
   }
 
   // For a given acl type and user, get the stored acl
   private BitSet getAcl(OzoneAclType type, String user) {
-return getMap(type).get(user);
+return getAccessAclMap(type).get(user);
   }
 
  public List<OzoneAcl> getAcl() {
 List<OzoneAcl> acls = new ArrayList<>();
 
+acls.addAll(getAccessAcls());
+acls.addAll(defaultAclList.stream().map(a -> 
 
 Review comment:
   whitespace:end of line
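   For context: the class now keeps two stores, per-identity-type maps for
   ACCESS acls and a separate list for DEFAULT acls held in stored form,
   and getAcl presents one merged view over both. A trimmed standalone
   model of that merged view (local types; the real class keys the access
   store by OzoneAclType ordinal):

       import java.util.ArrayList;
       import java.util.HashMap;
       import java.util.List;
       import java.util.Map;

       public class MergedAclViewSketch {
         enum Scope { ACCESS, DEFAULT }
         record Acl(String subject, String rights, Scope scope) { }

         static List<Acl> mergedView(Map<String, String> accessAcls,
                                     List<Acl> defaultAcls) {
           List<Acl> all = new ArrayList<>();
           // Access entries are stored without a scope; it is reattached here.
           accessAcls.forEach((s, r) -> all.add(new Acl(s, r, Scope.ACCESS)));
           all.addAll(defaultAcls);  // these already carry Scope.DEFAULT
           return all;
         }

         public static void main(String[] args) {
           Map<String, String> access = new HashMap<>(Map.of("anu", "rw"));
           List<Acl> defaults = List.of(new Acl("remoteUser", "r", Scope.DEFAULT));
           System.out.println(mergedView(access, defaults));
         }
       }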
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277133)
Time Spent: 10h 10m  (was: 10h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 10h 10m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277127&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277127
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709302
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights, 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277127)
Time Spent: 9h 10m  (was: 9h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277129&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277129
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709315
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights, 
+AclScope.valueOf(protoAcl.getAclScope().name()));
+  }
+
+  /**
+   * Helper function to convert a proto message of type {@link OzoneAclInfo}
+   * to {@link OzoneAcl} with acl scope of type ACCESS.
+   * 
+   * @param protoAcl
+   * @return OzoneAcl
+   * */
+  public static OzoneAcl fromProtobufWithAccessType(OzoneAclInfo protoAcl) {
+BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
+return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
+protoAcl.getName(), aclRights, AclScope.ACCESS);
+  }
+
+  /**
+   * Helper function to convert an {@link OzoneAcl} to proto message of type
+   * {@link OzoneAclInfo} with acl scope of type ACCESS.
+   *
+   * @param acl
+   * @return OzoneAclInfo
+   * */
+  public static OzoneAclInfo toProtobufWithAccessType(OzoneAcl acl) {
+OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
+.setName(acl.getName())
+.setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.ACCESS)
+.setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
+return builder.build();
   }
 
+  public AclScope getAclScope() {
+return aclScope;
+  }
+  
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277129)
Time Spent: 9.5h  (was: 9h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277140&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277140
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709373
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2252,15 +2266,54 @@ public void testNativeAclsForBucket() throws Exception 
{
 .setStoreType(OzoneObj.StoreType.OZONE)
 .build();
 
-validateOzoneAcl(ozObj);
+validateOzoneAccessAcl(ozObj);
+
+OzoneObj volObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setResType(OzoneObj.ResourceType.VOLUME)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+validateDefaultAcls(volObj, ozObj, volume, null);
+  }
+
+  private void validateDefaultAcls(OzoneObj parentObj, OzoneObj childObj, 
+  OzoneVolume volume,  OzoneBucket bucket) throws Exception {
+assertTrue(store.addAcl(parentObj, defaultUserAcl));
+assertTrue(store.addAcl(parentObj, defaultGroupAcl));
+if (volume != null) {
+  volume.deleteBucket(childObj.getBucketName());
+  volume.createBucket(childObj.getBucketName());
+} else {
+  if (childObj.getResourceType().equals(OzoneObj.ResourceType.KEY)) {
+bucket.deleteKey(childObj.getKeyName());
+writeKey(childObj.getKeyName(), bucket);
+  } else {
+store.setAcl(childObj, getAclList(new OzoneConfiguration()));
+  }
+}
+List<OzoneAcl> acls = store.getAcl(parentObj);
+assertTrue("Current acls:" + StringUtils.join(",", acls) +
+" inheritedUserAcl:" + inheritedUserAcl,
+acls.contains(defaultUserAcl));
+assertTrue("Current acls:" + StringUtils.join(",", acls) +
+" inheritedUserAcl:" + inheritedUserAcl,
+acls.contains(defaultGroupAcl));
+
+acls = store.getAcl(childObj);
 
 Review comment:
   whitespace:end of line
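   In words, the test above: (1) adds DEFAULT user and group acls to the
   parent, (2) recreates the child under it, (3) asserts the parent still
   lists the DEFAULT entries, and (4) fetches the child's acls, presumably
   to check them against the inherited ACCESS variants (the hunk is cut
   just after the fetch). The expected transformation, as a checkable
   sketch with stand-in types:

       import java.util.List;

       public class InheritanceExpectationSketch {
         enum Scope { ACCESS, DEFAULT }
         record Acl(String subject, String rights, Scope scope) { }

         // What inheritance should produce on the child: same subject and
         // rights, scope flipped from DEFAULT to ACCESS.
         static Acl inherited(Acl parentDefault) {
           return new Acl(parentDefault.subject(), parentDefault.rights(),
               Scope.ACCESS);
         }

         public static void main(String[] args) {
           Acl defaultUserAcl = new Acl("remoteUser", "r", Scope.DEFAULT);
           Acl inheritedUserAcl = inherited(defaultUserAcl);
           List<Acl> parentAcls = List.of(defaultUserAcl);  // parent keeps DEFAULT
           List<Acl> childAcls = List.of(inheritedUserAcl); // child gains ACCESS twin
           System.out.println(parentAcls.contains(defaultUserAcl)
               && childAcls.contains(inheritedUserAcl));    // true
         }
       }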
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277140)
Time Spent: 11h 20m  (was: 11h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 11h 20m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277123&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277123
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709293
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -141,13 +153,27 @@ public static OzoneAcl parseAcl(String acl) throws 
IllegalArgumentException {
 ACLIdentityType aclType = ACLIdentityType.valueOf(parts[0].toUpperCase());
 BitSet acls = new BitSet(ACLType.getNoOfAcls());
 
-for (char ch : parts[2].toCharArray()) {
+String bits = parts[2];
+
+// Default acl scope is ACCESS.
+AclScope aclScope = AclScope.ACCESS;
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277123)
Time Spent: 8.5h  (was: 8h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 8.5h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277137&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277137
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709360
 
 

 ##
 File path: hadoop-ozone/common/src/main/proto/OzoneManagerProtocol.proto
 ##
 @@ -507,9 +507,15 @@ message OzoneAclInfo {
 CLIENT_IP = 5;
 }
 
+enum OzoneAclScope {
+  ACCESS = 0;
+  DEFAULT = 1;
+}
+
 
 Review comment:
   whitespace:end of line
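   For context: the wire format gains a scope enum with ACCESS listed
   first. Under proto2 rules an unset enum field defaults to the first
   listed value, so records serialized before this change should keep
   decoding as ACCESS; the exact field declaration is not shown in this
   hunk, so that reading is an inference from the conversion code
   elsewhere in the patch. A tiny stand-in for the decode-side fallback:

       public class ScopeDefaultSketch {
         enum OzoneAclScope { ACCESS, DEFAULT }

         // Decode an acl record that may predate the scope field: when no
         // scope was written, fall back to ACCESS.
         static OzoneAclScope scopeOf(String wireValue) {
           return wireValue == null ? OzoneAclScope.ACCESS
                                    : OzoneAclScope.valueOf(wireValue);
         }

         public static void main(String[] args) {
           System.out.println(scopeOf(null));       // ACCESS (legacy record)
           System.out.println(scopeOf("DEFAULT"));  // DEFAULT
         }
       }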
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277137)
Time Spent: 10h 50m  (was: 10h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 10h 50m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277132&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277132
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709335
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -116,9 +136,14 @@ public void setAcls(List<OzoneAcl> acls) throws 
OMException {
   // Add a new acl to the map
   public void removeAcl(OzoneAcl acl) throws OMException {
 Objects.requireNonNull(acl, "Acl should not be null.");
+if (acl.getAclScope().equals(OzoneAcl.AclScope.DEFAULT)) {
+  defaultAclList.remove(OzoneAcl.toProtobuf(acl));
+  return;
+}
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277132)
Time Spent: 10h  (was: 9h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 10h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277134&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277134
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709343
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -49,53 +52,71 @@
 @SuppressWarnings("ProtocolBufferOrdinal")
 public class OmOzoneAclMap {
   // per Acl Type user:rights map
-  private ArrayList<Map<String, BitSet>> aclMaps;
+  private ArrayList<Map<String, BitSet>> accessAclMap;
+  private List<OzoneAclInfo> defaultAclList;
 
   OmOzoneAclMap() {
-aclMaps = new ArrayList<>();
+accessAclMap = new ArrayList<>();
+defaultAclList = new ArrayList<>();
 for (OzoneAclType aclType : OzoneAclType.values()) {
-  aclMaps.add(aclType.ordinal(), new HashMap<>());
+  accessAclMap.add(aclType.ordinal(), new HashMap<>());
 }
   }
 
-  private Map<String, BitSet> getMap(OzoneAclType type) {
-return aclMaps.get(type.ordinal());
+  private Map<String, BitSet> getAccessAclMap(OzoneAclType type) {
+return accessAclMap.get(type.ordinal());
   }
 
   // For a given acl type and user, get the stored acl
   private BitSet getAcl(OzoneAclType type, String user) {
-return getMap(type).get(user);
+return getAccessAclMap(type).get(user);
   }
 
  public List<OzoneAcl> getAcl() {
 List<OzoneAcl> acls = new ArrayList<>();
 
+acls.addAll(getAccessAcls());
+acls.addAll(defaultAclList.stream().map(a -> 
+OzoneAcl.fromProtobuf(a)).collect(Collectors.toList()));
+return acls;
+  }
+
+  private Collection<OzoneAcl> getAccessAcls() {
+List<OzoneAcl> acls = new ArrayList<>();
 for (OzoneAclType type : OzoneAclType.values()) {
-  aclMaps.get(type.ordinal()).entrySet().stream().
+  accessAclMap.get(type.ordinal()).entrySet().stream().
   forEach(entry -> acls.add(new OzoneAcl(ACLIdentityType.
-  valueOf(type.name()), entry.getKey(), entry.getValue())));
+  valueOf(type.name()), entry.getKey(), entry.getValue(),
+  OzoneAcl.AclScope.ACCESS)));
 }
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277134)
Time Spent: 10h 20m  (was: 10h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277131&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277131
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709327
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmOzoneAclMap.java
 ##
 @@ -49,53 +52,71 @@
 @SuppressWarnings("ProtocolBufferOrdinal")
 public class OmOzoneAclMap {
   // per Acl Type user:rights map
-  private ArrayList<Map<String, BitSet>> aclMaps;
+  private ArrayList<Map<String, BitSet>> accessAclMap;
+  private List<OzoneAclInfo> defaultAclList;
 
   OmOzoneAclMap() {
-aclMaps = new ArrayList<>();
+accessAclMap = new ArrayList<>();
+defaultAclList = new ArrayList<>();
 for (OzoneAclType aclType : OzoneAclType.values()) {
-  aclMaps.add(aclType.ordinal(), new HashMap<>());
+  accessAclMap.add(aclType.ordinal(), new HashMap<>());
 }
   }
 
-  private Map<String, BitSet> getMap(OzoneAclType type) {
-return aclMaps.get(type.ordinal());
+  private Map<String, BitSet> getAccessAclMap(OzoneAclType type) {
+return accessAclMap.get(type.ordinal());
   }
 
   // For a given acl type and user, get the stored acl
   private BitSet getAcl(OzoneAclType type, String user) {
-return getMap(type).get(user);
+return getAccessAclMap(type).get(user);
   }
 
  public List<OzoneAcl> getAcl() {
 List<OzoneAcl> acls = new ArrayList<>();
 
+acls.addAll(getAccessAcls());
+acls.addAll(defaultAclList.stream().map(a -> 
+OzoneAcl.fromProtobuf(a)).collect(Collectors.toList()));
+return acls;
+  }
+
+  private Collection<OzoneAcl> getAccessAcls() {
+List<OzoneAcl> acls = new ArrayList<>();
 for (OzoneAclType type : OzoneAclType.values()) {
-  aclMaps.get(type.ordinal()).entrySet().stream().
+  accessAclMap.get(type.ordinal()).entrySet().stream().
   forEach(entry -> acls.add(new OzoneAcl(ACLIdentityType.
-  valueOf(type.name()), entry.getKey(), entry.getValue())));
+  valueOf(type.name()), entry.getKey(), entry.getValue(),
+  OzoneAcl.AclScope.ACCESS)));
 }
+
 return acls;
   }
 
   // Add a new acl to the map
   public void addAcl(OzoneAcl acl) throws OMException {
 Objects.requireNonNull(acl, "Acl should not be null.");
+if (acl.getAclScope().equals(OzoneAcl.AclScope.DEFAULT)) {
+  defaultAclList.add(OzoneAcl.toProtobuf(acl));
+  return;
+}
+
 
 Review comment:
   whitespace:end of line
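   For context: addAcl (above) and removeAcl (in the earlier hunk on this
   file) now branch on the scope first, so DEFAULT acls enter and leave
   the separate default-acl list while everything else falls through to
   the existing per-type map handling. The symmetry, in a small stand-in:

       import java.util.ArrayList;
       import java.util.List;
       import java.util.Objects;

       public class AclRoutingSketch {
         enum Scope { ACCESS, DEFAULT }
         record Acl(String subject, Scope scope) { }

         private final List<Acl> defaultAcls = new ArrayList<>();
         private final List<Acl> accessAcls = new ArrayList<>();

         void addAcl(Acl acl) {
           Objects.requireNonNull(acl, "Acl should not be null.");
           if (acl.scope() == Scope.DEFAULT) {
             defaultAcls.add(acl);  // early-return path, as in the patch
             return;
           }
           accessAcls.add(acl);     // pre-existing ACCESS handling
         }

         void removeAcl(Acl acl) {
           Objects.requireNonNull(acl, "Acl should not be null.");
           if (acl.scope() == Scope.DEFAULT) {
             defaultAcls.remove(acl);
             return;
           }
           accessAcls.remove(acl);
         }

         public static void main(String[] args) {
           AclRoutingSketch m = new AclRoutingSketch();
           Acl d = new Acl("remoteUser", Scope.DEFAULT);
           m.addAcl(d);
           m.removeAcl(d);
           System.out.println(m.defaultAcls.isEmpty());  // true
         }
       }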
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277131)
Time Spent: 9h 50m  (was: 9h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277121&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277121
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709275
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -83,16 +89,19 @@ public OzoneAcl(ACLIdentityType type, String name, ACLType 
acl) {
 && (name.length() == 0)) {
   throw new IllegalArgumentException("User or group name is required");
 }
+aclScope = scope;
   }
 
   /**
* Constructor for OzoneAcl.
*
-   * @param type - Type
-   * @param name - Name of user
-   * @param acls - Rights
+   * @param type   - Type
+   * @param name   - Name of user
+   * @param acls   - Rights
+   * @param scope  - AclScope
*/
-  public OzoneAcl(ACLIdentityType type, String name, BitSet acls) {
+  public OzoneAcl(ACLIdentityType type, String name, BitSet acls, 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277121)
Time Spent: 8h 10m  (was: 8h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277135&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277135
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709348
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
 ##
 @@ -365,4 +369,30 @@ public static boolean checkIfAclBitIsSet(ACLType acl, 
BitSet bitset) {
 || bitset.get(ALL.ordinal()))
 && !bitset.get(NONE.ordinal()));
   }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in input list with
+   * scope changed to ACCESS.
+   * @param acls
+   * 
 
 Review comment:
   whitespace:end of line
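   For context: the helper's name is cut off by the mail wrap, but its
   stated contract is to pick the DEFAULT acls out of a list and return
   them with the scope changed to ACCESS. One way to express that contract
   with streams (stand-in types; the real method works on the Ozone acl
   classes):

       import java.util.List;
       import java.util.stream.Collectors;

       public class DefaultToAccessSketch {
         enum Scope { ACCESS, DEFAULT }
         record Acl(String subject, String rights, Scope scope) { }

         // Keep only DEFAULT entries and re-scope them to ACCESS; the input
         // list is left untouched.
         static List<Acl> defaultsAsAccess(List<Acl> acls) {
           return acls.stream()
               .filter(a -> a.scope() == Scope.DEFAULT)
               .map(a -> new Acl(a.subject(), a.rights(), Scope.ACCESS))
               .collect(Collectors.toList());
         }

         public static void main(String[] args) {
           List<Acl> in = List.of(
               new Acl("anu", "rw", Scope.ACCESS),
               new Acl("remoteUser", "r", Scope.DEFAULT));
           System.out.println(defaultsAsAccess(in));  // only remoteUser, as ACCESS
         }
       }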
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277135)
Time Spent: 10.5h  (was: 10h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277125&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277125
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709280
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -120,16 +129,19 @@ public OzoneAcl(ACLIdentityType type, String name, 
BitSet acls) {
 && (name.length() == 0)) {
   throw new IllegalArgumentException("User or group name is required");
 }
+aclScope = scope;
   }
 
   /**
-   * Parses an ACL string and returns the ACL object.
+   * Parses an ACL string and returns the ACL object. If acl scope is not 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277125)
Time Spent: 8h 50m  (was: 8h 40m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 8h 50m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277141&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277141
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709378
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2279,7 +2332,42 @@ public void testNativeAclsForKey() throws Exception {
 .setStoreType(OzoneObj.StoreType.OZONE)
 .build();
 
-validateOzoneAcl(ozObj);
+// Validates access acls.
+validateOzoneAccessAcl(ozObj);
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277141)
Time Spent: 11.5h  (was: 11h 20m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277139&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277139
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709367
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -2252,15 +2266,54 @@ public void testNativeAclsForBucket() throws Exception 
{
 .setStoreType(OzoneObj.StoreType.OZONE)
 .build();
 
-validateOzoneAcl(ozObj);
+validateOzoneAccessAcl(ozObj);
+
+OzoneObj volObj = new OzoneObjInfo.Builder()
+.setVolumeName(volumeName)
+.setResType(OzoneObj.ResourceType.VOLUME)
+.setStoreType(OzoneObj.StoreType.OZONE)
+.build();
+validateDefaultAcls(volObj, ozObj, volume, null);
+  }
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277139)
Time Spent: 11h 10m  (was: 11h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 11h 10m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277128&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277128
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709308
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -178,19 +204,55 @@ public static OzoneAclInfo toProtobuf(OzoneAcl acl) {
 OzoneAclInfo.Builder builder = OzoneAclInfo.newBuilder()
 .setName(acl.getName())
 .setType(OzoneAclType.valueOf(acl.getType().name()))
+.setAclScope(OzoneAclScope.valueOf(acl.getAclScope().name()))
 .setRights(ByteString.copyFrom(acl.getAclBitSet().toByteArray()));
 return builder.build();
   }
 
   public static OzoneAcl fromProtobuf(OzoneAclInfo protoAcl) {
 BitSet aclRights = BitSet.valueOf(protoAcl.getRights().toByteArray());
 return new OzoneAcl(ACLIdentityType.valueOf(protoAcl.getType().name()),
-protoAcl.getName(), aclRights);
+protoAcl.getName(), aclRights, 
+AclScope.valueOf(protoAcl.getAclScope().name()));
+  }
+
+  /**
+   * Helper function to convert a proto message of type {@link OzoneAclInfo}
+   * to {@link OzoneAcl} with acl scope of type ACCESS.
+   * 
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277128)
Time Spent: 9h 20m  (was: 9h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277122&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277122
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709290
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/OzoneAcl.java
 ##
 @@ -141,13 +153,27 @@ public static OzoneAcl parseAcl(String acl) throws 
IllegalArgumentException {
 ACLIdentityType aclType = ACLIdentityType.valueOf(parts[0].toUpperCase());
 BitSet acls = new BitSet(ACLType.getNoOfAcls());
 
-for (char ch : parts[2].toCharArray()) {
+String bits = parts[2];
+
 
 Review comment:
   whitespace:end of line
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277122)
Time Spent: 8h 20m  (was: 8h 10m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 8h 20m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277138&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277138
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709361
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
 ##
 @@ -137,6 +142,15 @@
   private static OzoneManager ozoneManager;
   private static StorageContainerLocationProtocolClientSideTranslatorPB
   storageContainerLocationClient;
+  private static String remoteUserName = "remoteUser";
+  private static OzoneAcl defaultUserAcl = new OzoneAcl(USER, remoteUserName,
+  READ, DEFAULT);
+  private static OzoneAcl defaultGroupAcl = new OzoneAcl(GROUP, remoteUserName,
+  READ, DEFAULT);
+  private static OzoneAcl inheritedUserAcl = new OzoneAcl(USER, remoteUserName,
+  READ, ACCESS);
 
 Review comment:
   whitespace:end of line
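   
   These constants encode the intended inheritance rule: an acl stored with
   DEFAULT scope on a parent is expected to appear on a child as the same acl
   with ACCESS scope. A sketch of that expectation, where the bucket and key
   helper calls are hypothetical and not quoted from the test:
   
   bucket.addAcls(Collections.singletonList(defaultUserAcl)); // DEFAULT on the parent
   OzoneKey key = createKey(bucket, "key1");                  // hypothetical helper
   assertTrue(key.getAcls().contains(inheritedUserAcl));      // ACCESS on the child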
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277138)
Time Spent: 11h  (was: 10h 50m)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 11h
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1544) Support default Acls for volume, bucket, keys and prefix

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1544?focusedWorklogId=277136&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277136
 ]

ASF GitHub Bot logged work on HDDS-1544:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:46
Start Date: 16/Jul/19 02:46
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #1101: 
HDDS-1544. Support default Acls for volume, bucket, keys and prefix. …
URL: https://github.com/apache/hadoop/pull/1101#discussion_r303709353
 
 

 ##
 File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/web/utils/OzoneUtils.java
 ##
 @@ -365,4 +369,30 @@ public static boolean checkIfAclBitIsSet(ACLType acl, 
BitSet bitset) {
 || bitset.get(ALL.ordinal()))
 && !bitset.get(NONE.ordinal()));
   }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in the input list,
+   * with scope changed to ACCESS.
+   * @param acls
+   * 
+   * @return list of default Acls.
+   * */
+  public static Collection<OzoneAclInfo> getDefaultAclsProto(
+      List<OzoneAcl> acls) {
+return acls.stream().filter(a -> a.getAclScope() == DEFAULT)
+.map(OzoneAcl::toProtobufWithAccessType).collect(Collectors.toList());
+  }
+
+  /**
+   * Helper function to find and return all DEFAULT acls in the input list,
+   * with their scope left unchanged.
+   * @param acls
+   *
+   * @return list of default Acls.
+   * */
+  public static Collection<OzoneAcl> getDefaultAcls(List<OzoneAcl> acls) {
+return acls.stream().filter(a -> a.getAclScope() == DEFAULT)
+.collect(Collectors.toList());
+  }
+  
 
 Review comment:
   whitespace:end of line
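   
   A small usage sketch for the two helpers in this hunk; the sample input
   list is illustrative:
   
   List<OzoneAcl> acls = Arrays.asList(
       new OzoneAcl(USER, "remoteUser", READ, DEFAULT),
       new OzoneAcl(USER, "remoteUser", READ, ACCESS));
   // Keeps only the DEFAULT-scoped acl, unchanged:
   Collection<OzoneAcl> defaults = OzoneUtils.getDefaultAcls(acls);
   // Keeps only the DEFAULT-scoped acl, converted to protobuf with ACCESS scope:
   Collection<OzoneAclInfo> protos = OzoneUtils.getDefaultAclsProto(acls);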
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277136)
Time Spent: 10h 40m  (was: 10.5h)

> Support default Acls for volume, bucket, keys and prefix
> 
>
> Key: HDDS-1544
> URL: https://issues.apache.org/jira/browse/HDDS-1544
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-1544.00.patch
>
>  Time Spent: 10h 40m
>  Remaining Estimate: 0h
>
> Add default Acls for volume, bucket, keys and prefix



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1787?focusedWorklogId=277119&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277119
 ]

ASF GitHub Bot logged work on HDDS-1787:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:33
Start Date: 16/Jul/19 02:33
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1094: HDDS-1787. 
NPE thrown while trying to find DN closest to client.
URL: https://github.com/apache/hadoop/pull/1094#discussion_r303705636
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
 ##
 @@ -290,7 +290,12 @@ public ScmInfo getScmInfo() throws IOException {
   NodeManager nodeManager = scm.getScmNodeManager();
   Node client = nodeManager.getNode(clientMachine);
  List<Node> nodeList = new ArrayList<>();
-  nodes.stream().forEach(path -> nodeList.add(nodeManager.getNode(path)));
+  nodes.stream().forEach(path -> {
+DatanodeDetails node = nodeManager.getNode(path);
+if (node != null) {
 
 Review comment:
   nodeManager.getNode will return null when it can't find the node in the 
network topology, or when the node it finds is not a leaf node. The first 
case is usually caused by a misconfigured network topology (for example, 
hostnames are used as network names while getNode is queried with IP 
addresses). The second case should not normally happen; if it does, it 
indicates a bug. I created a unit test case that provides illegal inputs to 
reproduce this. 
   
   The WARN logs for all these cases are emitted inside the 
nodeManager.getNode function:  
   
   if (node != null) {
 if (node instanceof InnerNode) {
   LOG.warn("Get node for {} return {}, it's an inner node, " +
   "not a datanode", address, node.getNetworkFullPath());
 } else {
   LOG.debug("Get node for {} return {}", address,
   node.getNetworkFullPath());
   return (DatanodeDetails)node;
 }
   } else {
 LOG.warn("Cannot find node for {}", address);
   }
 return null;
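   
   The added null check effectively drops paths the topology cannot resolve 
before sorting. A compact stream-based equivalent, as a sketch (assuming 
DatanodeDetails implements Node and the usual java.util imports):
   
   List<Node> nodeList = nodes.stream()
       .map(nodeManager::getNode)   // may be null for unknown or inner nodes
       .filter(Objects::nonNull)
       .collect(Collectors.toList());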
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277119)
Time Spent: 1h  (was: 50m)

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.User

[jira] [Work logged] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1787?focusedWorklogId=277115&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277115
 ]

ASF GitHub Bot logged work on HDDS-1787:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:24
Start Date: 16/Jul/19 02:24
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1094: HDDS-1787. 
NPE thrown while trying to find DN closest to client.
URL: https://github.com/apache/hadoop/pull/1094#discussion_r303705636
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
 ##
 @@ -290,7 +290,12 @@ public ScmInfo getScmInfo() throws IOException {
   NodeManager nodeManager = scm.getScmNodeManager();
   Node client = nodeManager.getNode(clientMachine);
  List<Node> nodeList = new ArrayList<>();
-  nodes.stream().forEach(path -> nodeList.add(nodeManager.getNode(path)));
+  nodes.stream().forEach(path -> {
+DatanodeDetails node = nodeManager.getNode(path);
+if (node != null) {
 
 Review comment:
   nodeManager.getNode will return null when it can't find the node in the 
network topology, or when the node it finds is not a leaf node. The WARN 
logs for all these cases are emitted inside the nodeManager.getNode function:  
   
   if (node != null) {
 if (node instanceof InnerNode) {
   LOG.warn("Get node for {} return {}, it's an inner node, " +
   "not a datanode", address, node.getNetworkFullPath());
 } else {
   LOG.debug("Get node for {} return {}", address,
   node.getNetworkFullPath());
   return (DatanodeDetails)node;
 }
   } else {
 LOG.warn("Cannot find node for {}", address);
   }
 return null;
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277115)
Time Spent: 50m  (was: 40m)

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vx

[jira] [Work logged] (HDDS-1793) Acceptance test of ozone-topology cluster is failing

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1793?focusedWorklogId=277112&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277112
 ]

ASF GitHub Bot logged work on HDDS-1793:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:23
Start Date: 16/Jul/19 02:23
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #1096: HDDS-1793. 
Acceptance test of ozone-topology cluster is failing
URL: https://github.com/apache/hadoop/pull/1096#discussion_r303705545
 
 

 ##
 File path: hadoop-ozone/dist/src/main/compose/testlib.sh
 ##
 @@ -28,7 +28,7 @@ mkdir -p "$RESULT_DIR"
 #Should be writeable from the docker containers where user is different.
 chmod ogu+w "$RESULT_DIR"
 
-## @description wait until 3 datanodes are up (or 30 seconds)
+## @description wait until 3 or more datanodes are up (or 30 seconds)
 ## @param the docker-compose file
 wait_for_datanodes(){
 
 Review comment:
   Thanks for the suggestion.  I have replaced the global variable with a 
function parameter in a new commit.  This also gets rid of the shellcheck 
warning.
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277112)
Time Spent: 1h  (was: 50m)

> Acceptance test of ozone-topology cluster is failing
> 
>
> Key: HDDS-1793
> URL: https://issues.apache.org/jira/browse/HDDS-1793
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Elek, Marton
>Assignee: Doroszlai, Attila
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Since HDDS-1586 the smoketests of the ozone-topology compose file are broken:
> {code:java}
> Output:  
> /tmp/smoketest/ozone-topology/result/robot-ozone-topology-ozone-topology-basic-scm.xml
> must specify at least one container source
> Stopping datanode_2 ... 
> Stopping datanode_3 ... 
> Stopping datanode_4 ... 
> Stopping scm... 
> Stopping om ... 
> Stopping datanode_1 ... 
> 
> Stopping datanode_2 ... done
> 
> Stopping datanode_4 ... done
> 
> Stopping datanode_1 ... done
> 
> Stopping datanode_3 ... done
> 
> Stopping scm... done
> 
> Stopping om ... done
> Removing datanode_2 ... 
> Removing datanode_3 ... 
> Removing datanode_4 ... 
> Removing scm... 
> Removing om ... 
> Removing datanode_1 ... 
> 
> Removing datanode_1 ... done
> 
> Removing om ... done
> 
> Removing datanode_3 ... done
> 
> Removing datanode_4 ... done
> 
> Removing datanode_2 ... done
> 
> Removing scm... done
> Removing network ozone-topology_net
> [ ERROR ] Reading XML source 
> '/var/jenkins_home/workspace/ozone/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-topology/result/robot-*.xml'
>  failed: No such file or directory
> Try --help for usage information.
> ERROR: Test execution of 
> /var/jenkins_home/workspace/ozone/hadoop-ozone/dist/target/ozone-0.5.0-SNAPSHOT/compose/ozone-topology
>  is FAILED{code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1787?focusedWorklogId=277114&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277114
 ]

ASF GitHub Bot logged work on HDDS-1787:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:23
Start Date: 16/Jul/19 02:23
Worklog Time Spent: 10m 
  Work Description: ChenSammi commented on pull request #1094: HDDS-1787. 
NPE thrown while trying to find DN closest to client.
URL: https://github.com/apache/hadoop/pull/1094#discussion_r303705636
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/server/SCMBlockProtocolServer.java
 ##
 @@ -290,7 +290,12 @@ public ScmInfo getScmInfo() throws IOException {
   NodeManager nodeManager = scm.getScmNodeManager();
   Node client = nodeManager.getNode(clientMachine);
  List<Node> nodeList = new ArrayList<>();
-  nodes.stream().forEach(path -> nodeList.add(nodeManager.getNode(path)));
+  nodes.stream().forEach(path -> {
+DatanodeDetails node = nodeManager.getNode(path);
+if (node != null) {
 
 Review comment:
   nodeManager.getNode will return null when it can't find the node in the 
network topology, or when the node it finds is not a leaf node. The WARN 
logs for all these cases are emitted inside the nodeManager.getNode function:  
   
   if (node != null) {
 if (node instanceof InnerNode) {
   LOG.warn("Get node for {} return {}, it's an inner node, " +
   "not a datanode", address, node.getNetworkFullPath());
 } else {
   LOG.debug("Get node for {} return {}", address,
   node.getNetworkFullPath());
   return (DatanodeDetails)node;
 }
   } else {
 LOG.warn("Cannot find node for {}", address);
   }
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 277114)
Time Spent: 40m  (was: 0.5h)

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> key=

[jira] [Commented] (HDDS-1787) NPE thrown while trying to find DN closest to client

2019-07-15 Thread Sammi Chen (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885781#comment-16885781
 ] 

Sammi Chen commented on HDDS-1787:
--

Hi [~msingh], thanks for the instructions.  I will try it locally.  I also 
created a unit test which reproduced the issue. 

> NPE thrown while trying to find DN closest to client
> 
>
> Key: HDDS-1787
> URL: https://issues.apache.org/jira/browse/HDDS-1787
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> cc: [~xyao] This seems related to the client side topology changes, not sure 
> if some other Jira is already addressing this.
> {code}
> 2019-07-10 16:45:53,176 WARN  ipc.Server (Server.java:logException(2724)) - 
> IPC Server handler 14 on 35066, call Call#127037 Retry#0 
> org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol.send from 17
> 2.31.116.73:52540
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> 2019-07-10 16:45:53,176 WARN  om.KeyManagerImpl 
> (KeyManagerImpl.java:lambda$sortDatanodeInPipeline$7(2129)) - Unable to sort 
> datanodes based on distance to client, volume=xqoyzocpse, bucket=vxwajaczqh, 
> key=pool-444-thread-7-201077822, client=127.0.0.1, 
> datanodes=[10f15723-45d7-4a0c-8f01-8b101744a110{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}, 7ac2777f-0a5c-4414-9e7f-bfbc47d696ea{ip: 172.31.116.73, host: 
> sid-minichaos.gce.cloudera.com, networkLocation: /default-rack, certSerialId: 
> null}], exception=java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.lambda$sortDatanodes$0(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
> at 
> java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:580)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.sortDatanodes(ScmBlockLocationProtocolServerSideTranslatorPB.java:215)
> at 
> org.apache.hadoop.ozone.protocolPB.ScmBlockLocationProtocolServerSideTranslatorPB.send(ScmBlockLocationProtocolServerSideTranslatorPB.java:124)
> at 
> org.apache.hadoop.hdds.protocol.proto.ScmBlockLocationProtocolProtos$ScmBlockLocationProtocolService$2.callBlockingMethod(ScmBlockLocationProtocolProtos.java:13157)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
> at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
> at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
> at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
> {code}



--

[jira] [Work logged] (HDDS-1802) Add Eviction policy for table cache

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1802?focusedWorklogId=277111&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277111
 ]

ASF GitHub Bot logged work on HDDS-1802:


Author: ASF GitHub Bot
Created on: 16/Jul/19 02:03
Start Date: 16/Jul/19 02:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1100: HDDS-1802. Add 
Eviction policy for table cache.
URL: https://github.com/apache/hadoop/pull/1100#issuecomment-511634741
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 98 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 71 | Maven dependency ordering for branch |
   | +1 | mvninstall | 508 | trunk passed |
   | +1 | compile | 267 | trunk passed |
   | +1 | checkstyle | 74 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 974 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 170 | trunk passed |
   | 0 | spotbugs | 342 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 546 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 33 | Maven dependency ordering for patch |
   | +1 | mvninstall | 486 | the patch passed |
   | +1 | compile | 268 | the patch passed |
   | +1 | javac | 268 | the patch passed |
   | +1 | checkstyle | 75 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 732 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 180 | the patch passed |
   | +1 | findbugs | 607 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 352 | hadoop-hdds in the patch passed. |
   | -1 | unit | 232 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 5893 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.ozone.om.response.volume.TestOMVolumeCreateResponse |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketCreateRequest |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketCreateResponse |
   |   | hadoop.ozone.om.TestKeyDeletingService |
   |   | hadoop.ozone.om.request.key.TestOMAllocateBlockRequest |
   |   | hadoop.ozone.om.TestBucketManagerImpl |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetQuotaResponse |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetOwnerResponse |
   |   | hadoop.ozone.om.request.key.TestOMKeyCreateRequest |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketSetPropertyResponse |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetOwnerRequest |
   |   | hadoop.ozone.om.request.file.TestOMFileCreateRequest |
   |   | hadoop.ozone.om.request.file.TestOMDirectoryCreateRequest |
   |   | hadoop.ozone.om.TestS3BucketManager |
   |   | hadoop.ozone.om.request.key.TestOMKeyCommitRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeDeleteRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetQuotaRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeCreateRequest |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketSetPropertyRequest |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1100/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1100 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux fbba45fe4640 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1411513 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1100/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1100/1/testReport/ |
   | Max. process+thread count | 1096 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1100/1/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.

[jira] [Commented] (HDFS-14547) DirectoryWithQuotaFeature.quota costs additional memory even the storage type quota is not set.

2019-07-15 Thread Jinglun (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885770#comment-16885770
 ] 

Jinglun commented on HDFS-14547:


My bad :( I should have checked the patch more carefully; very sorry about that.

Uploaded patch branch-2.9.003, which removes all the ConstEnumException throws.

> DirectoryWithQuotaFeature.quota costs additional memory even the storage type 
> quota is not set.
> ---
>
> Key: HDFS-14547
> URL: https://issues.apache.org/jira/browse/HDFS-14547
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14547-branch-2.9.001.patch, 
> HDFS-14547-branch-2.9.002.patch, HDFS-14547-branch-2.9.003.patch, 
> HDFS-14547-design, HDFS-14547-patch003-Test Report.pdf, HDFS-14547.001.patch, 
> HDFS-14547.002.patch, HDFS-14547.003.patch, HDFS-14547.004.patch, 
> HDFS-14547.005.patch, HDFS-14547.006.patch, HDFS-14547.007.patch
>
>
> Our XiaoMi HDFS is considering upgrading from 2.6 to 3.1. We notice the 
> storage type quota 'tsCounts' is instantiated to 
> EnumCounters<StorageType>(StorageType.class), so it will cost a long[5] even 
> if we don't have any storage type quota on this inode (only space quota or 
> name quota).
> In our cluster we have many dirs with quota and the NameNode's memory is 
> under pressure, so the additional cost will be a problem.
>  See DirectoryWithQuotaFeature.Builder().
>  
> {code:java}
> class DirectoryWithQuotaFeature$Builder {
>   public Builder() {
>     this.quota = new QuotaCounts.Builder().nameSpace(DEFAULT_NAMESPACE_QUOTA).
>         storageSpace(DEFAULT_STORAGE_SPACE_QUOTA).
>         typeSpaces(DEFAULT_STORAGE_SPACE_QUOTA).build(); // set default value -1.
>     this.usage = new QuotaCounts.Builder().nameSpace(1).build();
>   }
>   public Builder typeSpaces(long val) { // set default value.
>     this.tsCounts.reset(val);
>     return this;
>   }
> }
> class QuotaCounts$Builder {
>   public Builder() {
>     this.nsSsCounts = new EnumCounters<Quota>(Quota.class);
>     this.tsCounts = new EnumCounters<StorageType>(StorageType.class);
>   }
> }
> class EnumCounters<E extends Enum<E>> {
>   public EnumCounters(final Class<E> enumClass) {
>     final E[] enumConstants = enumClass.getEnumConstants();
>     Preconditions.checkNotNull(enumConstants);
>     this.enumClass = enumClass;
>     this.counters = new long[enumConstants.length]; // new a long array here.
>   }
> }
> {code}
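> As a rough, JVM-layout-dependent estimate: the long[5] alone is about 16 
> bytes of array header plus 40 bytes of data, before counting the 
> EnumCounters object itself and the reference to it, so each quota directory 
> pays on the order of 60-80 extra bytes; with millions of quota directories 
> that adds up to hundreds of megabytes of heap.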
> Related to HDFS-14542.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-1802) Add Eviction policy for table cache

2019-07-15 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDDS-1802?focusedWorklogId=277110&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-277110
 ]

ASF GitHub Bot logged work on HDDS-1802:


Author: ASF GitHub Bot
Created on: 16/Jul/19 01:53
Start Date: 16/Jul/19 01:53
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on issue #1099: HDDS-1802. Add 
Eviction policy for table cache.
URL: https://github.com/apache/hadoop/pull/1099#issuecomment-511633015
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 67 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 63 | Maven dependency ordering for branch |
   | +1 | mvninstall | 521 | trunk passed |
   | +1 | compile | 259 | trunk passed |
   | +1 | checkstyle | 63 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 847 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 177 | trunk passed |
   | 0 | spotbugs | 315 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | +1 | findbugs | 515 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 38 | Maven dependency ordering for patch |
   | +1 | mvninstall | 507 | the patch passed |
   | +1 | compile | 274 | the patch passed |
   | +1 | javac | 274 | the patch passed |
   | +1 | checkstyle | 80 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 671 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 154 | the patch passed |
   | +1 | findbugs | 523 | the patch passed |
   ||| _ Other Tests _ |
   | -1 | unit | 325 | hadoop-hdds in the patch failed. |
   | -1 | unit | 231 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 41 | The patch does not generate ASF License warnings. |
   | | | 5531 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware
 |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeCreateRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeDeleteRequest |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketSetPropertyResponse |
   |   | hadoop.ozone.om.request.file.TestOMFileCreateRequest |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetOwnerResponse |
   |   | hadoop.ozone.om.response.bucket.TestOMBucketCreateResponse |
   |   | hadoop.ozone.om.request.key.TestOMKeyCommitRequest |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeCreateResponse |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketCreateRequest |
   |   | hadoop.ozone.om.TestBucketManagerImpl |
   |   | hadoop.ozone.om.TestKeyDeletingService |
   |   | hadoop.ozone.om.request.key.TestOMAllocateBlockRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetQuotaRequest |
   |   | hadoop.ozone.om.request.key.TestOMKeyCreateRequest |
   |   | hadoop.ozone.om.request.volume.TestOMVolumeSetOwnerRequest |
   |   | hadoop.ozone.om.TestS3BucketManager |
   |   | hadoop.ozone.om.request.file.TestOMDirectoryCreateRequest |
   |   | hadoop.ozone.om.request.bucket.TestOMBucketSetPropertyRequest |
   |   | hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse |
   |   | hadoop.ozone.om.response.volume.TestOMVolumeSetQuotaResponse |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=18.09.7 Server=18.09.7 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1099/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1099 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux dc94130e6b0d 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 1411513 |
   | Default Java | 1.8.0_212 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1099/1/artifact/out/patch-unit-hadoop-hdds.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1099/1/artifact/out/patch-unit-hadoop-ozone.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1099/1/testReport/ |
   | Max. process+thread count | 1380 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdds/common hadoop-ozone/ozone-manager

[jira] [Updated] (HDFS-14547) DirectoryWithQuotaFeature.quota costs additional memory even the storage type quota is not set.

2019-07-15 Thread Jinglun (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HDFS-14547:
---
Attachment: HDFS-14547-branch-2.9.003.patch

> DirectoryWithQuotaFeature.quota costs additional memory even the storage type 
> quota is not set.
> ---
>
> Key: HDFS-14547
> URL: https://issues.apache.org/jira/browse/HDFS-14547
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.1.0
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>
> Attachments: HDFS-14547-branch-2.9.001.patch, 
> HDFS-14547-branch-2.9.002.patch, HDFS-14547-branch-2.9.003.patch, 
> HDFS-14547-design, HDFS-14547-patch003-Test Report.pdf, HDFS-14547.001.patch, 
> HDFS-14547.002.patch, HDFS-14547.003.patch, HDFS-14547.004.patch, 
> HDFS-14547.005.patch, HDFS-14547.006.patch, HDFS-14547.007.patch
>
>
> Our XiaoMi HDFS is considering upgrading from 2.6 to 3.1. We notice the 
> storage type quota 'tsCounts' is instantiated to 
> EnumCounters<StorageType>(StorageType.class), so it will cost a long[5] even 
> if we don't have any storage type quota on this inode (only space quota or 
> name quota).
> In our cluster we have many dirs with quota and the NameNode's memory is 
> under pressure, so the additional cost will be a problem.
>  See DirectoryWithQuotaFeature.Builder().
>  
> {code:java}
> class DirectoryWithQuotaFeature$Builder {
>   public Builder() {
>     this.quota = new QuotaCounts.Builder().nameSpace(DEFAULT_NAMESPACE_QUOTA).
>         storageSpace(DEFAULT_STORAGE_SPACE_QUOTA).
>         typeSpaces(DEFAULT_STORAGE_SPACE_QUOTA).build(); // set default value -1.
>     this.usage = new QuotaCounts.Builder().nameSpace(1).build();
>   }
>   public Builder typeSpaces(long val) { // set default value.
>     this.tsCounts.reset(val);
>     return this;
>   }
> }
> class QuotaCounts$Builder {
>   public Builder() {
>     this.nsSsCounts = new EnumCounters<Quota>(Quota.class);
>     this.tsCounts = new EnumCounters<StorageType>(StorageType.class);
>   }
> }
> class EnumCounters<E extends Enum<E>> {
>   public EnumCounters(final Class<E> enumClass) {
>     final E[] enumConstants = enumClass.getEnumConstants();
>     Preconditions.checkNotNull(enumConstants);
>     this.enumClass = enumClass;
>     this.counters = new long[enumConstants.length]; // new a long array here.
>   }
> }
> {code}
> Related to HDFS-14542.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14593) RBF: Implement deletion feature for expired records in State Store

2019-07-15 Thread Takanobu Asanuma (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16885752#comment-16885752
 ] 

Takanobu Asanuma commented on HDFS-14593:
-

Created HDFS-14654 for the flaky test.

> RBF: Implement deletion feature for expired records in State Store
> --
>
> Key: HDFS-14593
> URL: https://issues.apache.org/jira/browse/HDFS-14593
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HDFS-14593.001.patch, HDFS-14593.002.patch, 
> HDFS-14593.003.patch, HDFS-14593.004.patch, HDFS-14593.005.patch, 
> HDFS-14593.006.patch, HDFS-14593.007.patch, HDFS-14593.008.patch, 
> HDFS-14593.009.patch, HDFS-14593.010.patch, HDFS-14593.011.patch
>
>
> Currently, every router record seems to remain in the Router Information store forever.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14654) RBF: TestRouterRpc tests are flaky

2019-07-15 Thread Takanobu Asanuma (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HDFS-14654:

Attachment: error.log

> RBF: TestRouterRpc tests are flaky
> --
>
> Key: HDFS-14654
> URL: https://issues.apache.org/jira/browse/HDFS-14654
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Takanobu Asanuma
>Priority: Major
> Attachments: error.log
>
>
> They sometimes pass and sometimes fail.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


