[jira] [Commented] (HDDS-4221) Support extra large storage capacity server as datanode

2020-09-24 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17201902#comment-17201902
 ] 

Li Cheng commented on HDDS-4221:


There is a discussion about RaftClient sharing one gRPC channel per datanode:

https://issues.apache.org/jira/browse/RATIS-1072


https://issues.apache.org/jira/browse/RATIS-1074

> Support extra large storage capacity server as datanode
> ---
>
> Key: HDDS-4221
> URL: https://issues.apache.org/jira/browse/HDDS-4221
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Sammi Chen
>Priority: Major
> Attachments: image-2020-09-25-12-41-38-113.png
>
>
> There is a customer request to support a high-density storage server as a 
> datanode; an example hardware configuration: 96 cores, 8 * 32G DDR4, 480G 
> SATA SSD, 2 * 25GbE, 60 * 12TB HDD.
> How to fully utilize the hardware resources and unleash their power is a big 
> challenge.
> This umbrella JIRA is created to host all the discussions and next-step 
> actions towards the final goal.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4221) Support extra large storage capacity server as datanode

2020-09-24 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng updated HDDS-4221:
---
Attachment: image-2020-09-25-12-41-38-113.png







[jira] [Commented] (HDDS-4221) Support extra large storage capacity server as datanode

2020-09-24 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17201901#comment-17201901
 ] 

Li Cheng commented on HDDS-4221:


In a cosbench test via S3, it looks like the write performance differs when we 
test against a single bucket versus multiple buckets.

The upper chart is for a single bucket and the lower one is for 4 buckets.

!image-2020-09-25-12-41-38-113.png!







[GitHub] [hadoop-ozone] amaliujia commented on pull request #1445: HDDS-4272. Volume namespace: add usedNamespace and update it when create and delete bucket

2020-09-24 Thread GitBox


amaliujia commented on pull request #1445:
URL: https://github.com/apache/hadoop-ozone/pull/1445#issuecomment-698558307


   Thanks @cxorm and @captainzmc 
   
   Comments are addressed. 
   
   Also created https://issues.apache.org/jira/browse/HDDS-4273 to track the 
work to make `usedNamespace` work with `ozone sh vol info`.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] amaliujia commented on a change in pull request #1445: HDDS-4272. Volume namespace: add usedNamespace and update it when create and delete bucket

2020-09-24 Thread GitBox


amaliujia commented on a change in pull request #1445:
URL: https://github.com/apache/hadoop-ozone/pull/1445#discussion_r494576276



##
File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java
##
@@ -68,15 +69,16 @@
   "builder."})
   private OmVolumeArgs(String adminName, String ownerName, String volume,
   long quotaInBytes, long quotaInCounts, Map metadata,
-  long usedBytes, OmOzoneAclMap aclMap, long creationTime,
-  long modificationTime, long objectID, long updateID) {
+  long usedBytes, long usedNamespace, OmOzoneAclMap aclMap,
+  long creationTime, long modificationTime, long objectID, long updateID) {

Review comment:
   I see. Added `@param usedNamespace` and updated the descriptions of both 
`@param usedNamespace` and `@param usedBytes`.








[GitHub] [hadoop-ozone] amaliujia commented on a change in pull request #1445: HDDS-4272. Volume namespace: add usedNamespace and update it when create and delete bucket

2020-09-24 Thread GitBox


amaliujia commented on a change in pull request #1445:
URL: https://github.com/apache/hadoop-ozone/pull/1445#discussion_r494575654



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
##
@@ -201,6 +201,8 @@ public OMClientResponse validateAndUpdateCache(OzoneManager 
ozoneManager,
   // Add default acls from volume.
   addDefaultAcls(omBucketInfo, omVolumeArgs);
 
+  // quotaAdd used namespace

Review comment:
   Used `update used namespace for volume` for both now.

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketCreateResponse.java
##
@@ -66,6 +77,12 @@ public void addToDBBatch(OMMetadataManager omMetadataManager,
 omBucketInfo.getBucketName());
 omMetadataManager.getBucketTable().putWithBatch(batchOperation,
 dbBucketKey, omBucketInfo);
+// update volume usedNamespace
+if (omVolumeArgs != null) {
+  omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
+  omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
+  omVolumeArgs);

Review comment:
   makes sense  








[GitHub] [hadoop-ozone] amaliujia commented on a change in pull request #1445: HDDS-4272. Volume namespace: add usedNamespace and update it when create and delete bucket

2020-09-24 Thread GitBox


amaliujia commented on a change in pull request #1445:
URL: https://github.com/apache/hadoop-ozone/pull/1445#discussion_r494575930



##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
##
@@ -270,6 +270,7 @@ private OzoneConsts() {
   public static final String SRC_KEY = "srcKey";
   public static final String DST_KEY = "dstKey";
   public static final String USED_BYTES = "usedBytes";
+  public static final String USED_NAMESPACE = "usedNamespace";

Review comment:
   Indeed it is not used. Removed this constant and we can add it in the 
future when there is a need.

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
##
@@ -209,7 +211,7 @@ public OMClientResponse validateAndUpdateCache(OzoneManager 
ozoneManager,
   omResponse.setCreateBucketResponse(
   CreateBucketResponse.newBuilder().build());
   omClientResponse = new OMBucketCreateResponse(omResponse.build(),
-  omBucketInfo);
+  omBucketInfo, omVolumeArgs);

Review comment:
   +1








[GitHub] [hadoop-ozone] hanishakoneru commented on pull request #1298: HDDS-3869. Use different column families for datanode block and metadata

2020-09-24 Thread GitBox


hanishakoneru commented on pull request #1298:
URL: https://github.com/apache/hadoop-ozone/pull/1298#issuecomment-698555281


   @errose28, the unit test failures seem to be related to the patch. Can you 
please check?






[jira] [Updated] (HDDS-4194) Create a script to check AWS S3 compatibility

2020-09-24 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4194:
---
Labels:   (was: pull-request-available)

> Create a script to check AWS S3 compatibility
> -
>
> Key: HDDS-4194
> URL: https://issues.apache.org/jira/browse/HDDS-4194
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
> Fix For: 1.1.0
>
>
> Ozone S3G implements the REST interface of the AWS S3 protocol. Our robot-test 
> based scripts check whether it's possible to use Ozone S3 with the AWS client tool.
> But occasionally we should verify that our robot test definitions are valid: 
> the robot tests should be executed against a real AWS endpoint and bucket(s), and 
> all the test cases should pass.
> This patch provides a simple shell script to make this cross-check easier.






[jira] [Resolved] (HDDS-4194) Create a script to check AWS S3 compatibility

2020-09-24 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-4194.

Fix Version/s: 1.1.0
   Resolution: Implemented







[GitHub] [hadoop-ozone] adoroszlai merged pull request #1383: HDDS-4194. Create a script to check AWS S3 compatibility

2020-09-24 Thread GitBox


adoroszlai merged pull request #1383:
URL: https://github.com/apache/hadoop-ozone/pull/1383


   






[jira] [Created] (HDDS-4273) `usedNamespace` works by `ozone sh vol info`

2020-09-24 Thread Rui Wang (Jira)
Rui Wang created HDDS-4273:
--

 Summary: `usedNamespace` works by `ozone sh vol info`
 Key: HDDS-4273
 URL: https://issues.apache.org/jira/browse/HDDS-4273
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Rui Wang
Assignee: Rui Wang









[jira] [Updated] (HDDS-4120) Implement cleanup service for OM open key table

2020-09-24 Thread Ethan Rose (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Rose updated HDDS-4120:
-
Description: 
Currently, uncommitted keys in the OM open key table remain there until they 
are committed. A background service should periodically run to remove open keys 
and their associated blocks from memory if the key is past a certain age. This 
value will be configurable with the existing ozone.open.key.expire.threshold 
setting, which currently has a default value of 1 day. Any uncommitted key in 
the open key table older than this will be marked for deletion, and cleaned up 
with the existing OM key deleting service. A configurable value should limit 
the number of open keys that can be removed in one run of the service.

[Design 
Document|https://docs.google.com/document/d/1pEczSN8O0T60UMHF2GYX0PJbfzpaHPTbsEkOoBdBwQE/edit?usp=sharing]
 

 

  was:
Currently, uncommitted keys in the OM open key table remain there until they 
are committed. A background service should periodically run to remove open keys 
and their associated blocks from memory if the key is past a certain age. This 
value will be configurable with the existing ozone.open.key.expire.threshold 
setting, which currently has a default value of 1 day. Any uncommitted key in 
the open key table older than this will be marked for deletion, and cleaned up 
with the existing OM key deleting service. A configurable value should limit 
the number of open keys that can be removed in one run of the service.

 

[Design 
Document|https://docs.google.com/document/d/1UgXA27NGBMmTfvrImYgLQtiCfHqbFGDwv0JKv3pJH6E/edit?usp=sharing]


> Implement cleanup service for OM open key table
> ---
>
> Key: HDDS-4120
> URL: https://issues.apache.org/jira/browse/HDDS-4120
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>  Components: OM HA
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Major
>
> Currently, uncommitted keys in the OM open key table remain there until they 
> are committed. A background service should periodically run to remove open 
> keys and their associated blocks from memory if the key is past a certain 
> age. This value will be configurable with the existing 
> ozone.open.key.expire.threshold setting, which currently has a default value 
> of 1 day. Any uncommitted key in the open key table older than this will be 
> marked for deletion, and cleaned up with the existing OM key deleting 
> service. A configurable value should limit the number of open keys that can 
> be removed in one run of the service.
> [Design 
> Document|https://docs.google.com/document/d/1pEczSN8O0T60UMHF2GYX0PJbfzpaHPTbsEkOoBdBwQE/edit?usp=sharing]
>  
>  
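A rough sketch of the selection step the description implies: pick open keys older than the expiry threshold, capped per run. The threshold mirrors the real ozone.open.key.expire.threshold setting mentioned above, but the class, method, and parameter names below are hypothetical, not the actual OM service API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Illustrative sketch: select expired open keys, bounded per run.
public class OpenKeyExpiry {
    // expireThresholdMs mirrors ozone.open.key.expire.threshold (default
    // 1 day); maxKeysPerRun is the per-run cap the description calls for.
    public static List<String> selectExpired(Map<String, Long> openKeyCreationMs,
                                             long nowMs,
                                             long expireThresholdMs,
                                             int maxKeysPerRun) {
        List<String> expired = new ArrayList<>();
        for (Map.Entry<String, Long> e : openKeyCreationMs.entrySet()) {
            if (expired.size() >= maxKeysPerRun) {
                break; // bound the work done in one run of the service
            }
            if (nowMs - e.getValue() > expireThresholdMs) {
                expired.add(e.getKey()); // old enough: mark for deletion
            }
        }
        return expired;
    }
}
```

The selected keys would then be handed to the existing OM key deleting service, per the description.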






[jira] [Updated] (HDDS-4270) Add more reusable byteman scripts to debug ofs/o3fs performance

2020-09-24 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4270:
---
Labels:   (was: pull-request-available)

> Add more reusable byteman scripts to debug ofs/o3fs performance
> ---
>
> Key: HDDS-4270
> URL: https://issues.apache.org/jira/browse/HDDS-4270
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
> Fix For: 1.1.0
>
>
> I am using https://byteman.jboss.org to debug the performance of spark + 
> teragen with different scripts. Some byteman scripts are already shared by 
> HDDS-4095 or HDDS-342, but it seems to be good practice to share the newer 
> scripts to make it possible to reproduce performance problems.
> For using byteman with Ozone, see this video:
> https://www.youtube.com/watch?v=_4eYsH8F50E=PLCaV-jpCBO8U_WqyySszmbmnL-dhlzF6o=5






[jira] [Resolved] (HDDS-4270) Add more reusable byteman scripts to debug ofs/o3fs performance

2020-09-24 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-4270.

Fix Version/s: 1.1.0
   Resolution: Implemented







[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1443: HDDS-4270. Add more reusable byteman scripts to debug ofs/o3fs performance

2020-09-24 Thread GitBox


adoroszlai commented on pull request #1443:
URL: https://github.com/apache/hadoop-ozone/pull/1443#issuecomment-698385899


   Thanks @elek for updating the patch.






[GitHub] [hadoop-ozone] adoroszlai merged pull request #1443: HDDS-4270. Add more reusable byteman scripts to debug ofs/o3fs performance

2020-09-24 Thread GitBox


adoroszlai merged pull request #1443:
URL: https://github.com/apache/hadoop-ozone/pull/1443


   






[GitHub] [hadoop-ozone] captainzmc commented on a change in pull request #1445: HDDS-4272. Volume namespace: add usedNamespace and update it when create and delete bucket

2020-09-24 Thread GitBox


captainzmc commented on a change in pull request #1445:
URL: https://github.com/apache/hadoop-ozone/pull/1445#discussion_r494297018



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketDeleteResponse.java
##
@@ -64,6 +76,12 @@ public void addToDBBatch(OMMetadataManager omMetadataManager,
 omMetadataManager.getBucketKey(volumeName, bucketName);
 omMetadataManager.getBucketTable().deleteWithBatch(batchOperation,
 dbBucketKey);
+// update volume usedNamespace
+if (omVolumeArgs != null) {
+  omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
+  omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
+  omVolumeArgs);

Review comment:
   indent








[GitHub] [hadoop-ozone] elek commented on a change in pull request #1443: HDDS-4270. Add more reusable byteman scripts to debug ofs/o3fs performance

2020-09-24 Thread GitBox


elek commented on a change in pull request #1443:
URL: https://github.com/apache/hadoop-ozone/pull/1443#discussion_r494239916



##
File path: dev-support/byteman/watchforcommit_all.btm
##
@@ -0,0 +1,47 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Measure overall time spent in Watch for commit calls
+
+RULE FileSystem.close
+CLASS org.apache.hadoop.fs.FileSystem
+METHOD close
+IF TRUE
+DO
+  System.out.println("Closing file system instance: " + 
System.identityHashCode($0));
+  System.out.println("   watchForCommit.call: " + 
readCounter("watchForCommit.call"));
+  System.out.println("   watchForCommit.allTime: " + 
readCounter("watchForCommit.allTime"))
+
+ENDRULE
+
+RULE BlockOutputStream.watchForCommit.Entry

Review comment:
   Yes, the chance is low, but let me add the same prefix here, just to 
be on the safe side...








[jira] [Updated] (HDDS-2660) Create insight point for datanode container protocol

2020-09-24 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2660:
---
Labels:   (was: pull-request-available)

> Create insight point for datanode container protocol
> 
>
> Key: HDDS-2660
> URL: https://issues.apache.org/jira/browse/HDDS-2660
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Marton Elek
>Priority: Major
> Fix For: 1.1.0
>
>
> The goal of this task is to create a new insight point for the datanode 
> container protocol ({{HddsDispatcher}}) to be able to debug 
> {{client<->datanode}} communication.






[jira] [Resolved] (HDDS-2660) Create insight point for datanode container protocol

2020-09-24 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-2660.

Fix Version/s: 1.1.0
   Resolution: Implemented







[GitHub] [hadoop-ozone] adoroszlai merged pull request #1272: HDDS-2660. Create insight point for datanode container protocol

2020-09-24 Thread GitBox


adoroszlai merged pull request #1272:
URL: https://github.com/apache/hadoop-ozone/pull/1272


   






[jira] [Resolved] (HDDS-3297) TestOzoneClientKeyGenerator is flaky

2020-09-24 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDDS-3297.
---
Fix Version/s: 1.1.0
 Assignee: Aryan Gupta
   Resolution: Fixed

> TestOzoneClientKeyGenerator is flaky
> 
>
> Key: HDDS-3297
> URL: https://issues.apache.org/jira/browse/HDDS-3297
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Marton Elek
>Assignee: Aryan Gupta
>Priority: Critical
>  Labels: TriagePending, flaky-test, ozone-flaky-test, 
> pull-request-available
> Fix For: 1.1.0
>
> Attachments: 
> org.apache.hadoop.ozone.freon.TestOzoneClientKeyGenerator-output.txt
>
>
> Sometimes it's hanging and stopped after a timeout.






[GitHub] [hadoop-ozone] bshashikant merged pull request #1442: HDDS-3297. Enable TestOzoneClientKeyGenerator.

2020-09-24 Thread GitBox


bshashikant merged pull request #1442:
URL: https://github.com/apache/hadoop-ozone/pull/1442


   






[GitHub] [hadoop-ozone] sodonnel commented on pull request #1338: HDDS-4023. Delete closed container after all blocks have been deleted.

2020-09-24 Thread GitBox


sodonnel commented on pull request #1338:
URL: https://github.com/apache/hadoop-ozone/pull/1338#issuecomment-698259315


   Sorry for the slow reply on this. I have been caught up on some other things.
   
   > After a second thought, deleting the container record in SCM DB 
immediately while keep it in memory maybe a better and clean choice. So if 
there is stale container replica, it can be deleted based on in memory 
information.
   
   I think this is a good enough idea for now. If SCM is up for a very long 
time, perhaps in the future we will want to add a thread to clear all the 
in-memory DELETED containers. One small concern: if a container goes DELETED 
and then SCM is restarted soon after, and a DN is then restarted and reports a 
stale replica, it will just be seen as an unknown container. The default 
position there is to log a warning; the config 
hdds.scm.unknown-container.action controls this. This is all an edge case, as 
most of the time all DNs should be up anyway.
   
   I left just one comment on a suggested refactor in the container report 
handler, when dealing with replicas from a DELETED container.
   
   Could you also add a test in TestContainerReportHandler to check the logic 
around deleting a replica from a DELETED container?
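The edge case discussed in this comment can be sketched as a small decision function. The DELETED state and the warn-on-unknown default mirror the discussion, and hdds.scm.unknown-container.action is the config named above, but the class, method, and return values below are illustrative assumptions, not the actual SCM code.

```java
import java.util.Map;

// Illustrative sketch of the replica-report decision discussed above.
public class ReplicaReportDecision {
    enum State { OPEN, CLOSING, CLOSED, DELETED }

    // Returns the action for a reported replica of containerId.
    public static String decide(Map<Long, State> inMemoryContainers,
                                long containerId,
                                String unknownContainerAction) {
        State state = inMemoryContainers.get(containerId);
        if (state == null) {
            // Record already purged (e.g. SCM restarted after DELETED):
            // the replica looks like an unknown container.
            return "WARN".equals(unknownContainerAction)
                ? "log-warning" : "delete-replica";
        }
        if (state == State.DELETED) {
            // Stale replica of a container known to be deleted:
            // safe to tell the DN to remove it.
            return "delete-replica";
        }
        return "process-normally";
    }
}
```

This makes the trade-off concrete: keeping DELETED records in memory lets the first branch be avoided, while purging them shifts stale replicas into the unknown-container path.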






[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1445: HDDS-4272. Volume namespace: add usedNamespace and update it when create and delete bucket

2020-09-24 Thread GitBox


cxorm commented on a change in pull request #1445:
URL: https://github.com/apache/hadoop-ozone/pull/1445#discussion_r494095636



##
File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmVolumeArgs.java
##
@@ -68,15 +69,16 @@
   "builder."})
   private OmVolumeArgs(String adminName, String ownerName, String volume,
   long quotaInBytes, long quotaInCounts, Map metadata,
-  long usedBytes, OmOzoneAclMap aclMap, long creationTime,
-  long modificationTime, long objectID, long updateID) {
+  long usedBytes, long usedNamespace, OmOzoneAclMap aclMap,
+  long creationTime, long modificationTime, long objectID, long updateID) {

Review comment:
   How about we update the comment of this constructor by adding `@param 
usedNamespace - volume quota usage in counts`?
   
   The description of the parameter is just my suggestion (IMHO); feel free to 
correct it if you have a better idea. 


##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
##
@@ -270,6 +270,7 @@ private OzoneConsts() {
   public static final String SRC_KEY = "srcKey";
   public static final String DST_KEY = "dstKey";
   public static final String USED_BYTES = "usedBytes";
+  public static final String USED_NAMESPACE = "usedNamespace";

Review comment:
   It seems this variable is unused - could you let me know its usage if I'm 
missing something?

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketDeleteRequest.java
##
@@ -134,6 +135,12 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
   omResponse.setDeleteBucketResponse(
   DeleteBucketResponse.newBuilder().build());
 
+  // update used namespace for volumn

Review comment:
   ```suggestion
 // update used namespace for volume
   ```

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
##
@@ -209,7 +211,7 @@ public OMClientResponse validateAndUpdateCache(OzoneManager 
ozoneManager,
   omResponse.setCreateBucketResponse(
   CreateBucketResponse.newBuilder().build());
   omClientResponse = new OMBucketCreateResponse(omResponse.build(),
-  omBucketInfo);
+  omBucketInfo, omVolumeArgs);

Review comment:
   ```suggestion
 omClientResponse = new OMBucketCreateResponse(omResponse.build(),
 omBucketInfo, omVolumeArgs);
   ```
   Just a nit: could we use less indentation here, to keep consistent with the 
rest of this file?

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
##
@@ -201,6 +201,8 @@ public OMClientResponse validateAndUpdateCache(OzoneManager 
ozoneManager,
   // Add default acls from volume.
   addDefaultAcls(omBucketInfo, omVolumeArgs);
 
+  // quotaAdd used namespace

Review comment:
   How about we update this comment to `add quota of used namespace`, or use 
the same wording as `update used namespace for volume` in 
`OMBucketDeleteRequest`?

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/bucket/OMBucketCreateResponse.java
##
@@ -66,6 +77,12 @@ public void addToDBBatch(OMMetadataManager omMetadataManager,
 omBucketInfo.getBucketName());
 omMetadataManager.getBucketTable().putWithBatch(batchOperation,
 dbBucketKey, omBucketInfo);
+// update volume usedNamespace
+if (omVolumeArgs != null) {
+  omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
+  omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
+  omVolumeArgs);

Review comment:
   ```suggestion
 omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
 omVolumeArgs);
   ```
   Just nits.








[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #1338: HDDS-4023. Delete closed container after all blocks have been deleted.

2020-09-24 Thread GitBox


sodonnel commented on a change in pull request #1338:
URL: https://github.com/apache/hadoop-ozone/pull/1338#discussion_r494180772



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/AbstractContainerReportHandler.java
##
@@ -96,13 +101,24 @@ protected void processContainerReplica(final 
DatanodeDetails datanodeDetails,
*/
   private void updateContainerStats(final DatanodeDetails datanodeDetails,
 final ContainerID containerId,
-final ContainerReplicaProto replicaProto)
+final ContainerReplicaProto replicaProto,
+final EventPublisher publisher)
   throws ContainerNotFoundException {
+final ContainerInfo containerInfo = containerManager
+.getContainer(containerId);
 
-if (isHealthy(replicaProto::getState)) {
-  final ContainerInfo containerInfo = containerManager
-  .getContainer(containerId);
+if (containerInfo.getState() == HddsProtos.LifeCycleState.DELETED) {

Review comment:
   It doesn't seem correct to put the logic to delete the replica for the 
DELETED container inside `updateContainerStats`.
   
   In `ContainerReportHandler#processContainerReplicas(..)` there is logic to 
delete an unknown container in the exception handler.
   
   Could we extract this into a new method which is called from the exception 
handler? Then, in `AbstractContainerReportHandler#updateContainerState(...)`, 
handle the containers which should be deleted in the "case DELETED" branch of 
the switch statement, calling that same extracted method - that way the logic 
to form the DeleteContainer command will be the same for both paths. It also 
seems more logical to put the delete inside `updateContainerState` rather than 
`updateContainerStats`.
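   To make the suggested refactor concrete, here is a minimal, self-contained 
sketch. These are not Ozone's real classes: `LifeCycleState`, `deleteReplica` 
and the command list are simplified stand-ins, used only to show both the 
unknown-container path and the "case DELETED" branch routing through one 
extracted helper:

```java
import java.util.ArrayList;
import java.util.List;

public class DeletedReplicaSketch {
  enum LifeCycleState { OPEN, CLOSED, DELETED }

  // Stands in for the SCM command queue.
  static final List<String> commands = new ArrayList<>();

  // Extracted helper: forms the DeleteContainer command for a replica.
  // Intended to be called both for unknown containers (exception path)
  // and for replicas of DELETED containers (updateContainerState path),
  // so the command is built identically in both cases.
  static void deleteReplica(long containerId, String datanode) {
    commands.add("DeleteContainer(" + containerId + ") -> " + datanode);
  }

  static void updateContainerState(long containerId, String datanode,
                                   LifeCycleState state) {
    switch (state) {
      case DELETED:
        // "case DELETED" branch reuses the extracted helper.
        deleteReplica(containerId, datanode);
        break;
      default:
        // Other state transitions are handled elsewhere.
        break;
    }
  }

  public static void main(String[] args) {
    updateContainerState(42L, "dn-1", LifeCycleState.DELETED);
    updateContainerState(43L, "dn-2", LifeCycleState.CLOSED);
    // Only the DELETED container produces a delete command.
    System.out.println(commands);
  }
}
```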








[GitHub] [hadoop-ozone] captainzmc commented on pull request #1434: HDDS-3727. Volume space: check quotaUsageInBytes when write key.

2020-09-24 Thread GitBox


captainzmc commented on pull request #1434:
URL: https://github.com/apache/hadoop-ozone/pull/1434#issuecomment-698235132


   Thanks for @ChenSammi's review. The review issues have been fixed. Could 
you help take another look?






[GitHub] [hadoop-ozone] captainzmc commented on pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-24 Thread GitBox


captainzmc commented on pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#issuecomment-698132943


Hi all, status update: I have rebased the PR and resolved the conflicts. This 
PR can be reviewed again.


