[GitHub] [ozone] bharatviswa504 commented on a change in pull request #1489: HDDS-4308. Fix issue with quota update

2020-10-27 Thread GitBox


bharatviswa504 commented on a change in pull request #1489:
URL: https://github.com/apache/ozone/pull/1489#discussion_r513169793



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
##
@@ -597,27 +596,40 @@ protected boolean checkDirectoryAlreadyExists(String volumeName,
   }
 
   /**
-   * Return volume info for the specified volume. If the volume does not
-   * exist, returns {@code null}.
+   * Return volume info with updated usedBytes for the specified volume.
    * @param omMetadataManager
    * @param volume
+   * @param updateUsage
    * @return OmVolumeArgs
    * @throws IOException
    */
-  protected OmVolumeArgs getVolumeInfo(OMMetadataManager omMetadataManager,
-      String volume) {
-
-    OmVolumeArgs volumeArgs = null;
-
-    CacheValue<OmVolumeArgs> value =
-        omMetadataManager.getVolumeTable().getCacheValue(
-            new CacheKey<>(omMetadataManager.getVolumeKey(volume)));
-
-    if (value != null) {
-      volumeArgs = value.getCacheValue();
-    }
+  protected static synchronized OmVolumeArgs syncUpdateUsage(
+      OMMetadataManager omMetadataManager, String volume, long updateUsage) {
+    OmVolumeArgs volumeArgs = omMetadataManager.getVolumeTable().getCacheValue(
+        new CacheKey<>(omMetadataManager.getVolumeKey(volume)))
+        .getCacheValue();
+    volumeArgs.getUsedBytes().add(updateUsage);
+    return volumeArgs.copyObject();
+  }
 
-    return volumeArgs;
+  /**
+   * Return volume info with updated usedBytes for the specified volume, and
+   * check the volume's usedBytes quota.
+   * @param omMetadataManager
+   * @param volume
+   * @param updateUsage
+   * @return OmVolumeArgs
+   * @throws IOException
+   */
+  protected static synchronized OmVolumeArgs syncCheckAndUpdateUsage(

Review comment:
   Do we need LongAdder still?
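   For context, a minimal sketch of the trade-off behind this question (illustrative names, not the actual OmVolumeArgs API): `LongAdder` buys lock-free concurrent increments, which becomes redundant once every update is funnelled through a `synchronized` method.

   ```java
   import java.util.concurrent.atomic.LongAdder;

   class UsageCounter {
     // Option A: LongAdder lets many threads add without any lock.
     private final LongAdder usedBytes = new LongAdder();

     void addLockFree(long delta) {
       usedBytes.add(delta);
     }

     // Option B: once all reads and writes go through synchronized
     // methods, a plain long is sufficient and LongAdder adds nothing.
     private long usedBytesPlain;

     synchronized void addSynchronized(long delta) {
       usedBytesPlain += delta;
     }
   }
   ```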








[GitHub] [ozone] bharatviswa504 commented on a change in pull request #1489: HDDS-4308. Fix issue with quota update

2020-10-27 Thread GitBox


bharatviswa504 commented on a change in pull request #1489:
URL: https://github.com/apache/ozone/pull/1489#discussion_r513169229



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
##
@@ -597,27 +596,40 @@ protected boolean checkDirectoryAlreadyExists(String volumeName,
   }
 
   /**
-   * Return volume info for the specified volume. If the volume does not
-   * exist, returns {@code null}.
+   * Return volume info with updated usedBytes for the specified volume.
    * @param omMetadataManager
    * @param volume
+   * @param updateUsage
    * @return OmVolumeArgs
    * @throws IOException
    */
-  protected OmVolumeArgs getVolumeInfo(OMMetadataManager omMetadataManager,
-      String volume) {
-
-    OmVolumeArgs volumeArgs = null;
-
-    CacheValue<OmVolumeArgs> value =
-        omMetadataManager.getVolumeTable().getCacheValue(
-            new CacheKey<>(omMetadataManager.getVolumeKey(volume)));
-
-    if (value != null) {
-      volumeArgs = value.getCacheValue();
-    }
+  protected static synchronized OmVolumeArgs syncUpdateUsage(
+      OMMetadataManager omMetadataManager, String volume, long updateUsage) {
+    OmVolumeArgs volumeArgs = omMetadataManager.getVolumeTable().getCacheValue(
+        new CacheKey<>(omMetadataManager.getVolumeKey(volume)))
+        .getCacheValue();
+    volumeArgs.getUsedBytes().add(updateUsage);
+    return volumeArgs.copyObject();
+  }
 
-    return volumeArgs;
+  /**
+   * Return volume info with updated usedBytes for the specified volume, and
+   * check the volume's usedBytes quota.
+   * @param omMetadataManager
+   * @param volume
+   * @param updateUsage
+   * @return OmVolumeArgs
+   * @throws IOException
+   */
+  protected static synchronized OmVolumeArgs syncCheckAndUpdateUsage(

Review comment:
   One part I have not understood: how does this help here without the volume lock?
   
   Other threads can still be updating volumeArgs while this update is happening, or reading omVolumeArgs concurrently.
   








[GitHub] [ozone] bharatviswa504 commented on a change in pull request #1498: HDDS-4339. Allow AWSSignatureProcessor init when aws signature is absent.

2020-10-27 Thread GitBox


bharatviswa504 commented on a change in pull request #1498:
URL: https://github.com/apache/ozone/pull/1498#discussion_r513166313



##
File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java
##
@@ -80,9 +81,15 @@ private OzoneClient getClient(OzoneConfiguration config) throws IOException {
       UserGroupInformation remoteUser =

Review comment:
   This will throw an NPE when awsAccessId is null; do we need to move validateAccessId before the remoteUser creation?
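   A self-contained sketch of the reordering this comment suggests (names are illustrative; the real check lives in OzoneClientProducer): validate the access id before handing it to UGI, so an absent signature surfaces as an auth error instead of an NPE.

   ```java
   final class AccessIdGuard {
     // Hypothetical stand-in for the validateAccessId helper in the PR.
     static String validateAccessId(String awsAccessId) {
       if (awsAccessId == null || awsAccessId.isEmpty()) {
         throw new IllegalArgumentException("AWS access id is absent");
       }
       return awsAccessId;
     }

     public static void main(String[] args) {
       // Validation runs first, so a createRemoteUser-style call never
       // receives a null principal.
       String principal = validateAccessId("s3-user");
       System.out.println("creating remote user for: " + principal);
     }
   }
   ```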
   








[jira] [Created] (HDDS-4396) Ozone TLP - update documents

2020-10-27 Thread Sammi Chen (Jira)
Sammi Chen created HDDS-4396:


 Summary: Ozone TLP - update documents
 Key: HDDS-4396
 URL: https://issues.apache.org/jira/browse/HDDS-4396
 Project: Hadoop Distributed Data Store
  Issue Type: Task
Reporter: Sammi Chen


Ozone has been approved to become an Apache TLP.

The Ozone documents need to be updated, changing "Apache Hadoop Ozone" to 
"Apache Ozone".






[jira] [Updated] (HDDS-4396) Ozone TLP - update documents

2020-10-27 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HDDS-4396:
-
Target Version/s: 1.1.0

> Ozone TLP - update documents
> 
>
> Key: HDDS-4396
> URL: https://issues.apache.org/jira/browse/HDDS-4396
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Sammi Chen
>Priority: Major
>
> Ozone has been approved to become an Apache TLP.
> The Ozone documents need to be updated, changing "Apache Hadoop Ozone" to 
> "Apache Ozone".






[GitHub] [ozone] captainzmc commented on pull request #1497: HDDS-4345. Replace the deprecated Lock method

2020-10-27 Thread GitBox


captainzmc commented on pull request #1497:
URL: https://github.com/apache/ozone/pull/1497#issuecomment-717654070


   Thanks for @xiaoyuyao’s feedback. The issues have been fixed.






[GitHub] [ozone] captainzmc commented on a change in pull request #1489: HDDS-4308. Fix issue with quota update

2020-10-27 Thread GitBox


captainzmc commented on a change in pull request #1489:
URL: https://github.com/apache/ozone/pull/1489#discussion_r513140628



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
##
@@ -597,27 +596,40 @@ protected boolean checkDirectoryAlreadyExists(String volumeName,
   }
 
   /**
-   * Return volume info for the specified volume. If the volume does not
-   * exist, returns {@code null}.
+   * Return volume info with updated usedBytes for the specified volume.
    * @param omMetadataManager
    * @param volume
+   * @param updateUsage
    * @return OmVolumeArgs
    * @throws IOException
    */
-  protected OmVolumeArgs getVolumeInfo(OMMetadataManager omMetadataManager,
-      String volume) {
-
-    OmVolumeArgs volumeArgs = null;
-
-    CacheValue<OmVolumeArgs> value =
-        omMetadataManager.getVolumeTable().getCacheValue(
-            new CacheKey<>(omMetadataManager.getVolumeKey(volume)));
-
-    if (value != null) {
-      volumeArgs = value.getCacheValue();
-    }
+  protected static synchronized OmVolumeArgs syncUpdateUsage(
+      OMMetadataManager omMetadataManager, String volume, long updateUsage) {
+    OmVolumeArgs volumeArgs = omMetadataManager.getVolumeTable().getCacheValue(
+        new CacheKey<>(omMetadataManager.getVolumeKey(volume)))
+        .getCacheValue();
+    volumeArgs.getUsedBytes().add(updateUsage);
+    return volumeArgs.copyObject();
+  }
 
-    return volumeArgs;
+  /**
+   * Return volume info with updated usedBytes for the specified volume, and
+   * check the volume's usedBytes quota.
+   * @param omMetadataManager
+   * @param volume
+   * @param updateUsage
+   * @return OmVolumeArgs
+   * @throws IOException
+   */
+  protected static synchronized OmVolumeArgs syncCheckAndUpdateUsage(

Review comment:
   Thanks for @linyiqun's review.
   Making getVolumeInfo synchronized and returning a copyObject would not suffice on its own. There is only one instance of volumeArgs in memory, and we need to update volumeArgs atomically inside getVolumeInfo and only then take the value of the copyObject.
   So I made a modification based on your suggestion and moved the update of volumeArgs's usedBytes into the getVolumeInfo method.
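   A minimal sketch of the invariant described here, under the assumption (taken from the diff) that usedBytes is a LongAdder on a single cached instance: the add and the snapshot must sit under one lock, or a concurrent add can fall between them.

   ```java
   import java.util.concurrent.atomic.LongAdder;

   final class VolumeUsage {
     final LongAdder usedBytes = new LongAdder();

     VolumeUsage copyObject() {
       VolumeUsage copy = new VolumeUsage();
       copy.usedBytes.add(usedBytes.sum());
       return copy;
     }
   }

   final class QuotaUpdate {
     // Single shared instance, standing in for the table-cache entry.
     private static final VolumeUsage CACHED = new VolumeUsage();

     // Update and snapshot under the same lock; releasing the lock between
     // add() and copyObject() would let another writer's bytes leak into
     // (or vanish from) the snapshot that gets persisted.
     static synchronized VolumeUsage syncUpdateUsage(long delta) {
       CACHED.usedBytes.add(delta);
       return CACHED.copyObject();
     }
   }
   ```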








[jira] [Commented] (HDDS-4333) Ozone supports append operation

2020-10-27 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221752#comment-17221752
 ] 

Arpit Agarwal commented on HDDS-4333:
-

I don't think we should implement the append/truncate features right now. I 
think we should focus on stability and performance of the write pipeline and 
recent features like multi-RAFT.

> Ozone supports append operation
> ---
>
> Key: HDDS-4333
> URL: https://issues.apache.org/jira/browse/HDDS-4333
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: mingchao zhao
>Priority: Major
>
> Currently HDDS does not support modifying data via append operations. We 
> have this need in production, so we need HDDS to support this feature.






[GitHub] [hadoop-ozone] arp7 commented on a change in pull request #1515: HDDS-4373. [Design] Ozone support append operation

2020-10-27 Thread GitBox


arp7 commented on a change in pull request #1515:
URL: https://github.com/apache/hadoop-ozone/pull/1515#discussion_r513026161



##
File path: hadoop-hdds/docs/content/design/append.md
##
@@ -0,0 +1,87 @@
+---
+title: Append
+summary: Append to the existing key.
+date: 2020-10-22
+jira: HDDS-4333
+status: implementing
+author: captainzmc
+---
+
+
+## Introduction
+This is a proposal to introduce an append operation for Ozone, which will allow writing data at the tail of an existing file.
+
+## Goals
+ OzoneClient and the OzoneFS client support the append operation.
+ While the original key is being appended to, the key remains readable by other clients.
+ After the OutputStream of a new append operation calls close, other clients can read the newly appended content. This ensures consistency of read operations.
+## Non-goals
+The hflush operation is not within the scope of this design. HDDS-4353 was created to discuss it.
+## Related jira
+https://issues.apache.org/jira/browse/HDDS-4333
+## Implementation
+### Background conditions:
+We can't currently open a closed container. If append generated a new block every time, the key could end up with many blocks smaller than 256 MB (the default block size). Too many blocks make the DB larger and also hurt read performance.
+
+### Solution:
+When an append occurs, determine whether the container holding the last block is closed. If it's closed, we create a new block; if it's open, we append data to the last block. This avoids creating new blocks as much as possible.
+
+### Request process:
+![avatar](doc-image/append.png)
+
+ 1. Client executes the append key operation against OM.
+
+ 2. OM checks whether the key is in appendTable; if so, the key is being appended by another client and we cannot append it at this point. If not, OM adds the key to appendTable.
+
+ 3. Check whether the last block of the key belongs to a closed container; if so, ask SCM to allocate a new block; if not, use the current block directly.

Review comment:
   Blocks must be immutable, we should never modify the contents of a block.








[GitHub] [hadoop-ozone] arp7 commented on a change in pull request #1515: HDDS-4373. [Design] Ozone support append operation

2020-10-27 Thread GitBox


arp7 commented on a change in pull request #1515:
URL: https://github.com/apache/hadoop-ozone/pull/1515#discussion_r513025813



##
File path: hadoop-hdds/docs/content/design/append.md
##
@@ -0,0 +1,87 @@
+---
+title: Append
+summary: Append to the existing key.
+date: 2020-10-22
+jira: HDDS-4333
+status: implementing
+author: captainzmc
+---
+
+
+## Introduction
+This is a proposal to introduce an append operation for Ozone, which will allow writing data at the tail of an existing file.
+
+## Goals
+ OzoneClient and the OzoneFS client support the append operation.
+ While the original key is being appended to, the key remains readable by other clients.
+ After the OutputStream of a new append operation calls close, other clients can read the newly appended content. This ensures consistency of read operations.
+## Non-goals
+The hflush operation is not within the scope of this design. HDDS-4353 was created to discuss it.
+## Related jira
+https://issues.apache.org/jira/browse/HDDS-4333
+## Implementation
+### Background conditions:
+We can't currently open a closed container. If append generated a new block every time, the key could end up with many blocks smaller than 256 MB (the default block size). Too many blocks make the DB larger and also hurt read performance.
+
+### Solution:
+When an append occurs, determine whether the container holding the last block is closed. If it's closed, we create a new block; if it's open, we append data to the last block. This avoids creating new blocks as much as possible.
+
+### Request process:
+![avatar](doc-image/append.png)
+
+ 1. Client executes the append key operation against OM.
+
+ 2. OM checks whether the key is in appendTable; if so, the key is being appended by another client and we cannot append it at this point. If not, OM adds the key to appendTable.

Review comment:
   Why not have the last append win?
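   For reference, a minimal sketch of the exclusive-append check (step 2 of the quoted design) that this question pushes back on, using an in-memory map as a stand-in for the proposed appendTable:

   ```java
   import java.util.concurrent.ConcurrentHashMap;

   final class AppendGate {
     // Stand-in for the design's appendTable: key name -> appending client.
     private final ConcurrentHashMap<String, String> appendTable =
         new ConcurrentHashMap<>();

     // putIfAbsent claims the key atomically; a second client's append is
     // rejected until the first one finishes (the alternative raised here
     // would instead let the last append win).
     boolean tryBeginAppend(String key, String clientId) {
       return appendTable.putIfAbsent(key, clientId) == null;
     }

     void endAppend(String key) {
       appendTable.remove(key);
     }
   }
   ```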








[GitHub] [hadoop-ozone] prashantpogde commented on pull request #1507: HDDS-4307.Start Trash Emptier in Ozone Manager

2020-10-27 Thread GitBox


prashantpogde commented on pull request #1507:
URL: https://github.com/apache/hadoop-ozone/pull/1507#issuecomment-717415113


   +1 LGTM






[jira] [Resolved] (HDDS-3731) add storage space quota doc

2020-10-27 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-3731.
--
Fix Version/s: 1.1.0
   Resolution: Fixed

Thanks [~simonss] for the contribution. The PR has been merged. 

> add storage space quota doc
> ---
>
> Key: HDDS-3731
> URL: https://issues.apache.org/jira/browse/HDDS-3731
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>







[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #1516: HDDS-3731. [doc]add storage space quota document.

2020-10-27 Thread GitBox


xiaoyuyao merged pull request #1516:
URL: https://github.com/apache/hadoop-ozone/pull/1516


   






[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #1516: HDDS-3731. [doc]add storage space quota document.

2020-10-27 Thread GitBox


xiaoyuyao commented on pull request #1516:
URL: https://github.com/apache/hadoop-ozone/pull/1516#issuecomment-717361668


   Thanks @captainzmc  for the update. LGTM, +1. 






[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1489: HDDS-4308. Fix issue with quota update

2020-10-27 Thread GitBox


linyiqun commented on a change in pull request #1489:
URL: https://github.com/apache/hadoop-ozone/pull/1489#discussion_r512749950



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
##
@@ -597,27 +596,40 @@ protected boolean checkDirectoryAlreadyExists(String volumeName,
   }
 
   /**
-   * Return volume info for the specified volume. If the volume does not
-   * exist, returns {@code null}.
+   * Return volume info with updated usedBytes for the specified volume.
    * @param omMetadataManager
    * @param volume
+   * @param updateUsage
    * @return OmVolumeArgs
    * @throws IOException
    */
-  protected OmVolumeArgs getVolumeInfo(OMMetadataManager omMetadataManager,
-      String volume) {
-
-    OmVolumeArgs volumeArgs = null;
-
-    CacheValue<OmVolumeArgs> value =
-        omMetadataManager.getVolumeTable().getCacheValue(
-            new CacheKey<>(omMetadataManager.getVolumeKey(volume)));
-
-    if (value != null) {
-      volumeArgs = value.getCacheValue();
-    }
+  protected static synchronized OmVolumeArgs syncUpdateUsage(
+      OMMetadataManager omMetadataManager, String volume, long updateUsage) {
+    OmVolumeArgs volumeArgs = omMetadataManager.getVolumeTable().getCacheValue(
+        new CacheKey<>(omMetadataManager.getVolumeKey(volume)))
+        .getCacheValue();
+    volumeArgs.getUsedBytes().add(updateUsage);
+    return volumeArgs.copyObject();
+  }
 
-    return volumeArgs;
+  /**
+   * Return volume info with updated usedBytes for the specified volume, and
+   * check the volume's usedBytes quota.
+   * @param omMetadataManager
+   * @param volume
+   * @param updateUsage
+   * @return OmVolumeArgs
+   * @throws IOException
+   */
+  protected static synchronized OmVolumeArgs syncCheckAndUpdateUsage(

Review comment:
   I'd prefer not to add new methods here; it makes the current PR harder to understand.
   @captainzmc , can you make the minor adjustment to getVolumeInfo that I suggested in JIRA HDDS-4308? After that, I think we only need a few lines of change.
   








[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1489: HDDS-4308. Fix issue with quota update

2020-10-27 Thread GitBox


linyiqun commented on a change in pull request #1489:
URL: https://github.com/apache/hadoop-ozone/pull/1489#discussion_r512749950



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
##
@@ -597,27 +596,40 @@ protected boolean checkDirectoryAlreadyExists(String volumeName,
   }
 
  /**
-   * Return volume info for the specified volume. If the volume does not
-   * exist, returns {@code null}.
+   * Return volume info with updated usedBytes for the specified volume.
    * @param omMetadataManager
    * @param volume
+   * @param updateUsage
    * @return OmVolumeArgs
    * @throws IOException
    */
-  protected OmVolumeArgs getVolumeInfo(OMMetadataManager omMetadataManager,
-      String volume) {
-
-    OmVolumeArgs volumeArgs = null;
-
-    CacheValue<OmVolumeArgs> value =
-        omMetadataManager.getVolumeTable().getCacheValue(
-            new CacheKey<>(omMetadataManager.getVolumeKey(volume)));
-
-    if (value != null) {
-      volumeArgs = value.getCacheValue();
-    }
+  protected static synchronized OmVolumeArgs syncUpdateUsage(
+      OMMetadataManager omMetadataManager, String volume, long updateUsage) {
+    OmVolumeArgs volumeArgs = omMetadataManager.getVolumeTable().getCacheValue(
+        new CacheKey<>(omMetadataManager.getVolumeKey(volume)))
+        .getCacheValue();
+    volumeArgs.getUsedBytes().add(updateUsage);
+    return volumeArgs.copyObject();
+  }
 
-    return volumeArgs;
+  /**
+   * Return volume info with updated usedBytes for the specified volume, and
+   * check the volume's usedBytes quota.
+   * @param omMetadataManager
+   * @param volume
+   * @param updateUsage
+   * @return OmVolumeArgs
+   * @throws IOException
+   */
+  protected static synchronized OmVolumeArgs syncCheckAndUpdateUsage(

Review comment:
   I'd prefer not to add new methods here; it makes the current PR harder to understand.
   @captainzmc , can you make the minor adjustment to getVolumeInfo that I suggested in JIRA HDDS-4308?
   








[GitHub] [hadoop-ozone] bshashikant commented on pull request #1523: HDDS-4320. Let Ozone input streams implement CanUnbuffer

2020-10-27 Thread GitBox


bshashikant commented on pull request #1523:
URL: https://github.com/apache/hadoop-ozone/pull/1523#issuecomment-717257127


   Thanks @adoroszlai . I am still reviewing this; however, I have a couple of questions:
   1) In unbuffer, do we need to remove the corresponding blockInputStreams and chunkInputStreams which have already been read?
   2) While the connection to the datanode is closed, the connection to OM is still kept open; do we need to close this as well?
   3) As I see it, the last position is cached, so after unbuffer is called, it will go into the corresponding blockInputStream and chunkInputStream and start reading again. What if the pipeline is not valid anymore, i.e., the blocks have been replicated to a different set of datanodes? Do we need to handle this?
   
   As far as I remember, we only refresh the pipeline during initialisation of the blockInputStream.






[jira] [Resolved] (HDDS-4386) Each EndpointStateMachine uses its own thread pool to talk with SCM/Recon

2020-10-27 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan resolved HDDS-4386.
-
Resolution: Fixed

Merged the PR. Thanks for the fix [~glengeng].

> Each EndpointStateMachine uses its own thread pool to talk with SCM/Recon
> -
>
> Key: HDDS-4386
> URL: https://issues.apache.org/jira/browse/HDDS-4386
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Glen Geng
>Assignee: Glen Geng
>Priority: Blocker
>  Labels: pull-request-available
>
> In the Tencent production environment, after Recon had been running for a 
> while, we got warnings that all DNs had become stale/dead at the SCM side. 
> After killing Recon, all DNs became healthy again in a very short time.
>  
> *The root cause is:*
> 1) The EndpointStateMachine for SCM and the one for Recon share the thread 
> pool created by DatanodeStateMachine, which is a fixed-size thread pool:
> {code:java}
> executorService = Executors.newFixedThreadPool(
>     getEndPointTaskThreadPoolSize(),
>     new ThreadFactoryBuilder()
>         .setNameFormat("Datanode State Machine Task Thread - %d").build());
>
> private int getEndPointTaskThreadPoolSize() {
>   // TODO(runzhiwang): current only support one recon, if support multiple
>   //  recon in future reconServerCount should be the real number of recon
>   int reconServerCount = 1;
>   int totalServerCount = reconServerCount;
>   try {
>     totalServerCount += HddsUtils.getSCMAddresses(conf).size();
>   } catch (Exception e) {
>     LOG.error("Fail to get scm addresses", e);
>   }
>   return totalServerCount;
> }
> {code}
> Meanwhile, current Recon has some performance issues: after running for 
> hours, it became slower and slower, and crashed due to OOM. 
> 2) The communication between DN and Recon will soon exhaust all the threads 
> in DatanodeStateMachine.executorService, leaving no available threads for 
> the DN to talk to SCM. 
> 3) All DNs become stale/dead at the SCM side.
>  
> *The fix is quite straightforward:*
> Each EndpointStateMachine uses its own thread pool to talk with SCM/Recon, 
> so a slow Recon won't interfere with the communication between DN and SCM, 
> or vice versa.
>  
> *P.S.*
> The first edition of DatanodeStateMachine.executorService was a cached 
> thread pool; with a slow SCM/Recon, more and more threads would be created, 
> and the DN would eventually OOM due to tens of thousands of threads being 
> created.
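A minimal sketch of the per-endpoint layout the fix describes (illustrative; the actual change is in EndpointStateMachine):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class Endpoint {
  // One dedicated executor per endpoint: a slow or stuck Recon can only
  // block its own thread, never the threads talking to SCM.
  private final ExecutorService executor;

  Endpoint(String name) {
    executor = Executors.newSingleThreadExecutor(
        r -> new Thread(r, "EndpointStateMachine Task Thread - " + name));
  }

  void submit(Runnable heartbeatTask) {
    executor.submit(heartbeatTask);
  }
}
```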






[GitHub] [hadoop-ozone] avijayanhwx merged pull request #1518: HDDS-4386: Each EndpointStateMachine uses its own thread pool to talk with SCM/Recon

2020-10-27 Thread GitBox


avijayanhwx merged pull request #1518:
URL: https://github.com/apache/hadoop-ozone/pull/1518


   






[jira] [Commented] (HDDS-4355) Deleted container is marked as missing on recon UI

2020-10-27 Thread Aravindan Vijayan (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221397#comment-17221397
 ] 

Aravindan Vijayan commented on HDDS-4355:
-

[~Sammi] Yes, this is a known limitation in Recon currently.

> Deleted container is marked as missing on recon UI
> --
>
> Key: HDDS-4355
> URL: https://issues.apache.org/jira/browse/HDDS-4355
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Priority: Major
> Attachments: screenshot-1.png
>
>
> {noformat}
>  ~/ozoneenv/ozone]$ bin/ozone admin container info 104825
> Container id: 104825
> Pipeline id: 10955a24-2047-416f-85ac-94523cfe8d40
> Container State: DELETED
> Datanodes: []
> {noformat}






[jira] [Assigned] (HDDS-4355) Deleted container is marked as missing on recon UI

2020-10-27 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan reassigned HDDS-4355:
---

Assignee: Aravindan Vijayan

> Deleted container is marked as missing on recon UI
> --
>
> Key: HDDS-4355
> URL: https://issues.apache.org/jira/browse/HDDS-4355
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Sammi Chen
>Assignee: Aravindan Vijayan
>Priority: Major
> Attachments: screenshot-1.png
>
>
> {noformat}
>  ~/ozoneenv/ozone]$ bin/ozone admin container info 104825
> Container id: 104825
> Pipeline id: 10955a24-2047-416f-85ac-94523cfe8d40
> Container State: DELETED
> Datanodes: []
> {noformat}






[jira] [Commented] (HDDS-4385) It would be nice if there is a search function using container ID on Recon Missing Container page

2020-10-27 Thread Aravindan Vijayan (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221390#comment-17221390
 ] 

Aravindan Vijayan commented on HDDS-4385:
-

[~vivekratnavel] Assigning this to you since it is a UI request. 

> It would be nice if there is a search function using container ID on Recon 
> Missing Container page
> -
>
> Key: HDDS-4385
> URL: https://issues.apache.org/jira/browse/HDDS-4385
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Recon
>Reporter: Sammi Chen
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Attachments: image-2020-10-23-12-08-12-705.png
>
>
> In a production cluster, there can be many missing containers to investigate. 
> It would be nice to have a search filter using Container ID.






[jira] [Assigned] (HDDS-4385) It would be nice if there is a search function using container ID on Recon Missing Container page

2020-10-27 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan reassigned HDDS-4385:
---

Assignee: Vivek Ratnavel Subramanian

> It would be nice if there is a search function using container ID on Recon 
> Missing Container page
> -
>
> Key: HDDS-4385
> URL: https://issues.apache.org/jira/browse/HDDS-4385
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Recon
>Reporter: Sammi Chen
>Assignee: Vivek Ratnavel Subramanian
>Priority: Major
> Attachments: image-2020-10-23-12-08-12-705.png
>
>
> In a production cluster, there can be many missing containers to investigate. 
> It would be nice to have a search filter using Container ID.






[GitHub] [hadoop-ozone] sadanand48 commented on a change in pull request #1507: HDDS-4307.Start Trash Emptier in Ozone Manager

2020-10-27 Thread GitBox


sadanand48 commented on a change in pull request #1507:
URL: https://github.com/apache/hadoop-ozone/pull/1507#discussion_r512567525



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1228,17 +1238,57 @@ public void restart() throws IOException {
       // Allow OM to start as Http Server failure is not fatal.
       LOG.error("OM HttpServer failed to start.", ex);
     }
-
     omRpcServer.start();
+
     isOmRpcServerRunning = true;
 
+    startTrashEmptier(configuration);
+
     registerMXBean();
 
     startJVMPauseMonitor();
     setStartTime();
     omState = State.RUNNING;
   }
 
+
+  /**
+   * @param conf
+   * @throws IOException
+   * Starts a Trash Emptier thread that does an fs.trashRoots and performs
+   * checkpointing & deletion
+   */
+  private void startTrashEmptier(Configuration conf) throws IOException {
+    long trashInterval =
+        conf.getLong(FS_TRASH_INTERVAL_KEY, FS_TRASH_INTERVAL_DEFAULT);
+    if (trashInterval == 0) {
+      return;

Review comment:
   Done.

##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1228,17 +1238,57 @@ public void restart() throws IOException {
       // Allow OM to start as Http Server failure is not fatal.
       LOG.error("OM HttpServer failed to start.", ex);
     }
-
     omRpcServer.start();
+
     isOmRpcServerRunning = true;
 
+    startTrashEmptier(configuration);
+
     registerMXBean();
 
     startJVMPauseMonitor();
     setStartTime();
     omState = State.RUNNING;
   }
 
+
+  /**
+   * @param conf
+   * @throws IOException
+   * Starts a Trash Emptier thread that does an fs.trashRoots and performs

Review comment:
   Done








[jira] [Updated] (HDDS-3959) Avoid HddsProtos.PipelineID#toString

2020-10-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3959:
-
Labels: pull-request-available  (was: )

> Avoid HddsProtos.PipelineID#toString
> 
>
> Key: HDDS-3959
> URL: https://issues.apache.org/jira/browse/HDDS-3959
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> {{PipelineID}} was recently changed to have integer-based ID in addition to 
> the string ID.  Now log messages including {{PipelineID}} span multiple lines:
> {code:title=https://github.com/elek/ozone-build-results/blob/92d31c9b58065b37a371c71c97b346f99163318d/2020/07/11/1626/acceptance/docker-ozone-ozone-freon-scm.log#L218-L223}
> datanode_1  | 2020-07-11 13:07:00,540 [Command processor thread] INFO 
> commandhandler.CreatePipelineCommandHandler: Created Pipeline RATIS ONE #id: 
> "8101dcbf-1a28-4f20-863a-0616b4e4bc4b"
> datanode_1  | uuid128 {
> datanode_1  |   mostSigBits: -9150790254504423648
> datanode_1  |   leastSigBits: -8774694229384053685
> datanode_1  | }
> datanode_1  | .
> {code}






[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #1525: HDDS-3959. Avoid HddsProtos.PipelineID#toString

2020-10-27 Thread GitBox


adoroszlai opened a new pull request #1525:
URL: https://github.com/apache/hadoop-ozone/pull/1525


   ## What changes were proposed in this pull request?
   
   Change `CreatePipelineCommandHandler` and `ClosePipelineCommandHandler` to 
use the non-proto `PipelineID` object for logging.  This lets us avoid printing 
multi-line messages, mostly caused by `uuid128` structure.
   
   Also tweak logic a bit to avoid unnecessary back and forth proto conversion 
for getting the node list in pipeline creation.
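   
   A rough sketch of the logging change (the wrapper class here is a hypothetical stand-in for the non-proto PipelineID): logging a domain object with a one-line toString avoids the multi-line protobuf text dump.
   
   ```java
   import java.util.UUID;
   
   final class PipelineLogDemo {
     // Hypothetical stand-in for the non-proto PipelineID wrapper.
     record PipelineId(UUID id) {
       @Override
       public String toString() {
         return "PipelineID=" + id;
       }
     }
   
     public static void main(String[] args) {
       PipelineId id = new PipelineId(UUID.randomUUID());
       // One line, instead of the multi-line uuid128 protobuf rendering.
       System.out.println("Created Pipeline RATIS THREE " + id + ".");
     }
   }
   ```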
   
   https://issues.apache.org/jira/browse/HDDS-3959
   
   ## How was this patch tested?
   
   Ran `ozone` docker compose cluster, closed one pipeline manually, verified 
log messages:
   
   ```
   datanode_1  | 2020-10-27 09:58:13,533 [Command processor thread] INFO commandhandler.CreatePipelineCommandHandler: Created Pipeline RATIS THREE PipelineID=658bfc91-c07c-405f-94ab-7b7a87c9dd2c.
   ...
   datanode_2  | 2020-10-27 10:02:00,686 [Command processor thread] INFO commandhandler.ClosePipelineCommandHandler: Close Pipeline PipelineID=658bfc91-c07c-405f-94ab-7b7a87c9dd2c command on datanode fea867c0-2e70-45ec-8491-be7f952d91a4.
   ```






[GitHub] [hadoop-ozone] captainzmc commented on pull request #1489: HDDS-4308. Fix issue with quota update

2020-10-27 Thread GitBox


captainzmc commented on pull request #1489:
URL: https://github.com/apache/hadoop-ozone/pull/1489#issuecomment-717123932


   Hi @linyiqun, I modified the implementation based on the latest comments. Could you help review this PR?






[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1507: HDDS-4307.Start Trash Emptier in Ozone Manager

2020-10-27 Thread GitBox


rakeshadr commented on a change in pull request #1507:
URL: https://github.com/apache/hadoop-ozone/pull/1507#discussion_r512525048



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1228,17 +1238,57 @@ public void restart() throws IOException {
       // Allow OM to start as Http Server failure is not fatal.
       LOG.error("OM HttpServer failed to start.", ex);
     }
-
     omRpcServer.start();
+
     isOmRpcServerRunning = true;
 
+    startTrashEmptier(configuration);
+
     registerMXBean();
 
     startJVMPauseMonitor();
     setStartTime();
     omState = State.RUNNING;
   }
 
+
+  /**
+   * @param conf
+   * @throws IOException
+   * Starts a Trash Emptier thread that does an fs.trashRoots and performs

Review comment:
   Please follow the general javadoc guidelines:
   1) Begin with the function details.
   2) Provide `@param` details.
   3) Then `@return` info.
   4) End with `@throws` exception cases.
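   For illustration, a javadoc in that order for the method under review might read as follows (wording assumed; `@return` is omitted for a void method):

   ```java
   /**
    * Starts a trash emptier thread that scans fs.trashRoots and performs
    * checkpointing and deletion of expired trash entries.
    *
    * @param conf configuration used to read the trash interval
    * @throws IOException if the trash emptier cannot be started
    */
   private void startTrashEmptier(Configuration conf) throws IOException {
     // ...
   }
   ```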








[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1507: HDDS-4307.Start Trash Emptier in Ozone Manager

2020-10-27 Thread GitBox


rakeshadr commented on a change in pull request #1507:
URL: https://github.com/apache/hadoop-ozone/pull/1507#discussion_r512525048



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1228,17 +1238,57 @@ public void restart() throws IOException {
       // Allow OM to start as Http Server failure is not fatal.
       LOG.error("OM HttpServer failed to start.", ex);
     }
-
     omRpcServer.start();
+
     isOmRpcServerRunning = true;
 
+    startTrashEmptier(configuration);
+
     registerMXBean();
 
     startJVMPauseMonitor();
     setStartTime();
     omState = State.RUNNING;
   }
 
+
+  /**
+   * @param conf
+   * @throws IOException
+   * Starts a Trash Emptier thread that does an fs.trashRoots and performs

Review comment:
   Please follow the general javadoc guidelines:
   1) Begin with the function details.
   2) Provide `@param` details.
   3) Then `@throws` exception cases.
   4) End with `@return` info.

##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1228,17 +1238,57 @@ public void restart() throws IOException {
       // Allow OM to start as Http Server failure is not fatal.
       LOG.error("OM HttpServer failed to start.", ex);
     }
-
     omRpcServer.start();
+
     isOmRpcServerRunning = true;
 
+    startTrashEmptier(configuration);
+
     registerMXBean();
 
     startJVMPauseMonitor();
     setStartTime();
     omState = State.RUNNING;
   }
 
+
+  /**
+   * @param conf
+   * @throws IOException
+   * Starts a Trash Emptier thread that does an fs.trashRoots and performs
+   * checkpointing & deletion
+   */
+  private void startTrashEmptier(Configuration conf) throws IOException {
+    long trashInterval =
+        conf.getLong(FS_TRASH_INTERVAL_KEY, FS_TRASH_INTERVAL_DEFAULT);
+    if (trashInterval == 0) {
+      return;

Review comment:
   Please add a warn or even  a lighter info log message to make the 
behavior loud to the users as this will disable trash emptier.
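   A minimal sketch of the guard with the suggested log message (the log wording is an assumption):

   ```java
   long trashInterval =
       conf.getLong(FS_TRASH_INTERVAL_KEY, FS_TRASH_INTERVAL_DEFAULT);
   if (trashInterval == 0) {
     // Make the disabled state loud, as requested in the review.
     LOG.warn("Trash emptier is disabled: {} is set to 0.",
         FS_TRASH_INTERVAL_KEY);
     return;
   }
   // ... start the emptier thread ...
   ```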








[GitHub] [hadoop-ozone] captainzmc edited a comment on pull request #1489: HDDS-4308. Fix issue with quota update

2020-10-27 Thread GitBox


captainzmc edited a comment on pull request #1489:
URL: https://github.com/apache/hadoop-ozone/pull/1489#issuecomment-708182129


   Hi @bharatviswa504, could you help review this PR?






[jira] [Updated] (HDDS-4320) Let Ozone input streams implement CanUnbuffer

2020-10-27 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4320:
---
Status: Patch Available  (was: In Progress)

> Let Ozone input streams implement CanUnbuffer
> -
>
> Key: HDDS-4320
> URL: https://issues.apache.org/jira/browse/HDDS-4320
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> Implement Hadoop's {{CanUnbuffer}} interface in {{OzoneFSInputStream}} and 
> the underlying other input streams.  Note: {{CanUnbuffer}} is available in 
> 2.7 (HDFS-7694), but {{StreamCapabilities#UNBUFFER}} is new to 2.9.1 
> (HADOOP-15012).
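For reference, a simplified sketch of what implementing the two interfaces looks like on an input stream (the real patch touches OzoneFSInputStream and the underlying block/chunk streams):

```java
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.fs.CanUnbuffer;
import org.apache.hadoop.fs.StreamCapabilities;

class UnbufferableStream extends InputStream
    implements CanUnbuffer, StreamCapabilities {

  @Override
  public int read() throws IOException {
    return -1; // placeholder; a real stream reads from datanodes
  }

  @Override
  public void unbuffer() {
    // Drop buffers and datanode connections; the cached position lets
    // the next read re-establish them lazily.
  }

  @Override
  public boolean hasCapability(String capability) {
    return StreamCapabilities.UNBUFFER.equalsIgnoreCase(capability);
  }
}
```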






[jira] [Commented] (HDDS-4344) Block Deletion Performance Improvements

2020-10-27 Thread Lokesh Jain (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221206#comment-17221206
 ] 

Lokesh Jain commented on HDDS-4344:
---

HDDS-4297 gives better deletion speed when deleting a million keys.
Without the PR, SCM sends a total of 180 blocks for deletion per minute. This 
number does not increase even after increasing the configs. Datanode deletion 
speed was very slow; I monitored a datanode's disk space and it had deleted 
less than a GB even after 5 minutes.
With the PR, SCM sends 6000 blocks for deletion, and the datanode was deleting 
1 GB of data every minute.

I used the following configs.
ozone.block.deleting.container.limit.per.interval: "1000"
ozone.key.deleting.limit.per.task: "1"

> Block Deletion Performance Improvements
> ---
>
> Key: HDDS-4344
> URL: https://issues.apache.org/jira/browse/HDDS-4344
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Attachments: Block Deletion Performance.pdf
>
>
> In cluster deployments it was observed that block deletion can be slow. For 
> example if a user writes a million keys in Ozone, the time it takes for those 
> million keys to be deleted from datanodes can be high. The jira would cover 
> various improvements which can be made for better deletion speeds.






[jira] [Resolved] (HDDS-4388) Make writeStateMachineTimeout retry count proportional to node failure timeout

2020-10-27 Thread Lokesh Jain (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lokesh Jain resolved HDDS-4388.
---
Resolution: Fixed

> Make writeStateMachineTimeout retry count proportional to node failure timeout
> --
>
> Key: HDDS-4388
> URL: https://issues.apache.org/jira/browse/HDDS-4388
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Currently, in Ratis, the writeStateMachine call gets retried indefinitely in 
> the event of a timeout. In cases where disks are slow/overloaded or no chunk 
> writer threads are available for a period of 10s, the writeStateMachine call 
> times out after 10s. In such cases, the same write chunk keeps getting 
> retried, causing the same chunk of data to be overwritten. The idea here is 
> to abort the request once the node failure timeout is reached.
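A rough sketch of the proportionality the summary describes (names and config plumbing are assumptions, not actual Ratis keys): derive the retry budget from the node failure timeout so a write stops retrying once the node would be declared failed anyway.

```java
final class WriteRetryBudget {
  static int retries(long nodeFailureTimeoutMs, long writeTimeoutMs) {
    return (int) Math.max(1, nodeFailureTimeoutMs / writeTimeoutMs);
  }

  public static void main(String[] args) {
    // e.g. a 300s node failure timeout with 10s write timeouts
    // allows about 30 retries before the request is aborted.
    System.out.println(retries(300_000L, 10_000L));
  }
}
```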






[GitHub] [hadoop-ozone] lokeshj1703 commented on pull request #1519: HDDS-4388. Make writeStateMachineTimeout retry count proportional to node failure timeout

2020-10-27 Thread GitBox


lokeshj1703 commented on pull request #1519:
URL: https://github.com/apache/hadoop-ozone/pull/1519#issuecomment-717039024


   @bshashikant Thanks for the contribution! I have merged the PR to the master branch.






[GitHub] [hadoop-ozone] lokeshj1703 closed pull request #1519: HDDS-4388. Make writeStateMachineTimeout retry count proportional to node failure timeout

2020-10-27 Thread GitBox


lokeshj1703 closed pull request #1519:
URL: https://github.com/apache/hadoop-ozone/pull/1519


   






[GitHub] [hadoop-ozone] rakeshadr commented on pull request #1503: HDDS-4332: ListFileStatus - do lookup in directory and file tables

2020-10-27 Thread GitBox


rakeshadr commented on pull request #1503:
URL: https://github.com/apache/hadoop-ozone/pull/1503#issuecomment-717036194


   > Thanks for updating the PR, @rakeshadr . One further review comment below.
   > 
   > In addition, the current test change does not fully cover listStatusV1. For example, the current OzoneFS test change doesn't address the case of listStatus with another startKey specified.
   > I see that the test class TestKeyManagerImpl has good coverage for the listStatus call; can we add a test unit like that, or make a minor refactor based on it?
   
   Thanks @linyiqun . That's really a good point. I also looked at the KeyManagerImpl UT; it requires some refactoring effort to modify createDir and other logic. I will work on it and update you.






[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1503: HDDS-4332: ListFileStatus - do lookup in directory and file tables

2020-10-27 Thread GitBox


rakeshadr commented on a change in pull request #1503:
URL: https://github.com/apache/hadoop-ozone/pull/1503#discussion_r512452877



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -2205,6 +2276,318 @@ private void listStatusFindKeyInTableCache(
     return fileStatusList;
   }
 
+  public List<OzoneFileStatus> listStatusV1(OmKeyArgs args, boolean recursive,
+      String startKey, long numEntries, String clientAddress)
+      throws IOException {
+    Preconditions.checkNotNull(args, "Key args can not be null");
+
+    // unsorted OmKeyInfo list contains combined results from TableCache and DB.
+    List<OzoneFileStatus> fileStatusFinalList = new ArrayList<>();
+    LinkedHashSet<OzoneFileStatus> fileStatusList = new LinkedHashSet<>();
+    if (numEntries <= 0) {
+      return fileStatusFinalList;
+    }
+
+    String volumeName = args.getVolumeName();
+    String bucketName = args.getBucketName();
+    String keyName = args.getKeyName();
+    String seekFileInDB;
+    String seekDirInDB;
+    long prefixKeyInDB;
+    String prefixPath = keyName;
+
+    int countEntries = 0;
+
+    metadataManager.getLock().acquireReadLock(BUCKET_LOCK, volumeName,
+        bucketName);
+    try {
+      if (Strings.isNullOrEmpty(startKey)) {
+        OzoneFileStatus fileStatus = getFileStatus(args, clientAddress);
+        if (fileStatus.isFile()) {
+          return Collections.singletonList(fileStatus);
+        }
+
+        // Not required to search in DeletedTable because all the deleted
+        // keys will be marked directly in dirTable or in keyTable by
+        // breaking the pointer to its sub-dirs. So, there is no issue of
+        // inconsistency.
+
+        /*
+         * keyName is a directory.
+         * Say, "/a" is the dir name and its objectID is 1024, then seek
+         * will be doing with "1024/" to get all immediate descendants.
+         */
+        if (fileStatus.getKeyInfo() != null) {
+          prefixKeyInDB = fileStatus.getKeyInfo().getObjectID();
+        } else {
+          // list root directory.
+          String bucketKey = metadataManager.getBucketKey(volumeName,
+              bucketName);
+          OmBucketInfo omBucketInfo =
+              metadataManager.getBucketTable().get(bucketKey);
+          prefixKeyInDB = omBucketInfo.getObjectID();
+        }
+        seekFileInDB = metadataManager.getOzonePathKey(prefixKeyInDB, "");
+        seekDirInDB = metadataManager.getOzonePathKey(prefixKeyInDB, "");
+
+        // TODO: recursive flag=true will be handled in HDDS-4360 jira.
+        // Order of seek -> (1)Seek dirs in dirTable (2)Seek files in fileTable
+        // 1. Seek the given key in key table.
+        countEntries = getFilesFromDirectory(fileStatusList, seekFileInDB,
+            prefixPath, prefixKeyInDB, startKey, countEntries, numEntries);
+        // 2. Seek the given key in dir table.
+        getDirectories(recursive, startKey, numEntries, fileStatusList,
+            volumeName, bucketName, seekDirInDB, prefixKeyInDB,
+            prefixPath, countEntries);
+      } else {
+        /*
+         * startKey will be used in iterator seek and sets the beginning point
+         * for key traversal.
+         *
+         * key name will be used as parentID where the user has requested to
+         * list the keys from.
+         *
+         * When recursive flag=false, parentID won't change between two pages.
+         * For example: OM has a namespace like,
+         *     /a/1...1M files and /a/b/1...1M files.
+         *     /a/1...1M directories and /a/b/1...1M directories.
+         * Listing "/a", will always have the parentID as "a" irrespective of
+         * the startKey value.
+         */
+        // TODO: recursive flag=true will be handled in HDDS-4360 jira.
+        OzoneFileStatus fileStatusInfo = getOzoneFileStatusV1(volumeName,
+            bucketName, startKey, false, null, true);
+
+        if (fileStatusInfo != null) {
+          prefixKeyInDB = fileStatusInfo.getKeyInfo().getParentObjectID();
+          if (fileStatusInfo.isDirectory()) {
+            seekDirInDB = metadataManager.getOzonePathKey(prefixKeyInDB,
+                fileStatusInfo.getKeyInfo().getFileName());
+
+            // Order of seek -> (1) Seek dirs in dirTable. In OM, always the
+            // order of search is, first seek into fileTable and then dirTable.
+            // So, its not required to search again into the fileTable.
+
+            // Seek the given key in dirTable.
+            getDirectories(recursive, startKey, numEntries,
+                fileStatusList, volumeName, bucketName, seekDirInDB,
+                prefixKeyInDB, prefixPath, countEntries);
+
+          } else {
+            seekFileInDB = metadataManager.getOzonePathKey(prefixKeyInDB,
Review comment:
   As we have to perform seeks in two tables to finish the listing, I'm maintaining a seek order:
   1) Seek all the files from

[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1503: HDDS-4332: ListFileStatus - do lookup in directory and file tables

2020-10-27 Thread GitBox


rakeshadr commented on a change in pull request #1503:
URL: https://github.com/apache/hadoop-ozone/pull/1503#discussion_r512452877



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -2205,6 +2276,318 @@ private void listStatusFindKeyInTableCache(
 return fileStatusList;
   }
 
+  public List listStatusV1(OmKeyArgs args, boolean recursive,
+  String startKey, long numEntries, String clientAddress)
+  throws IOException {
+Preconditions.checkNotNull(args, "Key args can not be null");
+
+// unsorted OMKeyInfo list contains combined results from TableCache and DB.
+List<OzoneFileStatus> fileStatusFinalList = new ArrayList<>();
+LinkedHashSet<OzoneFileStatus> fileStatusList = new LinkedHashSet<>();
+if (numEntries <= 0) {
+  return fileStatusFinalList;
+}
+
+String volumeName = args.getVolumeName();
+String bucketName = args.getBucketName();
+String keyName = args.getKeyName();
+String seekFileInDB;
+String seekDirInDB;
+long prefixKeyInDB;
+String prefixPath = keyName;
+
+int countEntries = 0;
+
+metadataManager.getLock().acquireReadLock(BUCKET_LOCK, volumeName,
+bucketName);
+try {
+  if (Strings.isNullOrEmpty(startKey)) {
+OzoneFileStatus fileStatus = getFileStatus(args, clientAddress);
+if (fileStatus.isFile()) {
+  return Collections.singletonList(fileStatus);
+}
+
+// Not required to search in DeletedTable, because deleted keys are
+// marked directly in dirTable or in keyTable by breaking the pointer
+// to their sub-dirs. So, there is no inconsistency issue.
+
+/*
+ * keyName is a directory.
+ * Say "/a" is the dir name and its objectID is 1024; the seek will
+ * then be done with "1024/" to get all immediate descendants.
+ */
+if (fileStatus.getKeyInfo() != null) {
+  prefixKeyInDB = fileStatus.getKeyInfo().getObjectID();
+} else {
+  // list root directory.
+  String bucketKey = metadataManager.getBucketKey(volumeName,
+  bucketName);
+  OmBucketInfo omBucketInfo =
+  metadataManager.getBucketTable().get(bucketKey);
+  prefixKeyInDB = omBucketInfo.getObjectID();
+}
+seekFileInDB = metadataManager.getOzonePathKey(prefixKeyInDB, "");
+seekDirInDB = metadataManager.getOzonePathKey(prefixKeyInDB, "");
+
+// TODO: recursive flag=true will be handled in HDDS-4360 jira.
+// Order of seek -> (1)Seek files in fileTable (2)Seek dirs in dirTable
+// 1. Seek the given key in the file table.
+countEntries = getFilesFromDirectory(fileStatusList, seekFileInDB,
+prefixPath, prefixKeyInDB, startKey, countEntries, numEntries);
+// 2. Seek the given key in dir table.
+getDirectories(recursive, startKey, numEntries, fileStatusList,
+volumeName, bucketName, seekDirInDB, prefixKeyInDB,
+prefixPath, countEntries);
+  } else {
+/*
+ * startKey will be used in the iterator seek and sets the beginning
+ * point for key traversal.
+ *
+ * The key name identifies the parentID from which the user has
+ * requested to list the keys.
+ *
+ * When recursive flag=false, parentID won't change between two pages.
+ * For example, OM has a namespace like:
+ * /a/1...1M files and /a/b/1...1M files.
+ * /a/1...1M directories and /a/b/1...1M directories.
+ * Listing "/a" will always have the parentID "a" irrespective of
+ * the startKey value.
+ */
+// TODO: recursive flag=true will be handled in HDDS-4360 jira.
+OzoneFileStatus fileStatusInfo = getOzoneFileStatusV1(volumeName,
+bucketName, startKey, false, null, true);
+
+if (fileStatusInfo != null) {
+  prefixKeyInDB = fileStatusInfo.getKeyInfo().getParentObjectID();
+  if (fileStatusInfo.isDirectory()) {
+seekDirInDB = metadataManager.getOzonePathKey(prefixKeyInDB,
+fileStatusInfo.getKeyInfo().getFileName());
+
+// Order of seek -> (1) Seek dirs in dirTable. In OM, the order of
+// search is always fileTable first and then dirTable, so it's not
+// required to search the fileTable again.
+
+// Seek the given key in dirTable.
+getDirectories(recursive, startKey, numEntries,
+fileStatusList, volumeName, bucketName, seekDirInDB,
+prefixKeyInDB, prefixPath, countEntries);
+
+  } else {
+seekFileInDB = metadataManager.getOzonePathKey(prefixKeyInDB,

Review comment:
   As we have to perform seek in two tables to finish the listing, I'm 
maintaining a seek order 
   1) Finish listing all the files 
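
To make that two-phase order concrete (finish the files from one table,
then spend whatever remains of the numEntries budget on directories),
here is a minimal Java sketch over in-memory TreeMaps; the table names
and helper method are illustrative assumptions, not the actual
fileTable/dirTable API.

import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class TwoPhaseListingSketch {

  // Scan one table starting at seekKey, collecting at most 'remaining'
  // keys that still share the parent prefix.
  static int scan(TreeMap<String, String> table, String prefix,
      String seekKey, int remaining, List<String> out) {
    int added = 0;
    for (String key : table.tailMap(seekKey).keySet()) {
      if (added >= remaining || !key.startsWith(prefix)) {
        break;
      }
      out.add(key);
      added++;
    }
    return added;
  }

  public static void main(String[] args) {
    TreeMap<String, String> fileTable = new TreeMap<>();
    TreeMap<String, String> dirTable = new TreeMap<>();
    fileTable.put("1024/f1", "file");
    fileTable.put("1024/f2", "file");
    dirTable.put("1024/d1", "dir");
    dirTable.put("1024/d2", "dir");

    String prefix = 1024L + "/";
    int numEntries = 3;
    List<String> page = new ArrayList<>();
    // Phase 1: files first; phase 2: dirs get whatever budget remains.
    int filesFound = scan(fileTable, prefix, prefix, numEntries, page);
    scan(dirTable, prefix, prefix, numEntries - filesFound, page);
    System.out.println(page); // [1024/f1, 1024/f2, 1024/d1]
  }
}

Because the budget is shared across both scans, a page boundary can fall
inside either table; resuming from startKey then only needs to know
which table the last returned entry came from, which is why a fixed
files-then-directories order matters.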

[jira] [Updated] (HDDS-4258) Set GDPR to a Security submenu in EN and CN document.

2020-10-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4258:
-
Labels: newbie pull-request-available  (was: newbie)

> Set GDPR to a Security submenu in EN and CN document.
> -
>
> Key: HDDS-4258
> URL: https://issues.apache.org/jira/browse/HDDS-4258
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Zheng Huang-Mu
>Assignee: François Risch
>Priority: Minor
>  Labels: newbie, pull-request-available
>
> Based on [~xyao]'s comment on HDDS-4156.
> https://github.com/apache/hadoop-ozone/pull/1368#issuecomment-694532324
> Set GDPR to a Security submenu in EN and CN document.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] frischHWC opened a new pull request #1524: HDDS-4258.Set GDPR to a Security submenu in EN and CN document

2020-10-27 Thread GitBox


frischHWC opened a new pull request #1524:
URL: https://github.com/apache/hadoop-ozone/pull/1524


   ## What changes were proposed in this pull request?
   
   Setting GDPR to security submenu for EN & CN pages.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4258
   
   ## How was this patch tested?
   
   Locally with 'hugo serve', see attached screenshot:
   
   https://user-images.githubusercontent.com/47358141/97264554-7aead900-1825-11eb-9a6e-01e940783511.png
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-4258) Set GDPR to a Security submenu in EN and CN document.

2020-10-27 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDDS-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

François Risch reassigned HDDS-4258:


Assignee: François Risch

> Set GDPR to a Security submenu in EN and CN document.
> -
>
> Key: HDDS-4258
> URL: https://issues.apache.org/jira/browse/HDDS-4258
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Zheng Huang-Mu
>Assignee: François Risch
>Priority: Minor
>  Labels: newbie
>
> Based on [~xyao]'s comment on HDDS-4156.
> https://github.com/apache/hadoop-ozone/pull/1368#issuecomment-694532324
> Set GDPR to a Security submenu in EN and CN document.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4191) Add failover proxy for SCM container client

2020-10-27 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng updated HDDS-4191:
---
Status: Patch Available  (was: In Progress)

https://github.com/apache/hadoop-ozone/pull/1514

> Add failover proxy for SCM container client
> ---
>
> Key: HDDS-4191
> URL: https://issues.apache.org/jira/browse/HDDS-4191
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Li Cheng
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>
> Take advantage of the failover proxy from HDDS-3188 and add a failover
> proxy for the SCM container client as well.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4393) Fix CI and test failures after force push on 2020/10/26

2020-10-27 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17221168#comment-17221168
 ] 

Li Cheng commented on HDDS-4393:


[https://github.com/apache/hadoop-ozone/pull/1522] shows the current feature 
branch HDDS-2823 has issues in CI.

> Fix CI and test failures after force push on 2020/10/26
> ---
>
> Key: HDDS-4393
> URL: https://issues.apache.org/jira/browse/HDDS-4393
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM HA
>Reporter: Li Cheng
>Priority: Blocker
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org