[GitHub] [hadoop-ozone] captainzmc commented on pull request #1431: HDDS-4254. Bucket space: add usedBytes and update it when create and delete key.

2020-09-22 Thread GitBox


captainzmc commented on pull request #1431:
URL: https://github.com/apache/hadoop-ozone/pull/1431#issuecomment-697133328


   Thanks @ChenSammi for the review. The review issues have been fixed.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] captainzmc commented on pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-22 Thread GitBox


captainzmc commented on pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#issuecomment-697133030


   Thanks @ChenSammi for the review. The review issues have been fixed.






[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1272: HDDS-2660. Create insight point for datanode container protocol

2020-09-22 Thread GitBox


adoroszlai commented on pull request #1272:
URL: https://github.com/apache/hadoop-ozone/pull/1272#issuecomment-696662025


   > > Thanks @elek for updating the patch. Interestingly `log -v` stopped working, even if I execute `log` first (as mentioned previously).
   > 
   > I double-checked and it worked for me. When you use the leader, the messages are displayed immediately; when you use a follower, the messages will appear only after the commit...
   
   It seems to depend on the content. Plain text files work fine, but it stops working on the first binary, e.g. `ozone freon ockg -n1 -t1`. I guess it's caused by control chars in the random data.
   
   I think we should avoid logging chunk content. `ContainerCommandRequestMessage` implements related logic to clear data.
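   As a rough illustration of the failure mode described above (control characters in random chunk data corrupting terminal output) and of why scrubbing payloads before logging helps, here is a hedged sketch. `LogSanitizer` and its method are hypothetical names for this example, not the actual Ozone insight-point code:

```java
public class LogSanitizer {

    // Replace control characters (except \n and \t) with '.' so random
    // binary payloads cannot corrupt the terminal that tails the log.
    static String sanitize(byte[] data) {
        StringBuilder sb = new StringBuilder(data.length);
        for (byte b : data) {
            char c = (char) (b & 0xFF);
            if (c == '\n' || c == '\t' || (c >= 0x20 && c < 0x7F)) {
                sb.append(c);
            } else {
                sb.append('.');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // BEL, NUL and ESC bytes are mapped to dots.
        byte[] binary = {'o', 'k', 0x07, 0x00, 0x1B, '!'};
        System.out.println(sanitize(binary)); // prints "ok...!"
    }
}
```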






[GitHub] [hadoop-ozone] fapifta commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


fapifta commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r492702681



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -994,6 +1005,45 @@ public static boolean omInit(OzoneConfiguration conf) throws IOException,
     }
   }
 
+  public boolean applyAllPendingTransactions()
+      throws InterruptedException, IOException {
+
+    if (!isRatisEnabled) {
+      LOG.info("Ratis not enabled. Nothing to do.");
+      return true;
+    }
+
+    String purgeConfig = omRatisServer.getServer()
+        .getProperties().get(PURGE_UPTO_SNAPSHOT_INDEX_KEY);
+    if (!Boolean.parseBoolean(purgeConfig)) {
+      throw new IllegalStateException("Cannot prepare OM for Upgrade since " +
+          "raft.server.log.purge.upto.snapshot.index is not true");
+    }
+
+    waitForAllTxnsApplied(omRatisServer.getOmStateMachine(),
+        omRatisServer.getRaftGroup(),
+        (RaftServerProxy) omRatisServer.getServer(),
+        TimeUnit.MINUTES.toSeconds(5));

Review comment:
   Are you sure we want to add a configuration for this one? I would argue we do not need one more configurable thing, at least for this.
   prepareForUpgrade is a special startup mode of OM, during which it applies all transactions that are in the raft log.
   If 5 minutes is not enough to apply all transactions in the raft log, the process will shut down and let the user know that some of the transactions were not applied, so the user can start the process again as a last resort to apply further transactions. If we assume that at least a few transactions are applied each time, the user will sooner or later get to a state where everything is applied; and if none of the transactions can be applied within 5 minutes, that sounds like a serious problem anyway, independent of the upgrade.
   
   Also, I would expect that in all cases the unapplied transactions can be applied within 5 minutes, as the number of such transactions should not be too large as far as I know; if it is, the system is not healthy anyway.
   
   Can you please elaborate on why it would be useful to make this configurable?
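   The wait-then-shut-down behavior discussed above can be sketched roughly as follows. This is a hedged illustration with hypothetical index suppliers, not the actual `waitForAllTxnsApplied` implementation:

```java
import java.util.function.LongSupplier;

public class TxnWaiter {

    // Poll until the last applied index catches up with the last committed
    // index, or give up when the timeout elapses.
    static boolean waitForAllApplied(LongSupplier lastCommittedIndex,
                                     LongSupplier lastAppliedIndex,
                                     long timeoutMillis, long pollMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (lastAppliedIndex.getAsLong() < lastCommittedIndex.getAsLong()) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // caller shuts down and asks the user to retry
            }
            try {
                Thread.sleep(pollMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Already caught up: returns true without waiting.
        System.out.println(waitForAllApplied(() -> 7L, () -> 7L, 5000, 100));
    }
}
```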

##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1179,15 +1229,22 @@ public void start() throws IOException {
       // Allow OM to start as Http Server failure is not fatal.
       LOG.error("OM HttpServer failed to start.", ex);
     }
-    omRpcServer.start();
-    isOmRpcServerRunning = true;
 
+    if (!prepareForUpgrade) {
+      omRpcServer.start();
+      isOmRpcServerRunning = true;
+    }

Review comment:
   As we discussed with @avijayanhwx during internal design discussions: after OM is started in prepareForUpgrade mode, it tears down once the last transaction is applied from the raft log and a snapshot is taken in raft. With that, the OM reaches a state in which all transactions are applied and none need to be applied after the next startup.
   
   This ensures that all transactions are applied by the code that was in place when the transactions arrived, so we can ensure consistency of the state across different OM instances.
   
   After this is finished and the OM tears down from prepareForUpgrade, a normal startup of OM is needed to bring it up again, and at that time the RPC server will start properly.

##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerStarter.java
##
@@ -98,6 +98,28 @@ public void initOm()
     }
   }
 
+
+  /**
+   * This function implements a sub-command to allow the OM to be
+   * "prepared for upgrade".
+   */
+  @CommandLine.Command(name = "--prepareForUpgrade",
+      aliases = {"--prepareForDowngrade", "--flushTransactions"},

Review comment:
   This command should be issued when the OM is already stopped, before the upgrade of the software bits. It is a command that starts the OM code up in a special way; as I understand it, it starts up only the current local OM.








[GitHub] [hadoop-ozone] fapifta commented on pull request #1425: HDDS-2981 Add unit tests for Proto [de]serialization

2020-09-22 Thread GitBox


fapifta commented on pull request #1425:
URL: https://github.com/apache/hadoop-ozone/pull/1425#issuecomment-696680680


   Hello @llemec,
   
   thank you for your comments and continued work on this PR. Indeed, I can accept this approach based on your argument.
   
   +1 (non-binding) to merge the changes. Let's wait for a committer to review it once more and commit it if there are no further comments ;)






[GitHub] [hadoop-ozone] linyiqun commented on pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

2020-09-22 Thread GitBox


linyiqun commented on pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#issuecomment-696588382


   > > Hi @rakeshadr, some initial review comments below.
   > > In addition, one question from me: this is the first task of the dir cache, and there will be further subtasks. But this part of the work depends on [HDDS-2939](https://issues.apache.org/jira/browse/HDDS-2939) being completed. Does that mean the [HDDS-4222](https://issues.apache.org/jira/browse/HDDS-4222) tasks cannot be merged immediately and will be blocked for a long time? How do we plan to coordinate the [HDDS-2939](https://issues.apache.org/jira/browse/HDDS-2939) and [HDDS-4222](https://issues.apache.org/jira/browse/HDDS-4222) feature development work?
   > 
   > Good comment. Yes, the cache should eventually be integrated (case by case) to get the full benefit. But [HDDS-4222](https://issues.apache.org/jira/browse/HDDS-4222) can be upstreamed separately and is not blocked, I feel.
   > 
   > The cache can be integrated once [HDDS-2949](https://issues.apache.org/jira/browse/HDDS-2949) is finished. During dir creation, it checks [DIR_EXISTS](https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java#L185), and the cache can be integrated into this call path.
   > 
   > Later, while implementing file lookups, delete, rename, etc., we will integrate it into those areas.
   
   Got it, sounds good to me.
   






[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


avijayanhwx commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r492877283



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -994,6 +1005,45 @@ public static boolean omInit(OzoneConfiguration conf) throws IOException,
     }
   }
 
+  public boolean applyAllPendingTransactions()
+      throws InterruptedException, IOException {
+
+    if (!isRatisEnabled) {
+      LOG.info("Ratis not enabled. Nothing to do.");
+      return true;
+    }
+
+    String purgeConfig = omRatisServer.getServer()
+        .getProperties().get(PURGE_UPTO_SNAPSHOT_INDEX_KEY);
+    if (!Boolean.parseBoolean(purgeConfig)) {
+      throw new IllegalStateException("Cannot prepare OM for Upgrade since " +
+          "raft.server.log.purge.upto.snapshot.index is not true");
+    }
+
+    waitForAllTxnsApplied(omRatisServer.getOmStateMachine(),
+        omRatisServer.getRaftGroup(),
+        (RaftServerProxy) omRatisServer.getServer(),
+        TimeUnit.MINUTES.toSeconds(5));

Review comment:
   Thanks, will change it to a variable.

##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerStarter.java
##
@@ -98,6 +98,28 @@ public void initOm()
     }
   }
 
+
+  /**
+   * This function implements a sub-command to allow the OM to be
+   * "prepared for upgrade".
+   */
+  @CommandLine.Command(name = "--prepareForUpgrade",
+      aliases = {"--prepareForDowngrade", "--flushTransactions"},

Review comment:
   +1 to @fapifta's reply.

##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -994,6 +1005,45 @@ public static boolean omInit(OzoneConfiguration conf) throws IOException,
     }
   }
 
+  public boolean applyAllPendingTransactions()
+      throws InterruptedException, IOException {
+
+    if (!isRatisEnabled) {
+      LOG.info("Ratis not enabled. Nothing to do.");
+      return true;
+    }
+
+    String purgeConfig = omRatisServer.getServer()
+        .getProperties().get(PURGE_UPTO_SNAPSHOT_INDEX_KEY);
+    if (!Boolean.parseBoolean(purgeConfig)) {
+      throw new IllegalStateException("Cannot prepare OM for Upgrade since " +
+          "raft.server.log.purge.upto.snapshot.index is not true");
+    }

Review comment:
   Maybe not. In the Ratis StateMachineUpdater, the takeSnapshot() method uses this config to purge logs immediately after taking a state machine snapshot. Hence, I thought it was good to have a check in place to make sure no one changes the config from within. I am OK with removing it for now and handling snapshot + log purge in HDDS-4268 as a follow-up patch.








[GitHub] [hadoop-ozone] cxorm commented on pull request #1175: HDDS-2766. security/SecuringDataNodes.md

2020-09-22 Thread GitBox


cxorm commented on pull request #1175:
URL: https://github.com/apache/hadoop-ozone/pull/1175#issuecomment-696701388


   Thanks @iamabug for the work.
   Overall it looks great to me (including the fixes).
   I will commit it if all CI checks pass.






[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

2020-09-22 Thread GitBox


linyiqun commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492562181



##
File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
##
@@ -2521,4 +2521,32 @@
       filesystem semantics.
     </description>
   </property>
+
+  <property>
+    <name>ozone.om.metadata.cache.directory</name>

Review comment:
   This name also needs to be updated; the current unit test is broken by this:
   > TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareConfigurationClassAgainstXml:493 class org.apache.hadoop.ozone.OzoneConfigKeys class org.apache.hadoop.hdds.scm.ScmConfigKeys class org.apache.hadoop.ozone.om.OMConfigKeys class org.apache.hadoop.hdds.HddsConfigKeys class org.apache.hadoop.ozone.recon.ReconServerConfigKeys class org.apache.hadoop.ozone.s3.S3GatewayConfigKeys class org.apache.hadoop.hdds.scm.server.SCMHTTPServerConfig has 1 variables missing in ozone-default.xml Entries: ozone.om.metadata.cache.directory.policy expected:<0> but was:<1>
   [ERROR] TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass
##
File path: hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java
##
@@ -0,0 +1,276 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.File;
+import java.io.IOException;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Testing OMMetadata cache policy class.
+ */
+public class TestOMMetadataCache {
+
+  private OzoneConfiguration conf;
+  private OMMetadataManager omMetadataManager;
+
+  /**
+   * Set a timeout for each test.
+   */
+  @Rule
+  public Timeout timeout = new Timeout(10);
+
+  @Before
+  public void setup() {
+    //initialize config
+    conf = new OzoneConfiguration();
+  }
+
+  @Test
+  public void testVerifyDirCachePolicies() {
+    //1. Verify disabling cache
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+        CachePolicy.DIR_NOCACHE.getPolicy());
+    CacheStore dirCacheStore = OMMetadataCacheFactory.getCache(
+        OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+        OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Policy mismatches!", CachePolicy.DIR_NOCACHE,
+        dirCacheStore.getCachePolicy());
+
+    //2. Invalid cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "InvalidCachePolicy");
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+        OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+        OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Expected NullCache for an invalid CachePolicy",
+        CachePolicy.DIR_NOCACHE, dirCacheStore.getCachePolicy());
+
+    //3. Directory LRU cache policy
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+        OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT);
+    dirCacheStore = OMMetadataCacheFactory.getCache(
+        OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+        OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+    Assert.assertEquals("Cache Type mismatches!", CachePolicy.DIR_LRU,
+        dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testLRUCacheDirectoryPolicy() throws IOException {
+    conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+        CachePolicy.DIR_LRU.getPolicy());
+    conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+    conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+    File testDir = GenericTestUtils.getRandomizedTestDir();
+    conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+        testDir.toString());
+
+    omMetadataManager = new 

[GitHub] [hadoop-ozone] rakeshadr commented on pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

2020-09-22 Thread GitBox


rakeshadr commented on pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#issuecomment-696600100


   Thanks a lot @linyiqun for the useful review comments!






[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


avijayanhwx commented on pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#issuecomment-696839280










[GitHub] [hadoop-ozone] github-actions[bot] closed pull request #1110: HDDS-3843. Throw the specific exception other than NPE.

2020-09-22 Thread GitBox


github-actions[bot] closed pull request #1110:
URL: https://github.com/apache/hadoop-ozone/pull/1110


   






[jira] [Commented] (HDDS-541) Ozone Quota support.

2020-09-22 Thread Rui Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200538#comment-17200538
 ] 

Rui Wang commented on HDDS-541:
---

Thanks [~micahzhao]!

I will start from the NameSpace jira.

> Ozone Quota support.
> 
>
> Key: HDDS-541
> URL: https://issues.apache.org/jira/browse/HDDS-541
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Namit Maheshwari
>Assignee: mingchao zhao
>Priority: Major
>  Labels: Triaged
>  Time Spent: 96h
>  Remaining Estimate: 120h
>
> Create a volume with just 1 MB as quota
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh volume create 
> --quota=1MB --user=root /hive
> 2018-09-23 02:10:11,283 [main] INFO - Creating Volume: hive, with root as 
> owner and quota set to 1048576 bytes.
> {code}
> Now create a bucket and put a big key greater than 1MB in the bucket
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh bucket create 
> /hive/bucket1
> 2018-09-23 02:10:38,003 [main] INFO - Creating Bucket: hive/bucket1, with 
> Versioning false and Storage Type set to DISK
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ls -l 
> ../../ozone-0.3.0-SNAPSHOT.tar.gz
> -rw-r--r-- 1 root root 165903437 Sep 21 13:16 
> ../../ozone-0.3.0-SNAPSHOT.tar.gz
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
> /hive/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
> volume/bucket/key name required in putKey
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
> /hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key info 
> /hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz
> {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Sun, 23 Sep 2018 02:13:02 GMT",
> "modifiedOn" : "Sun, 23 Sep 2018 02:13:08 GMT",
> "size" : 165903437,
> "keyName" : "ozone-0.3.0-SNAPSHOT.tar.gz",
> "keyLocations" : [ {
> "containerID" : 2,
> "localID" : 100772661343420416,
> "length" : 134217728,
> "offset" : 0
> }, {
> "containerID" : 3,
> "localID" : 100772661661007873,
> "length" : 31685709,
> "offset" : 0
> } ]
> }{code}
> It was able to put a 165 MB file on a volume with just 1MB quota.
>  
> Currently Ozone doesn't support quota, so I think this should be a new
> feature.
> The design document can be found in the attachment ([design google
> docs|https://docs.google.com/document/d/1ohbGn5N7FN6OD15xMShHH2SrtZRYx0-zUf9vjatn_OM/edit]).
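A minimal sketch of the enforcement this issue asks for: reject an allocation that would exceed the volume quota. The class and method names are hypothetical, not the actual Ozone implementation:

```java
public class QuotaCheck {

    // Throw if the requested allocation would push usage past the quota.
    // A non-positive quota means "no quota set" and is never enforced.
    static void checkQuota(long quotaBytes, long usedBytes,
                           long requestedBytes) {
        if (quotaBytes > 0 && usedBytes + requestedBytes > quotaBytes) {
            throw new IllegalArgumentException("Quota exceeded: used="
                + usedBytes + " requested=" + requestedBytes
                + " quota=" + quotaBytes);
        }
    }

    public static void main(String[] args) {
        checkQuota(1048576, 0, 1024);          // fits in 1 MB: no exception
        try {
            checkQuota(1048576, 0, 165903437); // the 165 MB key from the repro
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```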



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop-ozone] avijayanhwx edited a comment on pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


avijayanhwx edited a comment on pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#issuecomment-696839280










[GitHub] [hadoop-ozone] github-actions[bot] commented on pull request #1110: HDDS-3843. Throw the specific exception other than NPE.

2020-09-22 Thread GitBox


github-actions[bot] commented on pull request #1110:
URL: https://github.com/apache/hadoop-ozone/pull/1110#issuecomment-696453432


   Thank you very much for the patch. I am closing this PR __temporarily__ as 
there was no activity recently and it is waiting for response from its author.
   
   It doesn't mean that this PR is not important or ignored: feel free to 
reopen the PR at any time.
   
   It only means that attention of committers is not required. We prefer to 
keep the review queue clean. This ensures PRs in need of review are more 
visible, which results in faster feedback for all PRs.
   
   If you need ANY help to finish this PR, please [contact the community](https://github.com/apache/hadoop-ozone#contact) on the mailing list or the slack channel.






[GitHub] [hadoop-ozone] fapifta commented on pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


fapifta commented on pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#issuecomment-696692461


   Hi @avijayanhwx,
   
   the initial changes look good to me; thank you for sharing the WIP state. It seems to be a good direction so far.
   One minor comment from me as well: are we sure we want to add --prepareForDowngrade as an option alias? It suggests that we might support downgrade, and I fear that might cause some misunderstandings.
   I am also unsure whether we can use the same functionality to get back to an older version in all scenarios.






[GitHub] [hadoop-ozone] rakeshadr edited a comment on pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

2020-09-22 Thread GitBox


rakeshadr edited a comment on pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#issuecomment-696600100


   Thanks a lot @linyiqun for the useful review comments!
   Pushed another commit addressing the comments.






[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


linyiqun commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r492798607



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -994,6 +1005,45 @@ public static boolean omInit(OzoneConfiguration conf) throws IOException,
     }
   }
 
+  public boolean applyAllPendingTransactions()
+      throws InterruptedException, IOException {
+
+    if (!isRatisEnabled) {
+      LOG.info("Ratis not enabled. Nothing to do.");
+      return true;
+    }
+
+    String purgeConfig = omRatisServer.getServer()
+        .getProperties().get(PURGE_UPTO_SNAPSHOT_INDEX_KEY);
+    if (!Boolean.parseBoolean(purgeConfig)) {
+      throw new IllegalStateException("Cannot prepare OM for Upgrade since " +
+          "raft.server.log.purge.upto.snapshot.index is not true");
+    }
+
+    waitForAllTxnsApplied(omRatisServer.getOmStateMachine(),
+        omRatisServer.getRaftGroup(),
+        (RaftServerProxy) omRatisServer.getServer(),
+        TimeUnit.MINUTES.toSeconds(5));

Review comment:
   > Also, I would expect that in all cases the unapplied transactions can be applied within 5 minutes, as the number of such transactions should not be too large as far as I know; if it is, the system is not healthy anyway.
   
   I'm okay with keeping 5 minutes as the current wait threshold; only one minor comment: can we define a variable for this time value rather than hard-coding it in the method?
   

##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1179,15 +1229,22 @@ public void start() throws IOException {
       // Allow OM to start as Http Server failure is not fatal.
       LOG.error("OM HttpServer failed to start.", ex);
     }
-    omRpcServer.start();
-    isOmRpcServerRunning = true;
 
+    if (!prepareForUpgrade) {
+      omRpcServer.start();
+      isOmRpcServerRunning = true;
+    }

Review comment:
   Okay, enabling the RPC server via the next startup makes sense to me.








[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


hanishakoneru commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r492845659



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -994,6 +1005,45 @@ public static boolean omInit(OzoneConfiguration conf) throws IOException,
     }
   }
 
+  public boolean applyAllPendingTransactions()
+      throws InterruptedException, IOException {
+
+    if (!isRatisEnabled) {
+      LOG.info("Ratis not enabled. Nothing to do.");
+      return true;
+    }
+
+    String purgeConfig = omRatisServer.getServer()
+        .getProperties().get(PURGE_UPTO_SNAPSHOT_INDEX_KEY);
+    if (!Boolean.parseBoolean(purgeConfig)) {
+      throw new IllegalStateException("Cannot prepare OM for Upgrade since " +
+          "raft.server.log.purge.upto.snapshot.index is not true");
+    }

Review comment:
   OMRatisServer always sets this property to true. It is not configurable. 
Is this check still needed?








[GitHub] [hadoop-ozone] bshashikant commented on a change in pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-22 Thread GitBox


bshashikant commented on a change in pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#discussion_r488543079



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
##
@@ -98,8 +105,65 @@ private boolean exceedPipelineNumberLimit(ReplicationFactor 
factor) {
 return false;
   }
 
+  private Map<DatanodeDetails, Integer> getSuggestedLeaderCount(
+  List<DatanodeDetails> dns) {
+Map<DatanodeDetails, Integer> suggestedLeaderCount = new HashMap<>();
+for (DatanodeDetails dn : dns) {
+  suggestedLeaderCount.put(dn, 0);
+
+  Set<PipelineID> pipelineIDSet = getNodeManager().getPipelines(dn);
+  for (PipelineID pipelineID : pipelineIDSet) {
+try {
+  Pipeline pipeline = 
getPipelineStateManager().getPipeline(pipelineID);
+  if (!pipeline.isClosed()
+  && dn.getUuid().equals(pipeline.getSuggestedLeaderId())) {
+suggestedLeaderCount.put(dn, suggestedLeaderCount.get(dn) + 1);
+  }
+} catch (PipelineNotFoundException e) {
+  LOG.debug("Pipeline not found in pipeline state manager : {}",
+  pipelineID, e);
+}
+  }
+}
+
+return suggestedLeaderCount;
+  }
+
+  private DatanodeDetails getSuggestedLeader(List<DatanodeDetails> dns) {
+Map<DatanodeDetails, Integer> suggestedLeaderCount =

Review comment:
   I think suggested-leader selection can be made a policy-driven change.
   1) The default policy can be minimum leader count.
   2) It can also be driven by factors like memory/resource availability on a 
datanode.
   3) It can be determined by the topology as well: the node nearest to the 
client can be made the leader.
   
   It's better to make it a pluggable model like this.
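   The pluggable model suggested here could look roughly like the sketch below. The names (`LeaderChoosePolicy`, `MinLeaderCountPolicy`) are illustrative assumptions, not the actual Ozone API:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Hypothetical pluggable leader-selection policy; other implementations
// could weigh resource availability or network topology instead.
interface LeaderChoosePolicy {
  // Pick a leader from the candidate datanodes given current leader counts.
  String chooseLeader(List<String> datanodes, Map<String, Integer> leaderCount);
}

// Default policy (option 1 above): the node with the minimum current
// leader count wins.
class MinLeaderCountPolicy implements LeaderChoosePolicy {
  @Override
  public String chooseLeader(List<String> datanodes,
                             Map<String, Integer> leaderCount) {
    return datanodes.stream()
        .min(Comparator.comparingInt(
            (String dn) -> leaderCount.getOrDefault(dn, 0)))
        .orElseThrow(() -> new IllegalArgumentException("no candidate datanodes"));
  }
}
```

   Swapping the policy then only requires providing another `LeaderChoosePolicy` implementation, which is the point of making it pluggable.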

##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestRatisPipelineCreateAndDestroy.java
##
@@ -92,11 +98,128 @@ public void 
testAutomaticPipelineCreationOnPipelineDestroy()
 waitForPipelines(2);
   }
 
+  private void checkLeaderBalance(int dnNum, int leaderNumOfEachDn)
+  throws Exception {
+List<Pipeline> pipelines = pipelineManager
+.getPipelines(HddsProtos.ReplicationType.RATIS,
+HddsProtos.ReplicationFactor.THREE, Pipeline.PipelineState.OPEN);
+
+for (Pipeline pipeline : pipelines) {
+  LambdaTestUtils.await(3, 500, () ->
+  pipeline.getLeaderId().equals(pipeline.getSuggestedLeaderId()));
+}
+
+Map<UUID, Integer> leaderCount = new HashMap<>();
+for (Pipeline pipeline : pipelines) {
+  UUID leader = pipeline.getLeaderId();
+  if (!leaderCount.containsKey(leader)) {
+leaderCount.put(leader, 0);
+  }
+
+  leaderCount.put(leader, leaderCount.get(leader) + 1);
+}
+
+Assert.assertTrue(leaderCount.size() == dnNum);
+for (UUID key : leaderCount.keySet()) {
+  Assert.assertTrue(leaderCount.get(key) == leaderNumOfEachDn);
+}
+  }
+
+  @Test(timeout = 36)
+  public void testRestoreSuggestedLeader() throws Exception {
+conf.setBoolean(OZONE_SCM_PIPELINE_AUTO_CREATE_FACTOR_ONE, false);
+int dnNum = 3;
+int dnPipelineLimit = 3;
+int leaderNumOfEachDn = dnPipelineLimit / dnNum;
+int pipelineNum = 3;
+
+init(dnNum, dnPipelineLimit);
+// make sure two pipelines are created
+waitForPipelines(pipelineNum);
+// No Factor ONE pipeline is auto created.
+Assert.assertEquals(0, pipelineManager.getPipelines(
+HddsProtos.ReplicationType.RATIS,
+HddsProtos.ReplicationFactor.ONE).size());
+
+// pipelineNum pipelines in 3 datanodes,
+// each datanode has leaderNumOfEachDn leaders after balance
+checkLeaderBalance(dnNum, leaderNumOfEachDn);
+List<Pipeline> pipelinesBeforeRestart =
+cluster.getStorageContainerManager().getPipelineManager()
+.getPipelines();
+
+cluster.restartStorageContainerManager(true);
+
+checkLeaderBalance(dnNum, leaderNumOfEachDn);
+List<Pipeline> pipelinesAfterRestart =
+cluster.getStorageContainerManager().getPipelineManager()
+.getPipelines();
+
+Assert.assertEquals(
+pipelinesBeforeRestart.size(), pipelinesAfterRestart.size());
+
+for (Pipeline p : pipelinesBeforeRestart) {
+  boolean equal = false;
+  for (Pipeline q : pipelinesAfterRestart) {
+if (p.getId().equals(q.getId())
+&& p.getSuggestedLeaderId().equals(q.getSuggestedLeaderId())) {
+  equal = true;
+}
+  }
+
+  Assert.assertTrue(equal);
+}
+  }
+
+  @Test(timeout = 36)
+  public void testPipelineLeaderBalance() throws Exception {
+conf.setBoolean(OZONE_SCM_PIPELINE_AUTO_CREATE_FACTOR_ONE, false);
+int dnNum = 3;
+int dnPipelineLimit = 3;
+int leaderNumOfEachDn = dnPipelineLimit / dnNum;
+int pipelineNum = 3;
+
+init(dnNum, dnPipelineLimit);
+// make sure two pipelines are created
+waitForPipelines(pipelineNum);
+// No Factor ONE pipeline is auto created.
+Assert.assertEquals(0, pipelineManager.getPipelines(
+   

[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

2020-09-22 Thread GitBox


rakeshadr commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492480005



##
File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OMConfigKeys.java
##
@@ -246,4 +246,15 @@ private OMConfigKeys() {
   "ozone.om.enable.filesystem.paths";
   public static final boolean OZONE_OM_ENABLE_FILESYSTEM_PATHS_DEFAULT =
   false;
+
+  public static final String OZONE_OM_CACHE_DIR_POLICY =
+  "ozone.om.metadata.cache.directory";
+  public static final String OZONE_OM_CACHE_DIR_DEFAULT = "DIR_LRU";

Review comment:
   OK, will update.

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/CacheEntity.java
##
@@ -0,0 +1,48 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+/**
+ * Entities that are to be cached.
+ */
+public enum CacheEntity {
+
+  DIR("directory");
+  // This is extendable and one can add more entities for
+  // caching based on demand. For example, define new entities like FILE
+  // ("file"), LISTING("listing") cache etc.
+
+  CacheEntity(String entity) {
+this.entityName = entity;
+  }
+
+  private String entityName;
+
+  public String getName() {
+return entityName;
+  }
+
+  public static CacheEntity getEntity(String entityStr) {
+for (CacheEntity entity : CacheEntity.values()) {
+  if (entityStr.equalsIgnoreCase(entity.getName())) {

Review comment:
   OK, will update.

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/cache/CacheStore.java
##
@@ -0,0 +1,72 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+/**
+ * Cache used for traversing path components from parent node to the leaf node.
+ * 
+ * Basically, it's a write-through cache and ensures that there are no stale
+ * entries in the cache.
+ * 
+ * TODO: can define specific 'CacheLoader' to handle the OM restart and
+ *   define cache loading strategies. It can be NullLoader, LazyLoader,
+ *   LevelLoader etc.
+ *
+ * @param 
+ * @param 
+ */
+public interface CacheStore
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import com.google.common.cache.Cache;
+import com.google.common.cache.CacheBuilder;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Directory LRUCache: cache directories based on LRU (Least Recently Used)
+ * cache eviction strategy, wherein if the cache size has reached the maximum
+ * allocated capacity, the least recently used objects in the cache will be
+ * evicted.
+ * 
+ * TODO: Add cache metrics - occupancy, hit, miss, evictions etc
+ */
+public class DirectoryLRUCacheStore implements CacheStore {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DirectoryLRUCacheStore.class);
+
+  // Initialises Guava based LRU cache.
+  private Cache mCache;
+
+  /**
+   * @param 

[jira] [Assigned] (HDDS-792) Use md5 hash as ETag for Ozone S3 objects

2020-09-22 Thread Rui Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Wang reassigned HDDS-792:
-

Assignee: Rui Wang

> Use md5 hash as ETag for Ozone S3 objects
> -
>
> Key: HDDS-792
> URL: https://issues.apache.org/jira/browse/HDDS-792
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Marton Elek
>Assignee: Rui Wang
>Priority: Major
>
> AWS S3 uses md5 hash of the files as ETag. 
> Not a strict requirement, but s3 tests (https://github.com/gaul/s3-tests/) 
> can not been executed without that.
> It requires to support custom key/value annotations on key objects.
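
For a single-part upload, S3's ETag is simply the lowercase hex MD5 of the object bytes (multipart uploads use a different scheme). A minimal sketch of computing such an ETag; `ETagUtil` is a hypothetical helper name, not Ozone code:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: compute an S3-style ETag for a single-part object as the
// lowercase hex MD5 digest of its bytes.
final class ETagUtil {
  static String md5ETag(byte[] data) {
    try {
      byte[] digest = MessageDigest.getInstance("MD5").digest(data);
      StringBuilder sb = new StringBuilder(digest.length * 2);
      for (byte b : digest) {
        sb.append(String.format("%02x", b));
      }
      return sb.toString();
    } catch (NoSuchAlgorithmException e) {
      // Every conforming JDK ships MD5, so this should never happen.
      throw new IllegalStateException(e);
    }
  }
}
```

The digest would then be stored as a key/value annotation on the key object, per the last point above.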



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-792) Use md5 hash as ETag for Ozone S3 objects

2020-09-22 Thread Rui Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200509#comment-17200509
 ] 

Rui Wang commented on HDDS-792:
---

I can try to look at this one.

> Use md5 hash as ETag for Ozone S3 objects
> -
>
> Key: HDDS-792
> URL: https://issues.apache.org/jira/browse/HDDS-792
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: S3
>Reporter: Marton Elek
>Priority: Major
>
> AWS S3 uses md5 hash of the files as ETag. 
> Not a strict requirement, but s3 tests (https://github.com/gaul/s3-tests/) 
> can not been executed without that.
> It requires to support custom key/value annotations on key objects.






[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r492461120



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/BucketArgs.java
##
@@ -66,17 +69,23 @@
* @param bucketEncryptionKey bucket encryption key name
* @param sourceVolume
* @param sourceBucket
+   * @param quotaInBytes Volume quota in bytes.

Review comment:
   Volume? 

##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/BucketArgs.java
##
@@ -66,17 +69,23 @@
* @param bucketEncryptionKey bucket encryption key name
* @param sourceVolume
* @param sourceBucket
+   * @param quotaInBytes Volume quota in bytes.

Review comment:
   The parameter description is incorrect.

##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneVolume.java
##
@@ -282,15 +285,48 @@ public boolean setOwner(String userName) throws 
IOException {
 return result;
   }
 
+  /**
+   * Clean the space quota of the volume.
+   *
+   * @throws IOException
+   */
+  public void clearSpaceQuota() throws IOException {
+OzoneVolume ozoneVolume = proxy.getVolumeDetails(name);
+Iterator<? extends OzoneBucket> bucketIter = ozoneVolume.listBuckets(null);
+while (bucketIter.hasNext()) {
+  OzoneBucket nextBucket = (OzoneBucket) bucketIter.next();
+  if(nextBucket.getQuotaInBytes() != QUOTA_RESET) {

Review comment:
   A bucket can have a quota while its volume doesn't; this is a common case.

##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
##
@@ -655,4 +655,15 @@ OzoneOutputStream createFile(String volumeName, String 
bucketName,
* Getter for OzoneManagerClient.
*/
   OzoneManagerProtocol getOzoneManagerClient();
+
+  /**
+   * Set Bucket Quota.
+   * @param volumeName Name of the Volume.
+   * @param bucketName Name of the Bucket.
+   * @param quotaInBytes The maximum size this volume can be used.

Review comment:
   The @param order doesn't match the real parameter order.

##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/ClearSpaceQuotaOptions.java
##
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.shell;
+
+import picocli.CommandLine;
+
+/**
+ * Common options for 'clrquota' comands.

Review comment:
   typo  comands 

##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/SetQuotaHandler.java
##
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.shell.volume;
+
+import org.apache.hadoop.hdds.client.OzoneQuota;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.shell.OzoneAddress;
+import org.apache.hadoop.ozone.shell.SetSpaceQuotaOptions;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+import java.io.IOException;
+
+/**
+ * Executes update volume calls.

Review comment:
   statement is stale. 

##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/SetQuotaHandler.java
##
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software 

[GitHub] [hadoop-ozone] runzhiwang commented on a change in pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-22 Thread GitBox


runzhiwang commented on a change in pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#discussion_r492416925



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
##
@@ -98,8 +105,65 @@ private boolean exceedPipelineNumberLimit(ReplicationFactor 
factor) {
 return false;
   }
 
+  private Map<DatanodeDetails, Integer> getSuggestedLeaderCount(
+  List<DatanodeDetails> dns) {
+Map<DatanodeDetails, Integer> suggestedLeaderCount = new HashMap<>();
+for (DatanodeDetails dn : dns) {
+  suggestedLeaderCount.put(dn, 0);
+
+  Set<PipelineID> pipelineIDSet = getNodeManager().getPipelines(dn);
+  for (PipelineID pipelineID : pipelineIDSet) {
+try {
+  Pipeline pipeline = 
getPipelineStateManager().getPipeline(pipelineID);
+  if (!pipeline.isClosed()
+  && dn.getUuid().equals(pipeline.getSuggestedLeaderId())) {

Review comment:
   > Plan weight for each of node as a leader when the cluster has 
thousands of nodes can be difficult.
   
   If each node has similar hardware (CPU, memory), we can just plan weights as 
we do now, assigning each node the same number of leaders; it is cheap and 
reasonable.
   
   I think the only case we need to consider is when some nodes' hardware is 
obviously weaker than other nodes'. The weaker datanodes should engage in 
fewer pipelines than the stronger datanodes, but Ozone does not support this 
now.
   If we can support this, the maximum number of leaders on each datanode 
should be less than or equal to ((1/3) * the number of pipelines it engages 
in), and we select as leader the datanode with the lowest value of (leader 
count / number of pipelines it engages in) among the 3 datanodes; this is 
also cheap.
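
   The ratio-based rule described here can be sketched as follows. This is a standalone illustration under the assumptions above, not the Ozone implementation; `LeaderRatioSelector` is a hypothetical name:

```java
import java.util.List;
import java.util.Map;

// Among the candidate datanodes, pick the one with the lowest
// (leader count / pipelines engaged) ratio, so a weaker node that
// engages in fewer pipelines also leads fewer of them.
final class LeaderRatioSelector {
  static String pickLeader(List<String> candidates,
                           Map<String, Integer> leaderCount,
                           Map<String, Integer> pipelineCount) {
    String best = null;
    double bestRatio = Double.MAX_VALUE;
    for (String dn : candidates) {
      int pipelines = pipelineCount.getOrDefault(dn, 0);
      if (pipelines == 0) {
        return dn; // a node not yet in any pipeline is the cheapest leader
      }
      double ratio = (double) leaderCount.getOrDefault(dn, 0) / pipelines;
      if (ratio < bestRatio) {
        bestRatio = ratio;
        best = dn;
      }
    }
    return best;
  }
}
```

   This keeps selection O(number of candidates), which matches the "also cheap" claim above.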

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
##
@@ -98,8 +105,65 @@ private boolean exceedPipelineNumberLimit(ReplicationFactor 
factor) {
 return false;
   }
 
+  private Map<DatanodeDetails, Integer> getSuggestedLeaderCount(
+  List<DatanodeDetails> dns) {
+Map<DatanodeDetails, Integer> suggestedLeaderCount = new HashMap<>();
+for (DatanodeDetails dn : dns) {
+  suggestedLeaderCount.put(dn, 0);
+
+  Set<PipelineID> pipelineIDSet = getNodeManager().getPipelines(dn);
+  for (PipelineID pipelineID : pipelineIDSet) {
+try {
+  Pipeline pipeline = 
getPipelineStateManager().getPipeline(pipelineID);
+  if (!pipeline.isClosed()
+  && dn.getUuid().equals(pipeline.getSuggestedLeaderId())) {

Review comment:
   > Plan weight for each of node as a leader when the cluster has 
thousands of nodes can be difficult.
   
   If each node has similar hardware (CPU, memory), we can just plan weights as 
we do now, assigning each node the same number of leaders; it is cheap and 
reasonable.
   
   I think the only case we need to consider is when some nodes' hardware is 
obviously weaker than other nodes'. The weaker datanodes should engage in 
fewer pipelines than the stronger datanodes, but Ozone does not support this 
now. If we can support this, the maximum number of leaders on each datanode 
should be less than or equal to ((1/3) * the number of pipelines it engages 
in), and we select as leader the datanode with the lowest value of (leader 
count / number of pipelines it engages in) among the 3 datanodes; this is 
also cheap.


##
File path: 

[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1431: HDDS-4254. Bucket space: add usedBytes and update it when create and delete key.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1431:
URL: https://github.com/apache/hadoop-ozone/pull/1431#discussion_r492614416



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMAllocateBlockResponse.java
##
@@ -72,5 +76,10 @@ public void addToDBBatch(OMMetadataManager omMetadataManager,
 omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
 omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
 omVolumeArgs);
+// update bucket usedBytes.
+omMetadataManager.getBucketTable().putWithBatch(batchOperation,
+omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
+omBucketInfo.getBucketName()),
+omBucketInfo);

Review comment:
   the last two lines can be merged into one line.

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCommitResponse.java
##
@@ -77,6 +80,11 @@ public void addToDBBatch(OMMetadataManager omMetadataManager,
 omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
 omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
 omVolumeArgs);
+// update bucket usedBytes.
+omMetadataManager.getBucketTable().putWithBatch(batchOperation,
+omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
+omBucketInfo.getBucketName()),
+omBucketInfo);

Review comment:
   same as above.

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCreateResponse.java
##
@@ -99,6 +103,11 @@ protected void addToDBBatch(OMMetadataManager 
omMetadataManager,
 omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
 omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
 omVolumeArgs);
+// update bucket usedBytes.
+omMetadataManager.getBucketTable().putWithBatch(batchOperation,
+omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
+omBucketInfo.getBucketName()),
+omBucketInfo);

Review comment:
   the same.

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCreateResponse.java
##
@@ -99,6 +103,11 @@ protected void addToDBBatch(OMMetadataManager 
omMetadataManager,
 omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
 omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
 omVolumeArgs);
+// update bucket usedBytes.
+omMetadataManager.getBucketTable().putWithBatch(batchOperation,
+omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
+omBucketInfo.getBucketName()),
+omBucketInfo);

Review comment:
   the same. please double check the rest files. 








[jira] [Commented] (HDDS-3208) Implement Ratis Snapshots on SCM

2020-09-22 Thread Rui Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200503#comment-17200503
 ] 

Rui Wang commented on HDDS-3208:


OzoneManager.installSnapshotFromLeader
Download part: OzoneManager.getDBCheckpointFromLeader

> Implement Ratis Snapshots on SCM
> 
>
> Key: HDDS-3208
> URL: https://issues.apache.org/jira/browse/HDDS-3208
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Li Cheng
>Assignee: Rui Wang
>Priority: Major
>







[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1434: HDDS-3727. Volume space: check quotaUsageInBytes when write key.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1434:
URL: https://github.com/apache/hadoop-ozone/pull/1434#discussion_r492526790



##
File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
##
@@ -229,7 +229,9 @@ public String toString() {
 
 NOT_SUPPORTED_OPERATION,
 
-PARTIAL_RENAME
+PARTIAL_RENAME,
+
+QUOTA_CHECK_ERROR

Review comment:
QUOTA_CHECK_ERROR  -> QUOTA_EXCEED

##
File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
##
@@ -229,7 +229,9 @@ public String toString() {
 
 NOT_SUPPORTED_OPERATION,
 
-PARTIAL_RENAME
+PARTIAL_RENAME,
+
+QUOTA_CHECK_ERROR

Review comment:
QUOTA_CHECK_ERROR  -> QUOTA_EXCEEDED

##
File path: hadoop-ozone/interface-client/src/main/proto/OmClientProtocol.proto
##
@@ -314,6 +314,8 @@ enum Status {
 
 PARTIAL_RENAME = 65;
 
+QUOTA_CHECK_ERROR = 66;

Review comment:
   same as above.

##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
##
@@ -707,6 +707,62 @@ public void testPutKey() throws IOException {
 }
   }
 
+  @Test
+  public void testCheckUsedBytesQuota() throws IOException {

Review comment:
   Can we add used bytes check in each test case?








[jira] [Comment Edited] (HDDS-541) Ozone Quota support.

2020-09-22 Thread mingchao zhao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200489#comment-17200489
 ] 

mingchao zhao edited comment on HDDS-541 at 9/23/20, 3:39 AM:
--

Hi [~amaliujia], I would really welcome you to work on this JIRA together. The 
PR of space quota for volume/bucket is now nearly complete. You can start with 
NameSpace (count quota).

In addition, you can break up the NameSpace JIRA into smaller JIRAs, just as I 
split up the space quota work.
There are a few other Jiras that have been listed, which you can also do.


was (Author: micahzhao):
Hi [~amaliujia], I would really welcome you to work on this JIRA together. The 
PR corresponding to space quota for volume/bucket is now nearly complete. You 
can start with NameSpace (count quota).

In addition, you can break up the NameSpace JIRA into smaller Jiras. As I split 
space Quota

> Ozone Quota support.
> 
>
> Key: HDDS-541
> URL: https://issues.apache.org/jira/browse/HDDS-541
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Namit Maheshwari
>Assignee: mingchao zhao
>Priority: Major
>  Labels: Triaged
>  Time Spent: 96h
>  Remaining Estimate: 120h
>
> Create a volume with just 1 MB as quota
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh volume create 
> --quota=1MB --user=root /hive
> 2018-09-23 02:10:11,283 [main] INFO - Creating Volume: hive, with root as 
> owner and quota set to 1048576 bytes.
> {code}
> Now create a bucket and put a big key greater than 1MB in the bucket
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh bucket create 
> /hive/bucket1
> 2018-09-23 02:10:38,003 [main] INFO - Creating Bucket: hive/bucket1, with 
> Versioning false and Storage Type set to DISK
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ls -l 
> ../../ozone-0.3.0-SNAPSHOT.tar.gz
> -rw-r--r-- 1 root root 165903437 Sep 21 13:16 
> ../../ozone-0.3.0-SNAPSHOT.tar.gz
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
> /hive/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
> volume/bucket/key name required in putKey
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
> /hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key info 
> /hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz
> {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Sun, 23 Sep 2018 02:13:02 GMT",
> "modifiedOn" : "Sun, 23 Sep 2018 02:13:08 GMT",
> "size" : 165903437,
> "keyName" : "ozone-0.3.0-SNAPSHOT.tar.gz",
> "keyLocations" : [ {
> "containerID" : 2,
> "localID" : 100772661343420416,
> "length" : 134217728,
> "offset" : 0
> }, {
> "containerID" : 3,
> "localID" : 100772661661007873,
> "length" : 31685709,
> "offset" : 0
> } ]
> }{code}
> It was able to put a 165 MB file on a volume with just 1MB quota.
>  
> Currently Ozone doesn't support quota, so I think this should be a new
> feature.
>  The design document can be referred to the attachment. ([design google 
> docs|https://docs.google.com/document/d/1ohbGn5N7FN6OD15xMShHH2SrtZRYx0-zUf9vjatn_OM/edit])
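
The missing enforcement shown in the transcript amounts to a simple check at write time. A minimal sketch under assumed names (`QuotaCheck` is hypothetical; the real OM request path discussed in HDDS-3727 differs):

```java
// Reject a write if it would push the volume's used bytes past its quota.
// QUOTA_RESET (-1) follows the "no quota set" convention used in the
// space-quota PRs in this thread.
final class QuotaCheck {
  static final long QUOTA_RESET = -1;

  static boolean allowsWrite(long quotaInBytes, long usedBytes, long newKeySize) {
    if (quotaInBytes == QUOTA_RESET) {
      return true; // unlimited: no quota configured on this volume
    }
    return usedBytes + newKeySize <= quotaInBytes;
  }
}
```

With the 1 MB quota (1048576 bytes) from the transcript, the 165903437-byte put would be rejected instead of succeeding.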






[jira] [Assigned] (HDDS-3729) Support Namespace Level quota for Volume

2020-09-22 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao reassigned HDDS-3729:
---

Assignee: Rui Wang  (was: mingchao zhao)

> Support Namespace Level quota for Volume
> 
>
> Key: HDDS-3729
> URL: https://issues.apache.org/jira/browse/HDDS-3729
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: Rui Wang
>Priority: Major
>







[jira] [Assigned] (HDDS-3729) Support Namespace Level quota for Volume

2020-09-22 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao reassigned HDDS-3729:
---

Assignee: Rui Wang

> Support Namespace Level quota for Volume
> 
>
> Key: HDDS-3729
> URL: https://issues.apache.org/jira/browse/HDDS-3729
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: Rui Wang
>Priority: Major
>







[jira] [Assigned] (HDDS-3729) Support Namespace Level quota for Volume

2020-09-22 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao reassigned HDDS-3729:
---

Assignee: mingchao zhao

> Support Namespace Level quota for Volume
> 
>
> Key: HDDS-3729
> URL: https://issues.apache.org/jira/browse/HDDS-3729
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>







[jira] [Assigned] (HDDS-3729) Support Namespace Level quota for Volume

2020-09-22 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao reassigned HDDS-3729:
---

Assignee: (was: Rui Wang)

> Support Namespace Level quota for Volume
> 
>
> Key: HDDS-3729
> URL: https://issues.apache.org/jira/browse/HDDS-3729
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Priority: Major
>







[jira] [Assigned] (HDDS-3728) Support Storage space Level Quota for bucket

2020-09-22 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao reassigned HDDS-3728:
---

Assignee: mingchao zhao  (was: Rui Wang)

> Support Storage space Level Quota for bucket
> 
>
> Key: HDDS-3728
> URL: https://issues.apache.org/jira/browse/HDDS-3728
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>







[jira] [Comment Edited] (HDDS-541) Ozone Quota support.

2020-09-22 Thread mingchao zhao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200489#comment-17200489
 ] 

mingchao zhao edited comment on HDDS-541 at 9/23/20, 3:29 AM:
--

Hi [~amaliujia], I would really welcome you to work on this JIRA together. The 
PR corresponding to space quota for volume/bucket is now nearly complete. You 
can start with NameSpace (count quota).

In addition, you can break the NameSpace JIRA up into smaller JIRAs, as I did 
when splitting up the space quota work.


was (Author: micahzhao):
Hi [~amaliujia], I would really welcome you to work on this JIRA together. The 
PR corresponding to space quota for volume/bucket is now nearly complete. You 
can start with NameSpace (count quota).

> Ozone Quota support.
> 
>
> Key: HDDS-541
> URL: https://issues.apache.org/jira/browse/HDDS-541
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Namit Maheshwari
>Assignee: mingchao zhao
>Priority: Major
>  Labels: Triaged
>  Time Spent: 96h
>  Remaining Estimate: 120h
>
> Create a volume with just 1 MB as quota
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh volume create 
> --quota=1MB --user=root /hive
> 2018-09-23 02:10:11,283 [main] INFO - Creating Volume: hive, with root as 
> owner and quota set to 1048576 bytes.
> {code}
> Now create a bucket and put a big key greater than 1MB in the bucket
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh bucket create 
> /hive/bucket1
> 2018-09-23 02:10:38,003 [main] INFO - Creating Bucket: hive/bucket1, with 
> Versioning false and Storage Type set to DISK
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ls -l 
> ../../ozone-0.3.0-SNAPSHOT.tar.gz
> -rw-r--r-- 1 root root 165903437 Sep 21 13:16 
> ../../ozone-0.3.0-SNAPSHOT.tar.gz
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
> /hive/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
> volume/bucket/key name required in putKey
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
> /hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key info 
> /hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz
> {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Sun, 23 Sep 2018 02:13:02 GMT",
> "modifiedOn" : "Sun, 23 Sep 2018 02:13:08 GMT",
> "size" : 165903437,
> "keyName" : "ozone-0.3.0-SNAPSHOT.tar.gz",
> "keyLocations" : [ {
> "containerID" : 2,
> "localID" : 100772661343420416,
> "length" : 134217728,
> "offset" : 0
> }, {
> "containerID" : 3,
> "localID" : 100772661661007873,
> "length" : 31685709,
> "offset" : 0
> } ]
> }{code}
> It was able to put a 165 MB file on a volume with just 1MB quota.
>  
> Currently Ozone does not support quotas, so I think this should be a new 
> feature.
>  The design document can be found in the attachment ([design google 
> docs|https://docs.google.com/document/d/1ohbGn5N7FN6OD15xMShHH2SrtZRYx0-zUf9vjatn_OM/edit])
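What the repro above demonstrates is a missing admission check: nothing compares a write against the volume's quota at allocation time. Below is a deliberately simplified, hypothetical sketch of such a check (the class, method, and constant names are invented for illustration; the actual PRs track usedBytes on volumes/buckets in OM and enforce the limit there):

```java
public class SpaceQuotaSketch {
  // Sentinel meaning "no quota configured" (invented for this sketch).
  static final long QUOTA_UNSET = -1;

  // Hypothetical space-quota admission check: reject an allocation when
  // current usage plus the new key's size would exceed the quota.
  static void checkSpaceQuota(long usedBytes, long quotaBytes,
      long requestedBytes) {
    if (quotaBytes != QUOTA_UNSET
        && usedBytes + requestedBytes > quotaBytes) {
      throw new IllegalStateException("Space quota exceeded: used="
          + usedBytes + " requested=" + requestedBytes
          + " quota=" + quotaBytes);
    }
  }
}
```

With a 1 MB (1048576-byte) quota, the 165903437-byte put from the repro would be rejected up front instead of silently succeeding.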






[jira] [Commented] (HDDS-3728) Support Storage space Level Quota for bucket

2020-09-22 Thread mingchao zhao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200492#comment-17200492
 ] 

mingchao zhao commented on HDDS-3728:
-

[~amaliujia] I'm nearly finished with PR for space quota, you can start with 
Name Space.

> Support Storage space Level Quota for bucket
> 
>
> Key: HDDS-3728
> URL: https://issues.apache.org/jira/browse/HDDS-3728
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: Rui Wang
>Priority: Major
>







[jira] [Commented] (HDDS-3730) Support Namespace Level quota for bucket

2020-09-22 Thread mingchao zhao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200490#comment-17200490
 ] 

mingchao zhao commented on HDDS-3730:
-

[~amaliujia] Yes, you can do this.

> Support Namespace Level quota for bucket
> 
>
> Key: HDDS-3730
> URL: https://issues.apache.org/jira/browse/HDDS-3730
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: Rui Wang
>Priority: Major
>







[jira] [Commented] (HDDS-541) Ozone Quota support.

2020-09-22 Thread mingchao zhao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200489#comment-17200489
 ] 

mingchao zhao commented on HDDS-541:


Hi [~amaliujia], I would really welcome you to work on this JIRA together. The 
PR corresponding to space quota for volume/bucket is now nearly complete. You 
can start with NameSpace (count quota).

> Ozone Quota support.
> 
>
> Key: HDDS-541
> URL: https://issues.apache.org/jira/browse/HDDS-541
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Namit Maheshwari
>Assignee: mingchao zhao
>Priority: Major
>  Labels: Triaged
>  Time Spent: 96h
>  Remaining Estimate: 120h
>
> Create a volume with just 1 MB as quota
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh volume create 
> --quota=1MB --user=root /hive
> 2018-09-23 02:10:11,283 [main] INFO - Creating Volume: hive, with root as 
> owner and quota set to 1048576 bytes.
> {code}
> Now create a bucket and put a big key greater than 1MB in the bucket
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh bucket create 
> /hive/bucket1
> 2018-09-23 02:10:38,003 [main] INFO - Creating Bucket: hive/bucket1, with 
> Versioning false and Storage Type set to DISK
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ls -l 
> ../../ozone-0.3.0-SNAPSHOT.tar.gz
> -rw-r--r-- 1 root root 165903437 Sep 21 13:16 
> ../../ozone-0.3.0-SNAPSHOT.tar.gz
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
> /hive/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
> volume/bucket/key name required in putKey
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
> /hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key info 
> /hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz
> {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Sun, 23 Sep 2018 02:13:02 GMT",
> "modifiedOn" : "Sun, 23 Sep 2018 02:13:08 GMT",
> "size" : 165903437,
> "keyName" : "ozone-0.3.0-SNAPSHOT.tar.gz",
> "keyLocations" : [ {
> "containerID" : 2,
> "localID" : 100772661343420416,
> "length" : 134217728,
> "offset" : 0
> }, {
> "containerID" : 3,
> "localID" : 100772661661007873,
> "length" : 31685709,
> "offset" : 0
> } ]
> }{code}
> It was able to put a 165 MB file on a volume with just 1MB quota.
>  
> Currently Ozone does not support quotas, so I think this should be a new 
> feature.
>  The design document can be found in the attachment ([design google 
> docs|https://docs.google.com/document/d/1ohbGn5N7FN6OD15xMShHH2SrtZRYx0-zUf9vjatn_OM/edit])






[jira] [Updated] (HDDS-4269) Ozone DataNode thinks a volume is failed if an unexpected file is in the HDDS root directory

2020-09-22 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDDS-4269:
--
Labels: newbie  (was: )

> Ozone DataNode thinks a volume is failed if an unexpected file is in the HDDS 
> root directory
> 
>
> Key: HDDS-4269
> URL: https://issues.apache.org/jira/browse/HDDS-4269
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Affects Versions: 1.1.0
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: newbie
>
> Took me some time to debug a trivial bug.
> DataNode crashes after this mysterious error and no explanation:
> {noformat}
> 10:11:44.382 PM   INFOMutableVolumeSetMoving Volume : 
> /var/lib/hadoop-ozone/fake_datanode/data/hdds to failed Volumes
> 10:11:46.287 PM   ERROR   StateContextCritical error occurred in 
> StateMachine, setting shutDownMachine
> 10:11:46.287 PM   ERROR   DatanodeStateMachineDatanodeStateMachine 
> Shutdown due to an critical error
> {noformat}
> Turns out that if there are unexpected files under the hdds directory 
> ($hdds.datanode.dir/hdds), the DN thinks the volume is bad and moves it to 
> the failed volume list without any error explanation. I was editing the 
> VERSION file and vim created a temp file under the directory. This is 
> impossible to debug without reading the code.
> {code:java|title=HddsVolumeUtil#checkVolume()}
> } else if(hddsFiles.length == 2) {
>   // The files should be Version and SCM directory
>   if (scmDir.exists()) {
> return true;
>   } else {
> logger.error("Volume {} is in Inconsistent state, expected scm " +
> "directory {} does not exist", volumeRoot, scmDir
> .getAbsolutePath());
> return false;
>   }
> } else {
>   // The hdds root dir should always have 2 files. One is Version file
>   // and other is SCM directory.
>   < HERE!
>   return false;
> }
> {code}






[jira] [Created] (HDDS-4269) Ozone DataNode thinks a volume is failed if an unexpected file is in the HDDS root directory

2020-09-22 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HDDS-4269:
-

 Summary: Ozone DataNode thinks a volume is failed if an unexpected 
file is in the HDDS root directory
 Key: HDDS-4269
 URL: https://issues.apache.org/jira/browse/HDDS-4269
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Affects Versions: 1.1.0
Reporter: Wei-Chiu Chuang


Took me some time to debug a trivial bug.

DataNode crashes after this mysterious error and no explanation:
{noformat}
10:11:44.382 PM INFOMutableVolumeSetMoving Volume : 
/var/lib/hadoop-ozone/fake_datanode/data/hdds to failed Volumes
10:11:46.287 PM ERROR   StateContextCritical error occurred in 
StateMachine, setting shutDownMachine
10:11:46.287 PM ERROR   DatanodeStateMachineDatanodeStateMachine Shutdown 
due to an critical error
{noformat}
Turns out that if there are unexpected files under the hdds directory 
($hdds.datanode.dir/hdds), the DN thinks the volume is bad and moves it to the 
failed volume list without any error explanation. I was editing the VERSION 
file and vim created a temp file under the directory. This is impossible to 
debug without reading the code.
{code:java|title=HddsVolumeUtil#checkVolume()}
} else if(hddsFiles.length == 2) {
  // The files should be Version and SCM directory
  if (scmDir.exists()) {
return true;
  } else {
logger.error("Volume {} is in Inconsistent state, expected scm " +
"directory {} does not exist", volumeRoot, scmDir
.getAbsolutePath());
return false;
  }
} else {
  // The hdds root dir should always have 2 files. One is Version file
  // and other is SCM directory.
  < HERE!
  return false;
}
{code}
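The fall-through branch marked above rejects the volume without saying what it found. A minimal, hypothetical sketch of a friendlier check follows (the class and method names are invented; the real HddsVolumeUtil#checkVolume also validates the SCM id and cluster id rather than accepting any directory):

```java
import java.io.File;

public class VolumeCheckSketch {
  // Instead of silently returning false when the hdds root holds anything
  // beyond the VERSION file and the SCM directory, report exactly which
  // unexpected entries were found (e.g. an editor swap file), so the
  // operator does not have to read the code to diagnose a failed volume.
  static boolean checkHddsRoot(File volumeRoot) {
    File[] entries = volumeRoot.listFiles();
    if (entries == null) {
      System.err.println("Cannot list hdds root " + volumeRoot);
      return false;
    }
    boolean hasVersion = false;
    boolean hasScmDir = false;
    StringBuilder unexpected = new StringBuilder();
    for (File f : entries) {
      if (f.isFile() && f.getName().equals("VERSION")) {
        hasVersion = true;
      } else if (f.isDirectory()) {
        hasScmDir = true; // simplified: the real check matches the SCM id
      } else {
        unexpected.append(' ').append(f.getName());
      }
    }
    if (!hasVersion || !hasScmDir || unexpected.length() > 0) {
      System.err.println("Volume " + volumeRoot + " is in an inconsistent"
          + " state; unexpected entries:" + unexpected);
      return false;
    }
    return true;
  }
}
```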






[GitHub] [hadoop-ozone] avijayanhwx edited a comment on pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


avijayanhwx edited a comment on pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#issuecomment-697011800


   There are some follow up work items for the 'Prepare for Upgrade' flow. 
   
   - Purging Logs after applying the last txn.
   - Validating that an OM is "prepared for upgrade" when starting up in a 
newer version.
   - Acceptance Tests
   
   These work items will be handled in subsequent PRs.






[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


avijayanhwx commented on pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#issuecomment-697011800


   There are some follow up work items for Prepare for Upgrade flow. 
   
   - Purging Logs after applying the last txn.
   - Validating that an OM is "prepared for upgrade" when starting up in a 
newer version.
   - Acceptance Tests
   
   These work items will be handled in subsequent PRs.






[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


avijayanhwx commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r492976213



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -994,6 +1005,45 @@ public static boolean omInit(OzoneConfiguration conf) 
throws IOException,
 }
   }
 
+  public boolean applyAllPendingTransactions()
+  throws InterruptedException, IOException {
+
+if (!isRatisEnabled) {
+  LOG.info("Ratis not enabled. Nothing to do.");
+  return true;
+}
+
+String purgeConfig = omRatisServer.getServer()
+.getProperties().get(PURGE_UPTO_SNAPSHOT_INDEX_KEY);
+if (!Boolean.parseBoolean(purgeConfig)) {
+  throw new IllegalStateException("Cannot prepare OM for Upgrade since  " +
+  "raft.server.log.purge.upto.snapshot.index is not true");
+}

Review comment:
   Maybe not. In the Ratis StateMachineUpdater, the takeSnapshot() method 
uses this config to purge logs immediately after taking a state machine 
snapshot. Hence, I thought it was good to have a check in place to make sure 
no one changes the config from within. I am OK with removing it for now and 
handling snapshot + log purge in HDDS-4268 as a follow-up patch.








[jira] [Assigned] (HDDS-4182) Onboard HDDS-3869 into Layout version management

2020-09-22 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan reassigned HDDS-4182:
---

Assignee: (was: Aravindan Vijayan)

> Onboard HDDS-3869 into Layout version management
> 
>
> Key: HDDS-4182
> URL: https://issues.apache.org/jira/browse/HDDS-4182
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Priority: Major
>
> In HDDS-3869 (Use different column families for datanode block and metadata), 
>  there was a backward compatible change made in the Ozone datanode RocksDB. 
> This JIRA tracks the effort to use a "Layout Version" to track this change 
> such that it is NOT used before finalizing the cluster.
> cc [~erose], [~hkoneru]






[GitHub] [hadoop-ozone] avijayanhwx edited a comment on pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


avijayanhwx edited a comment on pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#issuecomment-696839280


   > Hi @avijayanhwx
   > 
   > the initial changes look good to me thank you for sharing the wip state, 
and it seems to be a good direction so far.
   > Maybe one minor comment from me as well, are we sure we want to add 
--prepareForDowngrade as an option alias? It suggests that we might support 
downgrade, and I fear that might cause some misunderstandings.
   > I am unsure whether we can use the same functionality to get back to an 
older version in all scenarios either.
   
   Thanks for the review @fapifta. I agree "downgrades" may not be supportable 
from any arbitrary state (post finalize). Since this is a specialized command, 
I see no harm in adding the alias. When a downgrade is not supported, starting 
the component in the older version will be flagged anyway. And this prepare 
step is a must-do for any supported downgrade. In the future, we may be able 
to add a bit of validation in the prepareDowngrade step to check finalization 
state or some marker on disk.






[jira] [Assigned] (HDDS-3730) Support Namespace Level quota for bucket

2020-09-22 Thread Rui Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Wang reassigned HDDS-3730:
--

Assignee: Rui Wang

> Support Namespace Level quota for bucket
> 
>
> Key: HDDS-3730
> URL: https://issues.apache.org/jira/browse/HDDS-3730
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: Rui Wang
>Priority: Major
>







[jira] [Commented] (HDDS-3728) Support Storage space Level Quota for bucket

2020-09-22 Thread Rui Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200212#comment-17200212
 ] 

Rui Wang commented on HDDS-3728:


[~simonss] [~micahzhao] do you mind me working on this JIRA?

> Support Storage space Level Quota for bucket
> 
>
> Key: HDDS-3728
> URL: https://issues.apache.org/jira/browse/HDDS-3728
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: Rui Wang
>Priority: Major
>







[jira] [Commented] (HDDS-3730) Support Namespace Level quota for bucket

2020-09-22 Thread Rui Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17200213#comment-17200213
 ] 

Rui Wang commented on HDDS-3730:


[~simonss] [~micahzhao] do you mind me working on this JIRA? 

This one seems related to HDDS-3728

> Support Namespace Level quota for bucket
> 
>
> Key: HDDS-3730
> URL: https://issues.apache.org/jira/browse/HDDS-3730
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Priority: Major
>







[jira] [Assigned] (HDDS-3728) Support Storage space Level Quota for bucket

2020-09-22 Thread Rui Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Wang reassigned HDDS-3728:
--

Assignee: Rui Wang

> Support Storage space Level Quota for bucket
> 
>
> Key: HDDS-3728
> URL: https://issues.apache.org/jira/browse/HDDS-3728
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: Rui Wang
>Priority: Major
>







[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


avijayanhwx commented on pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#issuecomment-696845211


   > Only some minor comments above.
   > In addition, can you add a new unit test for the PrepareForUpgrade 
scenario? The current unit test change cannot cover this.
   
   Thanks for the review @linyiqun. I am planning to write tests in my next 
commit and bring this PR out of "Draft" status. 






[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


avijayanhwx commented on pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#issuecomment-696839280


   > Hi @avijayanhwx
   > 
   > the initial changes look good to me thank you for sharing the wip state, 
and it seems to be a good direction so far.
   > Maybe one minor comment from me as well, are we sure we want to add 
--prepareForDowngrade as an option alias? It suggests that we might support 
downgrade, and I fear that might cause some misunderstandings.
   > I am unsure whether we can use the same functionality to get back to an 
older version in all scenarios either.
   
   Thanks for the review @fapifta. I agree "downgrades" may not be supportable 
from any arbitrary state (post finalize). Since this is a specialized command, 
I see no harm in adding the alias. When a downgrade is not supported, starting 
the component in the older version will be flagged anyway. And this prepare 
step is a must-do for any supported downgrade. In the future, we may be able 
to add a bit of validation in the prepareDowngrade step to check finalization 
state or some marker on disk.






[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


avijayanhwx commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r492878138



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerStarter.java
##
@@ -98,6 +98,28 @@ public void initOm()
 }
   }
 
+
+  /**
+   * This function implements a sub-command to allow the OM to be
+   * "prepared for upgrade".
+   */
+  @CommandLine.Command(name = "--prepareForUpgrade",
+  aliases = {"--prepareForDowngrade", "--flushTransactions"},

Review comment:
   +1 to @fapifta's reply.








[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


avijayanhwx commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r492877283



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -994,6 +1005,45 @@ public static boolean omInit(OzoneConfiguration conf) 
throws IOException,
 }
   }
 
+  public boolean applyAllPendingTransactions()
+  throws InterruptedException, IOException {
+
+if (!isRatisEnabled) {
+  LOG.info("Ratis not enabled. Nothing to do.");
+  return true;
+}
+
+String purgeConfig = omRatisServer.getServer()
+.getProperties().get(PURGE_UPTO_SNAPSHOT_INDEX_KEY);
+if (!Boolean.parseBoolean(purgeConfig)) {
+  throw new IllegalStateException("Cannot prepare OM for Upgrade since  " +
+  "raft.server.log.purge.upto.snapshot.index is not true");
+}
+
+waitForAllTxnsApplied(omRatisServer.getOmStateMachine(),
+omRatisServer.getRaftGroup(),
+(RaftServerProxy) omRatisServer.getServer(),
+TimeUnit.MINUTES.toSeconds(5));

Review comment:
   Thanks, will change it to a variable.
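The waitForAllTxnsApplied call discussed above boils down to a bounded poll of the state machine's applied index. A rough sketch under assumed accessors follows (the two LongSupplier parameters stand in for the real Ratis last-committed and last-applied index getters; waitCaughtUp is an invented name, not Ozone's actual API):

```java
import java.util.concurrent.TimeUnit;
import java.util.function.LongSupplier;

public class ApplyWaitSketch {
  // Poll until the state machine's last-applied index catches up with the
  // last committed raft log index, or give up once the timeout expires.
  static boolean waitCaughtUp(LongSupplier lastCommittedIndex,
      LongSupplier lastAppliedIndex, long timeoutSeconds)
      throws InterruptedException {
    long deadline = System.nanoTime()
        + TimeUnit.SECONDS.toNanos(timeoutSeconds);
    while (System.nanoTime() < deadline) {
      if (lastAppliedIndex.getAsLong() >= lastCommittedIndex.getAsLong()) {
        return true; // every committed transaction has been applied
      }
      Thread.sleep(100); // poll interval; timeouts belong in named constants
    }
    return false; // timed out with a gap still open
  }
}
```

Keeping the timeout in a named constant (as the review suggests) rather than an inline TimeUnit expression makes the contract of the wait obvious at the call site.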








[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


hanishakoneru commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r492845659



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -994,6 +1005,45 @@ public static boolean omInit(OzoneConfiguration conf) 
throws IOException,
 }
   }
 
+  public boolean applyAllPendingTransactions()
+  throws InterruptedException, IOException {
+
+if (!isRatisEnabled) {
+  LOG.info("Ratis not enabled. Nothing to do.");
+  return true;
+}
+
+String purgeConfig = omRatisServer.getServer()
+.getProperties().get(PURGE_UPTO_SNAPSHOT_INDEX_KEY);
+if (!Boolean.parseBoolean(purgeConfig)) {
+  throw new IllegalStateException("Cannot prepare OM for Upgrade since  " +
+  "raft.server.log.purge.upto.snapshot.index is not true");
+}

Review comment:
   OMRatisServer always sets this property to true. It is not configurable. 
Is this check still needed?








[jira] [Assigned] (HDDS-4268) Prepare for Upgrade step should purge the log after waiting for the last txn to be applied.

2020-09-22 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan reassigned HDDS-4268:
---

Assignee: Aravindan Vijayan

> Prepare for Upgrade step should purge the log after waiting for the last txn 
> to be applied.
> ---
>
> Key: HDDS-4268
> URL: https://issues.apache.org/jira/browse/HDDS-4268
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>
> This is a follow up task from HDDS-4227 in which the prepare 
> upgrade/downgrade task should purge the Raft log immediately after waiting 
> for the last txn to be applied. This is to make sure that we don't "apply" 
> transactions in different versions of the code across the quorum. A lagging 
> follower will use a Ratis snapshot to bootstrap itself on restart.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4268) Prepare for Upgrade step should purge the log after waiting for the last txn to be applied.

2020-09-22 Thread Aravindan Vijayan (Jira)
Aravindan Vijayan created HDDS-4268:
---

 Summary: Prepare for Upgrade step should purge the log after 
waiting for the last txn to be applied.
 Key: HDDS-4268
 URL: https://issues.apache.org/jira/browse/HDDS-4268
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Aravindan Vijayan


This is a follow up task from HDDS-4227 in which the prepare upgrade/downgrade 
task should purge the Raft log immediately after waiting for the last txn to be 
applied. This is to make sure that we don't "apply" transactions in different 
versions of the code across the quorum. A lagging follower will use a Ratis 
snapshot to bootstrap itself on restart.






[jira] [Updated] (HDDS-3297) TestOzoneClientKeyGenerator is flaky

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3297:
-
Labels: TriagePending flaky-test ozone-flaky-test pull-request-available  
(was: TriagePending flaky-test ozone-flaky-test)

> TestOzoneClientKeyGenerator is flaky
> 
>
> Key: HDDS-3297
> URL: https://issues.apache.org/jira/browse/HDDS-3297
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Reporter: Marton Elek
>Priority: Critical
>  Labels: TriagePending, flaky-test, ozone-flaky-test, 
> pull-request-available
> Attachments: 
> org.apache.hadoop.ozone.freon.TestOzoneClientKeyGenerator-output.txt
>
>
> Sometimes it's hanging and stopped after a timeout.






[GitHub] [hadoop-ozone] aryangupta1998 opened a new pull request #1442: HDDS-3297. Enable TestOzoneClientKeyGenerator.

2020-09-22 Thread GitBox


aryangupta1998 opened a new pull request #1442:
URL: https://github.com/apache/hadoop-ozone/pull/1442


   ## What changes were proposed in this pull request?
   
   Enable TestOzoneClientKeyGenerator.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3297
   
   ## How was this patch tested?
   
   Tested manually.
   Also, the test was triggered 20 times:
   https://github.com/aryangupta1998/hadoop-ozone/actions/runs/267051707
   






[jira] [Updated] (HDDS-3966) Intermittent crash in TestOMRatisSnapshots

2020-09-22 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3966:
-
Labels: pull-request-available  (was: )

> Intermittent crash in TestOMRatisSnapshots
> --
>
> Key: HDDS-3966
> URL: https://issues.apache.org/jira/browse/HDDS-3966
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> TestOMRatisSnapshots was recently enabled and is crashing intermittently:
> https://github.com/elek/ozone-build-results/tree/master/2020/07/14/1690/it-hdds-om
> https://github.com/elek/ozone-build-results/tree/master/2020/07/14/1710/it-hdds-om
> https://github.com/elek/ozone-build-results/tree/master/2020/07/15/1713/it-hdds-om






[GitHub] [hadoop-ozone] aryangupta1998 opened a new pull request #1441: HDDS-3966. Enable TestOMRatisSnapshots.

2020-09-22 Thread GitBox


aryangupta1998 opened a new pull request #1441:
URL: https://github.com/apache/hadoop-ozone/pull/1441


   ## What changes were proposed in this pull request?
   
   Enable TestOMRatisSnapshots.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3966
   
   ## How was this patch tested?
   
   Tested Manually.
   Also triggered the test 20 times
   https://github.com/aryangupta1998/hadoop-ozone/actions/runs/267042410
   






[jira] [Updated] (HDDS-541) Ozone Quota support.

2020-09-22 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao updated HDDS-541:
---
Description: 
Create a volume with just 1 MB as quota
{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh volume create 
--quota=1MB --user=root /hive
2018-09-23 02:10:11,283 [main] INFO - Creating Volume: hive, with root as owner 
and quota set to 1048576 bytes.
{code}
Now create a bucket and put a big key greater than 1MB in the bucket
{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh bucket create 
/hive/bucket1
2018-09-23 02:10:38,003 [main] INFO - Creating Bucket: hive/bucket1, with 
Versioning false and Storage Type set to DISK
[root@ctr-e138-1518143905142-481027-01-02 bin]# ls -l 
../../ozone-0.3.0-SNAPSHOT.tar.gz
-rw-r--r-- 1 root root 165903437 Sep 21 13:16 ../../ozone-0.3.0-SNAPSHOT.tar.gz
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
/hive/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
volume/bucket/key name required in putKey
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
/hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key info 
/hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz
{
"version" : 0,
"md5hash" : null,
"createdOn" : "Sun, 23 Sep 2018 02:13:02 GMT",
"modifiedOn" : "Sun, 23 Sep 2018 02:13:08 GMT",
"size" : 165903437,
"keyName" : "ozone-0.3.0-SNAPSHOT.tar.gz",
"keyLocations" : [ {
"containerID" : 2,
"localID" : 100772661343420416,
"length" : 134217728,
"offset" : 0
}, {
"containerID" : 3,
"localID" : 100772661661007873,
"length" : 31685709,
"offset" : 0
} ]
}{code}
It was able to put a 165 MB file on a volume with just 1MB quota.

 

Currently Ozone doesn't support quotas, so I think this should be a new feature.
 The design document can be found here: ([design google 
docs|https://docs.google.com/document/d/1ohbGn5N7FN6OD15xMShHH2SrtZRYx0-zUf9vjatn_OM/edit])

  was:
Create a volume with just 1 MB as quota
{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh volume create 
--quota=1MB --user=root /hive
2018-09-23 02:10:11,283 [main] INFO - Creating Volume: hive, with root as owner 
and quota set to 1048576 bytes.
{code}
Now create a bucket and put a big key greater than 1MB in the bucket
{code:java}
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh bucket create 
/hive/bucket1
2018-09-23 02:10:38,003 [main] INFO - Creating Bucket: hive/bucket1, with 
Versioning false and Storage Type set to DISK
[root@ctr-e138-1518143905142-481027-01-02 bin]# ls -l 
../../ozone-0.3.0-SNAPSHOT.tar.gz
-rw-r--r-- 1 root root 165903437 Sep 21 13:16 ../../ozone-0.3.0-SNAPSHOT.tar.gz
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
/hive/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
volume/bucket/key name required in putKey
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
/hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
[root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key info 
/hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz
{
"version" : 0,
"md5hash" : null,
"createdOn" : "Sun, 23 Sep 2018 02:13:02 GMT",
"modifiedOn" : "Sun, 23 Sep 2018 02:13:08 GMT",
"size" : 165903437,
"keyName" : "ozone-0.3.0-SNAPSHOT.tar.gz",
"keyLocations" : [ {
"containerID" : 2,
"localID" : 100772661343420416,
"length" : 134217728,
"offset" : 0
}, {
"containerID" : 3,
"localID" : 100772661661007873,
"length" : 31685709,
"offset" : 0
} ]
}{code}
It was able to put a 165 MB file on a volume with just 1MB quota.

 

Currently Ozone haven't support Quota, So I think this should be a new feature .
 The design document can be referred to the attachment. 
([link|https://docs.google.com/document/d/1ohbGn5N7FN6OD15xMShHH2SrtZRYx0-zUf9vjatn_OM/edit?usp=sharing]
 to google docs)


> Ozone Quota support.
> 
>
> Key: HDDS-541
> URL: https://issues.apache.org/jira/browse/HDDS-541
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Namit Maheshwari
>Assignee: mingchao zhao
>Priority: Major
>  Labels: Triaged
>  Time Spent: 96h
>  Remaining Estimate: 120h
>
> Create a volume with just 1 MB as quota
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh volume create 
> --quota=1MB --user=root /hive
> 2018-09-23 02:10:11,283 [main] INFO - Creating Volume: hive, with root as 
> owner and quota set to 1048576 bytes.
> {code}
> Now create a bucket and put a big key greater than 1MB in the bucket
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh bucket create 
> /hive/bucket1
> 2018-09-23 02:10:38,003 [main] INFO 

[jira] [Updated] (HDDS-541) Ozone Quota support.

2020-09-22 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao updated HDDS-541:
---
Attachment: (was: Ozone Quota Design.pdf)

> Ozone Quota support.
> 
>
> Key: HDDS-541
> URL: https://issues.apache.org/jira/browse/HDDS-541
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Namit Maheshwari
>Assignee: mingchao zhao
>Priority: Major
>  Labels: Triaged
>  Time Spent: 96h
>  Remaining Estimate: 120h
>
> Create a volume with just 1 MB as quota
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh volume create 
> --quota=1MB --user=root /hive
> 2018-09-23 02:10:11,283 [main] INFO - Creating Volume: hive, with root as 
> owner and quota set to 1048576 bytes.
> {code}
> Now create a bucket and put a big key greater than 1MB in the bucket
> {code:java}
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh bucket create 
> /hive/bucket1
> 2018-09-23 02:10:38,003 [main] INFO - Creating Bucket: hive/bucket1, with 
> Versioning false and Storage Type set to DISK
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ls -l 
> ../../ozone-0.3.0-SNAPSHOT.tar.gz
> -rw-r--r-- 1 root root 165903437 Sep 21 13:16 
> ../../ozone-0.3.0-SNAPSHOT.tar.gz
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
> /hive/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
> volume/bucket/key name required in putKey
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key put 
> /hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz ../../ozone-0.3.0-SNAPSHOT.tar.gz
> [root@ctr-e138-1518143905142-481027-01-02 bin]# ./ozone sh key info 
> /hive/bucket1/ozone-0.3.0-SNAPSHOT.tar.gz
> {
> "version" : 0,
> "md5hash" : null,
> "createdOn" : "Sun, 23 Sep 2018 02:13:02 GMT",
> "modifiedOn" : "Sun, 23 Sep 2018 02:13:08 GMT",
> "size" : 165903437,
> "keyName" : "ozone-0.3.0-SNAPSHOT.tar.gz",
> "keyLocations" : [ {
> "containerID" : 2,
> "localID" : 100772661343420416,
> "length" : 134217728,
> "offset" : 0
> }, {
> "containerID" : 3,
> "localID" : 100772661661007873,
> "length" : 31685709,
> "offset" : 0
> } ]
> }{code}
> It was able to put a 165 MB file on a volume with just 1MB quota.
>  
> Currently Ozone doesn't support quotas, so I think this should be a new
> feature.
>  The design document can be found here: 
> ([link|https://docs.google.com/document/d/1ohbGn5N7FN6OD15xMShHH2SrtZRYx0-zUf9vjatn_OM/edit?usp=sharing]
>  to google docs)
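The report above shows a 165903437-byte key accepted on a volume with a 1048576-byte quota, i.e. the quota is recorded but never enforced at write time. A minimal sketch of the kind of space accounting the feature needs (the class and method names here are hypothetical, not the actual OM code):

```java
// Hypothetical sketch of a volume space-quota check on key write.
// Names (VolumeQuota, tryAllocate, release) are illustrative only.
public class VolumeQuota {
  private final long quotaInBytes;   // e.g. 1 MB = 1048576
  private long usedBytes;            // updated on key create/delete

  public VolumeQuota(long quotaInBytes) {
    this.quotaInBytes = quotaInBytes;
  }

  /** Reserves space for a key if it fits; returns false if it would exceed quota. */
  public synchronized boolean tryAllocate(long keySizeBytes) {
    if (usedBytes + keySizeBytes > quotaInBytes) {
      return false;  // reject: would exceed the volume quota
    }
    usedBytes += keySizeBytes;
    return true;
  }

  /** Releases space when a key is deleted. */
  public synchronized void release(long keySizeBytes) {
    usedBytes = Math.max(0, usedBytes - keySizeBytes);
  }

  public synchronized long getUsedBytes() {
    return usedBytes;
  }
}
```

With such a check in the write path, the 165903437-byte put against a 1048576-byte quota would be rejected instead of succeeding.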






[jira] [Assigned] (HDDS-4106) Volume space: Supports clearing spaceQuota

2020-09-22 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao reassigned HDDS-4106:
---

Assignee: mingchao zhao

> Volume space: Supports clearing spaceQuota
> --
>
> Key: HDDS-4106
> URL: https://issues.apache.org/jira/browse/HDDS-4106
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: mingchao zhao
>Assignee: mingchao zhao
>Priority: Major
>
> Volume space quota supports deleting spaceQuota.






[jira] [Assigned] (HDDS-4105) Volume space: update spaceQuota should check the current quotaUsageInBytes

2020-09-22 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao reassigned HDDS-4105:
---

Assignee: mingchao zhao

> Volume space: update spaceQuota should check the current quotaUsageInBytes 
> ---
>
> Key: HDDS-4105
> URL: https://issues.apache.org/jira/browse/HDDS-4105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: mingchao zhao
>Assignee: mingchao zhao
>Priority: Major
>
> update spaceQuota should check the current quotaUsage:
> 1. If quotaUsage > new spaceQuota, updates are not allowed.
> 2. When updating spaceQuota, we need to update quotaUsage by getting the size 
> of all keys in the current volume.
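The first rule above reduces to a single comparison; a small sketch (hypothetical names, not the actual OM request handler):

```java
// Hypothetical sketch of the spaceQuota update rule from HDDS-4105:
// an update is rejected when current usage already exceeds the new quota.
public final class QuotaUpdateCheck {
  private QuotaUpdateCheck() { }

  /**
   * @param quotaUsageInBytes current usage, e.g. the summed size of all
   *                          keys in the volume (rule 2)
   * @param newSpaceQuota     proposed new quota in bytes
   * @return true if the update is allowed (rule 1)
   */
  public static boolean isUpdateAllowed(long quotaUsageInBytes,
                                        long newSpaceQuota) {
    return quotaUsageInBytes <= newSpaceQuota;
  }
}
```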






[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


linyiqun commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r492801087



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1179,15 +1229,22 @@ public void start() throws IOException {
   // Allow OM to start as Http Server failure is not fatal.
   LOG.error("OM HttpServer failed to start.", ex);
 }
-omRpcServer.start();
-isOmRpcServerRunning = true;
 
+if (!prepareForUpgrade) {
+  omRpcServer.start();
+  isOmRpcServerRunning = true;
+}

Review comment:
   Okay, enabling the RPC server on the next startup makes sense to me.








[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


linyiqun commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r492798607



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -994,6 +1005,45 @@ public static boolean omInit(OzoneConfiguration conf) 
throws IOException,
 }
   }
 
+  public boolean applyAllPendingTransactions()
+  throws InterruptedException, IOException {
+
+if (!isRatisEnabled) {
+  LOG.info("Ratis not enabled. Nothing to do.");
+  return true;
+}
+
+String purgeConfig = omRatisServer.getServer()
+.getProperties().get(PURGE_UPTO_SNAPSHOT_INDEX_KEY);
+if (!Boolean.parseBoolean(purgeConfig)) {
+  throw new IllegalStateException("Cannot prepare OM for Upgrade since  " +
+  "raft.server.log.purge.upto.snapshot.index is not true");
+}
+
+waitForAllTxnsApplied(omRatisServer.getOmStateMachine(),
+omRatisServer.getRaftGroup(),
+(RaftServerProxy) omRatisServer.getServer(),
+TimeUnit.MINUTES.toSeconds(5));

Review comment:
   >Also in 5 minutes I would expect in all cases that the unapplied 
transactions can be applied, as the number of this kind of transactions should 
not be too much as far as I know, or if it is then the system is not healthy 
anyway.
   
   I'm okay with keeping 5 minutes as the current wait threshold; just one minor 
comment: can we define a constant for this time value rather than hard-coding it 
in the method here? 
   








[jira] [Updated] (HDDS-4197) Failed to load existing service definition files: ...SubcommandWithParent

2020-09-22 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4197:
---
Labels: jdk11  (was: jdk11 pull-request-available)

> Failed to load existing service definition files: ...SubcommandWithParent
> -
>
> Key: HDDS-4197
> URL: https://issues.apache.org/jira/browse/HDDS-4197
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: build
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: jdk11
> Fix For: 1.1.0
>
>
> {code}
> [INFO] Apache Hadoop HDDS Tools ... FAILURE
> ...
> [ERROR] Failed to load existing service definition files: 
> java.nio.file.NoSuchFileException: 
> hadoop-hdds/tools/target/classes/META-INF/services/org.apache.hadoop.hdds.cli.SubcommandWithParent
> {code}






[GitHub] [hadoop-ozone] cxorm commented on pull request #1175: HDDS-2766. security/SecuringDataNodes.md

2020-09-22 Thread GitBox


cxorm commented on pull request #1175:
URL: https://github.com/apache/hadoop-ozone/pull/1175#issuecomment-696701388


   Thanks @iamabug for the work. 
   Overall it looks great to me (including the fixes). 
   I would commit it if all CI checks pass.






[jira] [Created] (HDDS-4267) Ozone command always print warn message before execution

2020-09-22 Thread Yiqun Lin (Jira)
Yiqun Lin created HDDS-4267:
---

 Summary: Ozone command always print warn message before execution
 Key: HDDS-4267
 URL: https://issues.apache.org/jira/browse/HDDS-4267
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone CLI
Reporter: Yiqun Lin


The ozone command always prints a warning message before execution:
{noformat}
[hdfs@lyq yiqlin]$ ~/ozone/bin/ozone version
/home/hdfs/releases/ozone-1.0.0/etc/hadoop/hadoop-env.sh: line 34: ulimit: core 
file size: cannot modify limit: Operation not permitted
{noformat}
{noformat}
[hdfs@ yiqlin]$ ~/ozone/bin/ozone sh volume list
/home/hdfs/releases/ozone-1.0.0/etc/hadoop/hadoop-env.sh: line 34: ulimit: core 
file size: cannot modify limit: Operation not permitted
{noformat}
This is because the hdfs user in my cluster cannot execute the command below in 
hadoop-env.sh:
{noformat}
# # Enable core dump when crash in C++
ulimit -c unlimited
{noformat}
ulimit -c was introduced in HDDS-3941. The root cause seems to be that ulimit -c 
requires a root user to execute, but the hdfs user in my cluster is a non-root user.






[GitHub] [hadoop-ozone] fapifta commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


fapifta commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r492706805



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerStarter.java
##
@@ -98,6 +98,28 @@ public void initOm()
 }
   }
 
+
+  /**
+   * This function implements a sub-command to allow the OM to be
+   * "prepared for upgrade".
+   */
+  @CommandLine.Command(name = "--prepareForUpgrade",
+  aliases = {"--prepareForDowngrade", "--flushTransactions"},

Review comment:
   This command should be issued when the OM is already stopped, before the 
software bits are upgraded. It is a command that starts up the OM code in a 
special way, so that it starts up only the current local OM, as I 
understand.








[GitHub] [hadoop-ozone] fapifta commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


fapifta commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r492705325



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1179,15 +1229,22 @@ public void start() throws IOException {
   // Allow OM to start as Http Server failure is not fatal.
   LOG.error("OM HttpServer failed to start.", ex);
 }
-omRpcServer.start();
-isOmRpcServerRunning = true;
 
+if (!prepareForUpgrade) {
+  omRpcServer.start();
+  isOmRpcServerRunning = true;
+}

Review comment:
   As we discussed this with @avijayanhwx during internal design 
discussions, after the OM is started in prepareForUpgrade mode, it tears itself 
down once the last transaction is applied from the Raft log and a snapshot is 
taken in Raft. With that, the OM has reached a state in which all transactions 
are applied and none need to be applied after the next startup.
   
   This is to ensure that all the transactions are applied with the code that 
was in place when the transactions arrived, so that we can ensure consistency 
of the state across the different OM instances.
   
   After this is finished, and the OM has torn down from prepareForUpgrade, a 
normal startup of the OM is needed to bring it up again, and at that time the 
RPC server will start properly.
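The "apply everything, then tear down" flow discussed here boils down to polling until the applied index catches up with the last committed index within a timeout. A rough sketch under assumed accessor names (the real code goes through the OM state machine and Ratis' RaftServerProxy; the suppliers below stand in for those index accessors):

```java
import java.util.function.LongSupplier;

// Hypothetical sketch of waiting until all committed Ratis transactions
// have been applied to the state machine, with a bounded wait.
public final class ApplyAllTxns {
  private ApplyAllTxns() { }

  /**
   * Polls until lastApplied >= lastCommitted or the timeout expires.
   * @return true if the state machine caught up (safe to snapshot/purge),
   *         false on timeout (caller shuts down and asks the user to retry)
   */
  public static boolean waitForAllTxnsApplied(LongSupplier lastCommitted,
                                              LongSupplier lastApplied,
                                              long timeoutMillis,
                                              long pollMillis)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMillis;
    while (System.currentTimeMillis() < deadline) {
      if (lastApplied.getAsLong() >= lastCommitted.getAsLong()) {
        return true;
      }
      Thread.sleep(pollMillis);
    }
    return false;
  }
}
```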








[GitHub] [hadoop-ozone] fapifta commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


fapifta commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r492702681



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -994,6 +1005,45 @@ public static boolean omInit(OzoneConfiguration conf) 
throws IOException,
 }
   }
 
+  public boolean applyAllPendingTransactions()
+  throws InterruptedException, IOException {
+
+if (!isRatisEnabled) {
+  LOG.info("Ratis not enabled. Nothing to do.");
+  return true;
+}
+
+String purgeConfig = omRatisServer.getServer()
+.getProperties().get(PURGE_UPTO_SNAPSHOT_INDEX_KEY);
+if (!Boolean.parseBoolean(purgeConfig)) {
+  throw new IllegalStateException("Cannot prepare OM for Upgrade since  " +
+  "raft.server.log.purge.upto.snapshot.index is not true");
+}
+
+waitForAllTxnsApplied(omRatisServer.getOmStateMachine(),
+omRatisServer.getRaftGroup(),
+(RaftServerProxy) omRatisServer.getServer(),
+TimeUnit.MINUTES.toSeconds(5));

Review comment:
   Are you sure we want to add a configuration for this one? I would argue 
we do not need one more configurable setting, at least for this.
   prepareForUpgrade is a special startup mode of the OM, during which it applies 
all transactions that are in the Raft log.
   If 5 minutes is not enough to apply all transactions in the Raft log, then 
the process will shut down and let the user know that some of the transactions 
were not applied, so that the user can start the process again as a last 
resort to apply further transactions. If we assume that at least a few 
transactions are applied on each run, sooner or later the user can get to a 
state where everything is applied; and if none of the transactions can be 
applied within 5 minutes, that sounds like a serious problem anyway, 
independent of the upgrade.
   
   Also in 5 minutes I would expect in all cases that the unapplied 
transactions can be applied, as the number of this kind of transactions should 
not be too much as far as I know, or if it is then the system is not healthy 
anyway.
   
   Can you please elaborate on why it would be useful to make this configurable?








[GitHub] [hadoop-ozone] fapifta commented on pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-22 Thread GitBox


fapifta commented on pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#issuecomment-696692461


   Hi @avijayanhwx 
   
   the initial changes look good to me, thank you for sharing the WIP state; it 
seems to be a good direction so far.
   Maybe one minor comment from me as well: are we sure we want to add 
--prepareForDowngrade as an option alias? It suggests that we might support 
downgrade, and I fear that might cause some misunderstandings.
   I am unsure whether we can use the same functionality to get back to an 
older version in all scenarios either.






[GitHub] [hadoop-ozone] fapifta commented on pull request #1425: HDDS-2981 Add unit tests for Proto [de]serialization

2020-09-22 Thread GitBox


fapifta commented on pull request #1425:
URL: https://github.com/apache/hadoop-ozone/pull/1425#issuecomment-696680680


   Hello @llemec 
   
   thank you for your comments and continued work on this PR. Indeed, I can 
accept this approach based on your argument.
   
   +1 (non-binding) to merge the changes. Let's wait for a committer to review it 
once more and commit it if there are no further comments ;)






[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1272: HDDS-2660. Create insight point for datanode container protocol

2020-09-22 Thread GitBox


adoroszlai commented on pull request #1272:
URL: https://github.com/apache/hadoop-ozone/pull/1272#issuecomment-696662025


   > > Thanks @elek for updating the patch. Interestingly log -v stopped 
working, even if I execute log first (as mentioned previously).
   > 
   > I double-checked and it worked for me. When you use leader, the messages 
are displayed immediately; when you use follower, the messages will appear 
only after the commit...
   
   It seems to depend on the content.  Plain text files work fine, but it stops 
working on the first binary, e.g. `ozone freon ockg -n1 -t1`.  I guess it's 
caused by control chars in the random data.
   
   I think we should avoid logging chunk content.  
`ContainerCommandRequestMessage` implements related logic to clear data.
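One way to keep chunk payloads out of the insight log, similar in spirit to the data-clearing logic mentioned above, is to render only payload metadata instead of the bytes. A sketch with hypothetical types (the real code operates on the protobuf `ContainerCommandRequestProto`, not plain strings and byte arrays):

```java
// Hypothetical sketch: describe a write request for logging with the chunk
// payload replaced by its length, so binary data never reaches the log.
public final class LogSanitizer {
  private LogSanitizer() { }

  public static String describeWrite(String blockId, byte[] chunkData) {
    // Log only metadata about the payload, never the bytes themselves.
    int len = (chunkData == null) ? 0 : chunkData.length;
    return "WriteChunk{block=" + blockId + ", dataLen=" + len + "}";
  }
}
```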






[jira] [Created] (HDDS-4266) CreateFile : store parent dir entries into DirTable and file entry into separate FileTable

2020-09-22 Thread Rakesh Radhakrishnan (Jira)
Rakesh Radhakrishnan created HDDS-4266:
--

 Summary: CreateFile : store parent dir entries into DirTable and 
file entry into separate FileTable
 Key: HDDS-4266
 URL: https://issues.apache.org/jira/browse/HDDS-4266
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Rakesh Radhakrishnan
Assignee: Rakesh Radhakrishnan


This task is to handle the #createFile ofs client request. The idea is to 
store all the missing parent directories in the {{keyname}} into the 'DirTable' 
and the file entry into a separate 'FileTable'.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Updated] (HDDS-4222) [OzoneFS optimization] Provide a mechanism for efficient path lookup

2020-09-22 Thread Rakesh Radhakrishnan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Radhakrishnan updated HDDS-4222:
---
Status: Patch Available  (was: Open)

> [OzoneFS optimization] Provide a mechanism for efficient path lookup
> 
>
> Key: HDDS-4222
> URL: https://issues.apache.org/jira/browse/HDDS-4222
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Rakesh Radhakrishnan
>Assignee: Rakesh Radhakrishnan
>Priority: Major
>  Labels: pull-request-available
> Attachments: Ozone FS Optimizations - Efficient Lookup using cache.pdf
>
>
> With the new HDDS-2939 file-system-like semantics design, multiple DB lookups 
> are required to traverse the path components in top-down fashion. This task 
> is to discuss use cases and proposals to reduce the performance penalties 
> during path lookups.
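The directory cache discussed here (see the TestOMMetadataCache messages below 
in this thread, with their DIR_LRU policy and init/max capacity settings) can 
be sketched with an access-ordered LinkedHashMap. This is an illustrative 
assumption, not the actual Ozone implementation; the class names are invented:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Sketch: a minimal LRU cache for path-component -> objectID lookups, so
 * repeated top-down traversals can skip the DB for recently seen directories.
 */
public class DirLruCacheSketch {

  static final class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxCapacity;

    LruCache(int initCapacity, int maxCapacity) {
      super(initCapacity, 0.75f, true); // true = access order (LRU semantics)
      this.maxCapacity = maxCapacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
      // Called after each put(); evicts the least-recently-used entry.
      return size() > maxCapacity;
    }
  }

  public static void main(String[] args) {
    // Keys mirror the "parentObjectID/name" style used in the test below.
    LruCache<String, Long> cache = new LruCache<>(1, 2);
    cache.put("512/a", 1025L);
    cache.put("1025/b", 1026L);
    cache.get("512/a");         // touch 'a' so 'b' becomes the eldest entry
    cache.put("1026/c", 1027L); // exceeds capacity 2 -> evicts 'b'
    System.out.println(cache.containsKey("512/a"));  // true
    System.out.println(cache.containsKey("1025/b")); // false (evicted)
  }
}
```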






[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1431: HDDS-4254. Bucket space: add usedBytes and update it when create and delete key.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1431:
URL: https://github.com/apache/hadoop-ozone/pull/1431#discussion_r492614881



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCreateResponse.java
##
@@ -99,6 +103,11 @@ protected void addToDBBatch(OMMetadataManager 
omMetadataManager,
 omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
 omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
 omVolumeArgs);
+// update bucket usedBytes.
+omMetadataManager.getBucketTable().putWithBatch(batchOperation,
+omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
+omBucketInfo.getBucketName()),
+omBucketInfo);

Review comment:
   The same. Please double-check the rest of the files. 








[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1431: HDDS-4254. Bucket space: add usedBytes and update it when create and delete key.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1431:
URL: https://github.com/apache/hadoop-ozone/pull/1431#discussion_r492614881



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCreateResponse.java
##
@@ -99,6 +103,11 @@ protected void addToDBBatch(OMMetadataManager 
omMetadataManager,
 omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
 omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
 omVolumeArgs);
+// update bucket usedBytes.
+omMetadataManager.getBucketTable().putWithBatch(batchOperation,
+omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
+omBucketInfo.getBucketName()),
+omBucketInfo);

Review comment:
   the same.








[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1431: HDDS-4254. Bucket space: add usedBytes and update it when create and delete key.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1431:
URL: https://github.com/apache/hadoop-ozone/pull/1431#discussion_r492614631



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMKeyCommitResponse.java
##
@@ -77,6 +80,11 @@ public void addToDBBatch(OMMetadataManager omMetadataManager,
 omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
 omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
 omVolumeArgs);
+// update bucket usedBytes.
+omMetadataManager.getBucketTable().putWithBatch(batchOperation,
+omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
+omBucketInfo.getBucketName()),
+omBucketInfo);

Review comment:
   same as above.








[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1431: HDDS-4254. Bucket space: add usedBytes and update it when create and delete key.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1431:
URL: https://github.com/apache/hadoop-ozone/pull/1431#discussion_r492614416



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/key/OMAllocateBlockResponse.java
##
@@ -72,5 +76,10 @@ public void addToDBBatch(OMMetadataManager omMetadataManager,
 omMetadataManager.getVolumeTable().putWithBatch(batchOperation,
 omMetadataManager.getVolumeKey(omVolumeArgs.getVolume()),
 omVolumeArgs);
+// update bucket usedBytes.
+omMetadataManager.getBucketTable().putWithBatch(batchOperation,
+omMetadataManager.getBucketKey(omVolumeArgs.getVolume(),
+omBucketInfo.getBucketName()),
+omBucketInfo);

Review comment:
   the last two lines can be merged into one line.








[GitHub] [hadoop-ozone] runzhiwang commented on a change in pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-22 Thread GitBox


runzhiwang commented on a change in pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#discussion_r492595294



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
##
@@ -98,8 +105,65 @@ private boolean exceedPipelineNumberLimit(ReplicationFactor 
factor) {
 return false;
   }
 
+  private Map getSuggestedLeaderCount(
+  List dns) {
+Map suggestedLeaderCount = new HashMap<>();
+for (DatanodeDetails dn : dns) {
+  suggestedLeaderCount.put(dn, 0);
+
+  Set pipelineIDSet = getNodeManager().getPipelines(dn);
+  for (PipelineID pipelineID : pipelineIDSet) {
+try {
+  Pipeline pipeline = 
getPipelineStateManager().getPipeline(pipelineID);
+  if (!pipeline.isClosed()
+  && dn.getUuid().equals(pipeline.getSuggestedLeaderId())) {
+suggestedLeaderCount.put(dn, suggestedLeaderCount.get(dn) + 1);
+  }
+} catch (PipelineNotFoundException e) {
+  LOG.debug("Pipeline not found in pipeline state manager : {}",
+  pipelineID, e);
+}
+  }
+}
+
+return suggestedLeaderCount;
+  }
+
+  private DatanodeDetails getSuggestedLeader(List dns) {
+Map suggestedLeaderCount =

Review comment:
   @bshashikant I agree. @xiaoyuyao What do you think of this suggestion?








[GitHub] [hadoop-ozone] rakeshadr edited a comment on pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

2020-09-22 Thread GitBox


rakeshadr edited a comment on pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#issuecomment-696600100


   Thanks a lot @linyiqun for the useful review comments!
   Updated another commit addressing the comments.






[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

2020-09-22 Thread GitBox


rakeshadr commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492582953



##
File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
##
@@ -2521,4 +2521,32 @@
   filesystem semantics.
 
   
+
+  
+ozone.om.metadata.cache.directory

Review comment:
   Fixed test failure








[GitHub] [hadoop-ozone] rakeshadr commented on pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

2020-09-22 Thread GitBox


rakeshadr commented on pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#issuecomment-696600100


   Thanks a lot @linyiqun for the useful review comments!






[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

2020-09-22 Thread GitBox


rakeshadr commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492582620



##
File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java
##
@@ -0,0 +1,276 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.File;
+import java.io.IOException;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Testing OMMetadata cache policy class.
+ */
+public class TestOMMetadataCache {
+
+  private OzoneConfiguration conf;
+  private OMMetadataManager omMetadataManager;
+
+  /**
+   * Set a timeout for each test.
+   */
+  @Rule
+  public Timeout timeout = new Timeout(10);
+
+  @Before
+  public void setup() {
+//initialize config
+conf = new OzoneConfiguration();
+  }
+
+  @Test
+  public void testVerifyDirCachePolicies() {
+//1. Verify disabling cache
+conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+CachePolicy.DIR_NOCACHE.getPolicy());
+CacheStore dirCacheStore = OMMetadataCacheFactory.getCache(
+OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+Assert.assertEquals("Cache Policy mismatches!", CachePolicy.DIR_NOCACHE,
+dirCacheStore.getCachePolicy());
+
+//2. Invalid cache policy
+conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "InvalidCachePolicy");
+dirCacheStore = OMMetadataCacheFactory.getCache(
+OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+Assert.assertEquals("Expected NullCache for an invalid CachePolicy",
+CachePolicy.DIR_NOCACHE, dirCacheStore.getCachePolicy());
+
+//3. Directory LRU cache policy
+conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT);
+dirCacheStore = OMMetadataCacheFactory.getCache(
+OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+Assert.assertEquals("Cache Type mismatches!", CachePolicy.DIR_LRU,
+dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testLRUCacheDirectoryPolicy() throws IOException {
+conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+CachePolicy.DIR_LRU.getPolicy());
+conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+File testDir = GenericTestUtils.getRandomizedTestDir();
+conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+testDir.toString());
+
+omMetadataManager = new OmMetadataManagerImpl(conf);
+CacheStore dirCacheStore =
+omMetadataManager.getOMCacheManager().getDirCache();
+Assert.assertEquals("CachePolicy Mismatches!", CachePolicy.DIR_LRU,
+dirCacheStore.getCachePolicy());
+
+OMCacheKey dirA = new OMCacheKey<>("512/a");
+OMCacheValue dirAObjID = new OMCacheValue<>(1025L);
+OMCacheKey dirB = new OMCacheKey<>(dirAObjID + "/b");
+OMCacheValue dirBObjID = new OMCacheValue<>(1026L);
+dirCacheStore.put(dirA, dirAObjID);
+dirCacheStore.put(dirB, dirBObjID);
+// Step1. Cached Entries => {a, b}
+Assert.assertEquals("Unexpected Cache Value",
+dirAObjID.getCacheValue(), 
dirCacheStore.get(dirA).getCacheValue());
+Assert.assertEquals("Unexpected Cache Value",
+dirBObjID.getCacheValue(), 
dirCacheStore.get(dirB).getCacheValue());
+
+// Step2. Verify eviction
+// Cached Entries {frontEntry, rearEntry} => {c, b}
+OMCacheKey dirC = new OMCacheKey<>(dirBObjID + "/c");
+

[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1434: HDDS-3727. Volume space: check quotaUsageInBytes when write key.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1434:
URL: https://github.com/apache/hadoop-ozone/pull/1434#discussion_r492575126



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
##
@@ -707,6 +707,62 @@ public void testPutKey() throws IOException {
 }
   }
 
+  @Test
+  public void testCheckUsedBytesQuota() throws IOException {

Review comment:
   Can we add a used-bytes check to each test case?








[GitHub] [hadoop-ozone] linyiqun commented on pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

2020-09-22 Thread GitBox


linyiqun commented on pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#issuecomment-696588382


   > > Hi @rakeshadr , some initial review comments below.
   > > In additional, one question from me: this is the first task of dir 
cache, there will be other further subtasks. But this part of work depends on 
[HDDS-2939](https://issues.apache.org/jira/browse/HDDS-2939) be completed. So 
that means [HDDS-4222](https://issues.apache.org/jira/browse/HDDS-4222) tasks 
cannot be merged immediately and be blocked for a long time? How do we plan to 
coordinate with [HDDS-2939](https://issues.apache.org/jira/browse/HDDS-2939) 
and [HDDS-4222](https://issues.apache.org/jira/browse/HDDS-4222) feature 
development works?
   > 
   > Good comment. yes, cache should be integrated eventually(case-by-case) to 
get full benefit. But 
[HDDS-4222](https://issues.apache.org/jira/browse/HDDS-4222) can be upstreamed 
separately and not blocked, I feel.
   > 
   > Cache can be integrated once 
[HDDS-2949](https://issues.apache.org/jira/browse/HDDS-2949) is finished. 
During dir creation, it checks the 
[DIR_EXISTS](https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequest.java#L185)
 and cache can be integrated into this call path.
   > 
   > Later, while implementing File, lookups tasks, delete, rename etc will 
integrate into that areas.
   
   Get it, sounds good to me.
   






[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1437: HDDS-4222: [OzoneFS optimization] Provide a mechanism for efficient path lookup

2020-09-22 Thread GitBox


linyiqun commented on a change in pull request #1437:
URL: https://github.com/apache/hadoop-ozone/pull/1437#discussion_r492562181



##
File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
##
@@ -2521,4 +2521,32 @@
   filesystem semantics.
 
   
+
+  
+ozone.om.metadata.cache.directory

Review comment:
   This name also needs to be updated; the current unit test was broken by it:
   > 
TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareConfigurationClassAgainstXml:493
 class org.apache.hadoop.ozone.OzoneConfigKeys class 
org.apache.hadoop.hdds.scm.ScmConfigKeys class 
org.apache.hadoop.ozone.om.OMConfigKeys class 
org.apache.hadoop.hdds.HddsConfigKeys class 
org.apache.hadoop.ozone.recon.ReconServerConfigKeys class 
org.apache.hadoop.ozone.s3.S3GatewayConfigKeys class 
org.apache.hadoop.hdds.scm.server.SCMHTTPServerConfig has 1 variables missing 
in ozone-default.xml Entries:   ozone.om.metadata.cache.directory.policy 
expected:<0> but was:<1>
   [ERROR]   
TestOzoneConfigurationFields>TestConfigurationFieldsBase.testCompareXmlAgainstConfigurationClass

##
File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/cache/TestOMMetadataCache.java
##
@@ -0,0 +1,276 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.cache;
+
+import org.apache.hadoop.hdds.HddsConfigKeys;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OmMetadataManagerImpl;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.File;
+import java.io.IOException;
+
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Testing OMMetadata cache policy class.
+ */
+public class TestOMMetadataCache {
+
+  private OzoneConfiguration conf;
+  private OMMetadataManager omMetadataManager;
+
+  /**
+   * Set a timeout for each test.
+   */
+  @Rule
+  public Timeout timeout = new Timeout(10);
+
+  @Before
+  public void setup() {
+//initialize config
+conf = new OzoneConfiguration();
+  }
+
+  @Test
+  public void testVerifyDirCachePolicies() {
+//1. Verify disabling cache
+conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+CachePolicy.DIR_NOCACHE.getPolicy());
+CacheStore dirCacheStore = OMMetadataCacheFactory.getCache(
+OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+Assert.assertEquals("Cache Policy mismatches!", CachePolicy.DIR_NOCACHE,
+dirCacheStore.getCachePolicy());
+
+//2. Invalid cache policy
+conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY, "InvalidCachePolicy");
+dirCacheStore = OMMetadataCacheFactory.getCache(
+OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+Assert.assertEquals("Expected NullCache for an invalid CachePolicy",
+CachePolicy.DIR_NOCACHE, dirCacheStore.getCachePolicy());
+
+//3. Directory LRU cache policy
+conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT);
+dirCacheStore = OMMetadataCacheFactory.getCache(
+OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+OMConfigKeys.OZONE_OM_CACHE_DIR_DEFAULT, conf);
+Assert.assertEquals("Cache Type mismatches!", CachePolicy.DIR_LRU,
+dirCacheStore.getCachePolicy());
+  }
+
+  @Test
+  public void testLRUCacheDirectoryPolicy() throws IOException {
+conf.set(OMConfigKeys.OZONE_OM_CACHE_DIR_POLICY,
+CachePolicy.DIR_LRU.getPolicy());
+conf.setInt(OMConfigKeys.OZONE_OM_CACHE_DIR_INIT_CAPACITY, 1);
+conf.setLong(OMConfigKeys.OZONE_OM_CACHE_DIR_MAX_CAPACITY, 2);
+
+File testDir = GenericTestUtils.getRandomizedTestDir();
+conf.set(HddsConfigKeys.OZONE_METADATA_DIRS,
+testDir.toString());
+
+omMetadataManager = new 

[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1434: HDDS-3727. Volume space: check quotaUsageInBytes when write key.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1434:
URL: https://github.com/apache/hadoop-ozone/pull/1434#discussion_r492533073



##
File path: hadoop-ozone/interface-client/src/main/proto/OmClientProtocol.proto
##
@@ -314,6 +314,8 @@ enum Status {
 
 PARTIAL_RENAME = 65;
 
+QUOTA_CHECK_ERROR = 66;

Review comment:
   same as above.








[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1434: HDDS-3727. Volume space: check quotaUsageInBytes when write key.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1434:
URL: https://github.com/apache/hadoop-ozone/pull/1434#discussion_r492526790



##
File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
##
@@ -229,7 +229,9 @@ public String toString() {
 
 NOT_SUPPORTED_OPERATION,
 
-PARTIAL_RENAME
+PARTIAL_RENAME,
+
+QUOTA_CHECK_ERROR

Review comment:
QUOTA_CHECK_ERROR  -> QUOTA_EXCEEDED








[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1434: HDDS-3727. Volume space: check quotaUsageInBytes when write key.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1434:
URL: https://github.com/apache/hadoop-ozone/pull/1434#discussion_r492526790



##
File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/exceptions/OMException.java
##
@@ -229,7 +229,9 @@ public String toString() {
 
 NOT_SUPPORTED_OPERATION,
 
-PARTIAL_RENAME
+PARTIAL_RENAME,
+
+QUOTA_CHECK_ERROR

Review comment:
QUOTA_CHECK_ERROR  -> QUOTA_EXCEED








[jira] [Created] (HDDS-4265) Refactor OzoneQuota to make it easy to support more quota type

2020-09-22 Thread Sammi Chen (Jira)
Sammi Chen created HDDS-4265:


 Summary: Refactor OzoneQuota to make it easy to support more quota 
type
 Key: HDDS-4265
 URL: https://issues.apache.org/jira/browse/HDDS-4265
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Sammi Chen









[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r492517940



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
##
@@ -297,4 +300,17 @@ private BucketEncryptionInfoProto getBeinfo(
 CipherSuite.convert(metadata.getCipher(;
 return bekb.build();
   }
+
+  public void checkQuotaBytesValid(OmVolumeArgs omVolumeArgs,
+  OmBucketInfo omBucketInfo) {
+long volumeQuotaInBytes = omVolumeArgs.getQuotaInBytes();
+long quotaInBytes = omBucketInfo.getQuotaInBytes();
+if(volumeQuotaInBytes < quotaInBytes) {

Review comment:
   Need to check the sum of all bucket quotas under the volume. We also need 
this check when updating the volume quota. 
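   The point being made can be sketched like this (an assumed, simplified 
standalone illustration — `quotaBytesValid` and its parameters are invented 
names, not the actual `checkQuotaBytesValid` from the PR): the new bucket's 
quota must fit alongside the quotas already allocated to the volume's other 
buckets, not merely be smaller than the volume quota.

```java
import java.util.Map;

/**
 * Sketch: validate a new bucket quota against the volume quota by summing
 * the quotas of all existing buckets under that volume.
 */
public class QuotaCheckSketch {

  /** Returns true if the new bucket quota still fits under the volume quota. */
  static boolean quotaBytesValid(long volumeQuotaInBytes,
      Map<String, Long> existingBucketQuotas, long newBucketQuotaInBytes) {
    // Sum the quota already promised to every existing bucket in the volume.
    long allocated = existingBucketQuotas.values().stream()
        .mapToLong(Long::longValue).sum();
    return allocated + newBucketQuotaInBytes <= volumeQuotaInBytes;
  }

  public static void main(String[] args) {
    Map<String, Long> buckets = Map.of("b1", 30L, "b2", 40L);
    System.out.println(quotaBytesValid(100L, buckets, 20L)); // 90 <= 100
    System.out.println(quotaBytesValid(100L, buckets, 40L)); // 110 > 100
  }
}
```

   The same summation would apply when shrinking the volume quota: the new 
volume quota must remain at least the sum of all existing bucket quotas.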








[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r492518254



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketSetPropertyRequest.java
##
@@ -150,6 +148,20 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 .setIsVersionEnabled(dbBucketInfo.getIsVersionEnabled());
   }
 
+  //Check quotaInBytes and quotaInCounts to update
+  String volumeKey = omMetadataManager.getVolumeKey(volumeName);
+  OmVolumeArgs omVolumeArgs = omMetadataManager.getVolumeTable()
+  .get(volumeKey);
+  if (checkQuotaBytesValid(omVolumeArgs, omBucketArgs)) {

Review comment:
   same as above. 








[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r492517940



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
##
@@ -297,4 +300,17 @@ private BucketEncryptionInfoProto getBeinfo(
 CipherSuite.convert(metadata.getCipher(;
 return bekb.build();
   }
+
+  public void checkQuotaBytesValid(OmVolumeArgs omVolumeArgs,
+  OmBucketInfo omBucketInfo) {
+long volumeQuotaInBytes = omVolumeArgs.getQuotaInBytes();
+long quotaInBytes = omBucketInfo.getQuotaInBytes();
+if(volumeQuotaInBytes < quotaInBytes) {

Review comment:
   Need to check the sum of all bucket quotas under the volume.








[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r492515786



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/SetQuotaHandler.java
##
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.shell.volume;
+
+import org.apache.hadoop.hdds.client.OzoneQuota;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.shell.OzoneAddress;
+import org.apache.hadoop.ozone.shell.SetSpaceQuotaOptions;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+import java.io.IOException;
+
+/**
+ * Executes update volume calls.
+ */
+@Command(name = "setquota",
+description = "Set quota of the volumes")
+public class SetQuotaHandler extends VolumeHandler {
+
+  @CommandLine.Mixin
+  private SetSpaceQuotaOptions quotaOptions;
+
+  @Option(names = {"--bucket-quota"},
+  description = "Bucket counts of the volume to set (eg. 5)")

Review comment:
   set -> create








[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r492515197



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/SetQuotaHandler.java
##
@@ -0,0 +1,71 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.shell.volume;
+
+import org.apache.hadoop.hdds.client.OzoneQuota;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.shell.OzoneAddress;
+import org.apache.hadoop.ozone.shell.SetSpaceQuotaOptions;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+import java.io.IOException;
+
+/**
+ * Executes update volume calls.

Review comment:
   statement is stale. 








[jira] [Created] (HDDS-4264) Uniform naming conventions of Ozone Shell Options.

2020-09-22 Thread mingchao zhao (Jira)
mingchao zhao created HDDS-4264:
---

 Summary: Uniform naming conventions of Ozone Shell Options.
 Key: HDDS-4264
 URL: https://issues.apache.org/jira/browse/HDDS-4264
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: mingchao zhao
 Attachments: image-2020-09-22-14-51-18-968.png

Some of Ozone's current shell command options use camelCase ("hump") naming, while
others use '-' (kebab-case) separators. We need to unify the naming conventions.
 !image-2020-09-22-14-51-18-968.png! 
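
A small sketch of the proposed unification (the helper name is hypothetical; Ozone's shell would instead rename the picocli option strings directly): converting a camelCase option name to the kebab-case form shows what a unified convention would look like.

```java
public class OptionNameStyle {

  /**
   * Converts a camelCase option name to kebab-case,
   * e.g. "quotaInBytes" becomes "quota-in-bytes".
   */
  static String toKebabCase(String camel) {
    StringBuilder sb = new StringBuilder();
    for (char c : camel.toCharArray()) {
      if (Character.isUpperCase(c)) {
        // Each hump boundary becomes a '-' separator.
        sb.append('-').append(Character.toLowerCase(c));
      } else {
        sb.append(c);
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(toKebabCase("quotaInBytes")); // quota-in-bytes
    System.out.println(toKebabCase("spaceQuota"));   // space-quota
  }
}
```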



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-22 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r492511397



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/ClearSpaceQuotaOptions.java
##
@@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.shell;
+
+import picocli.CommandLine;
+
+/**
+ * Common options for 'clrquota' comands.

Review comment:
   typo: comands







