[jira] [Created] (HDDS-4237) Testing Infrastructure for network partitioning
Rui Wang created HDDS-4237: -- Summary: Testing Infrastructure for network partitioning Key: HDDS-4237 URL: https://issues.apache.org/jira/browse/HDDS-4237 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Rui Wang Network partitioning can cause a split-brain case where two leaders exist. We need some sort of testing infrastructure/framework to simulate such a case and verify whether our SCM HA implementation can achieve strong consistency. Two possible approaches were suggested by Mukul Kumar Singh: a) Blockade tests: Blockade is a Docker-based framework where the network for one DN can be isolated from the others. b) MiniOzoneChaosCluster: a unit-test-based framework where a random datanode is killed, which has helped in finding consistency issues. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDDS-3890) Stop BackgroundPipelineCreator when PipelineManager is closed
[ https://issues.apache.org/jira/browse/HDDS-3890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rui Wang reassigned HDDS-3890: -- Assignee: Rui Wang > Stop BackgroundPipelineCreator when PipelineManager is closed > - > > Key: HDDS-3890 > URL: https://issues.apache.org/jira/browse/HDDS-3890 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM HA >Reporter: Nanda kumar >Assignee: Rui Wang >Priority: Major > > {{BackgroundPipelineCreator}} is started by {{PipelineManager}} but never > stopped. We should stop the {{BackgroundPipelineCreator}} when > {{PipelineManager}} is closed
[GitHub] [hadoop-ozone] amaliujia commented on a change in pull request #1398: HDDS-4210. ResolveBucket during checkAcls fails.
amaliujia commented on a change in pull request #1398: URL: https://github.com/apache/hadoop-ozone/pull/1398#discussion_r487468731 ## File path: hadoop-ozone/dist/src/main/compose/ozonesecure-om-ha/test.sh ## @@ -30,6 +30,8 @@ execute_robot_test scm kinit.robot execute_robot_test scm freon +execute_robot_test scm basic/links.robot Review comment: Out of curiosity: why add this robot test to both `hadoop-ozone/dist/src/main/compose/ozone-om-ha-s3/test.sh` and `hadoop-ozone/dist/src/main/compose/ozonesecure-om-ha/test.sh`? Will it just run the same test twice in each CI run? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #1405: HDDS-4143. Implement a factory for OM Requests that returns an instance based on layout version.
avijayanhwx commented on pull request #1405: URL: https://github.com/apache/hadoop-ozone/pull/1405#issuecomment-691330372
[GitHub] [hadoop-ozone] runzhiwang commented on pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes
runzhiwang commented on pull request #1371: URL: https://github.com/apache/hadoop-ozone/pull/1371#issuecomment-690911218
[GitHub] [hadoop-ozone] lamber-ken removed a comment on pull request #1417: HDDS-4324. Add important comment to ListVolumes logic
lamber-ken removed a comment on pull request #1417: URL: https://github.com/apache/hadoop-ozone/pull/1417#issuecomment-690928333 help
[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1361: HDDS-4155. Directory and filename can end up with same name in a path.
bharatviswa504 commented on a change in pull request #1361: URL: https://github.com/apache/hadoop-ozone/pull/1361#discussion_r487246329 ## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSWithObjectStoreCreate.java ## @@ -303,6 +304,12 @@ public void testMPUFailDuetoDirectoryCreationBeforeComplete() } + @Test(expected = FileAlreadyExistsException.class) Review comment: Added a test with object store API for the same scenario
[GitHub] [hadoop-ozone] bshashikant merged pull request #1408: HDDS-4217. Remove test TestOzoneContainerRatis
bshashikant merged pull request #1408: URL: https://github.com/apache/hadoop-ozone/pull/1408
[GitHub] [hadoop-ozone] lokeshj1703 commented on pull request #1415: HDDS-4232. Use single thread for KeyDeletingService and OpenKeyCleanupService.
lokeshj1703 commented on pull request #1415: URL: https://github.com/apache/hadoop-ozone/pull/1415#issuecomment-690940004
[GitHub] [hadoop-ozone] lamber-ken commented on pull request #1417: HDDS-4324. Add important comment to ListVolumes logic
lamber-ken commented on pull request #1417: URL: https://github.com/apache/hadoop-ozone/pull/1417#issuecomment-690928333
[GitHub] [hadoop-ozone] amaliujia commented on pull request #1410: HDDS-3102. ozone getconf command should use the GenericCli parent class
amaliujia commented on pull request #1410: URL: https://github.com/apache/hadoop-ozone/pull/1410#issuecomment-691320364
[GitHub] [hadoop-ozone] timmylicheng commented on pull request #1413: HDDS-4228: add field 'num' to ALLOCATE_BLOCK of scm audit log.
timmylicheng commented on pull request #1413: URL: https://github.com/apache/hadoop-ozone/pull/1413#issuecomment-690833613 Merging
[GitHub] [hadoop-ozone] ChenSammi merged pull request #1337: HDDS-4129. change MAX_QUOTA_IN_BYTES to Long.MAX_VALUE.
ChenSammi merged pull request #1337: URL: https://github.com/apache/hadoop-ozone/pull/1337
[GitHub] [hadoop-ozone] bshashikant merged pull request #1409: HDDS-4218. Remove test TestRatisManager
bshashikant merged pull request #1409: URL: https://github.com/apache/hadoop-ozone/pull/1409
[GitHub] [hadoop-ozone] elek commented on pull request #1396: HDDS-4150. recon.api.TestEndpoints test is flaky
elek commented on pull request #1396: URL: https://github.com/apache/hadoop-ozone/pull/1396#issuecomment-691426288
[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1337: HDDS-4129. change MAX_QUOTA_IN_BYTES to Long.MAX_VALUE.
ChenSammi commented on pull request #1337: URL: https://github.com/apache/hadoop-ozone/pull/1337#issuecomment-690919896
[GitHub] [hadoop-ozone] elek commented on pull request #1336: HDDS-4119. Improve performance of the BufferPool management of Ozone client
elek commented on pull request #1336: URL: https://github.com/apache/hadoop-ozone/pull/1336#issuecomment-691125675
[GitHub] [hadoop-ozone] elek commented on pull request #1410: HDDS-3102. ozone getconf command should use the GenericCli parent class
elek commented on pull request #1410: URL: https://github.com/apache/hadoop-ozone/pull/1410#issuecomment-690929244
[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1398: HDDS-4210. ResolveBucket during checkAcls fails.
bharatviswa504 commented on a change in pull request #1398: URL: https://github.com/apache/hadoop-ozone/pull/1398#discussion_r487178258 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java ## @@ -3523,6 +3533,47 @@ public ResolvedBucket resolveBucketLink(Pair requested) visited); } + /** + * Resolves bucket symlinks. Read permission is required for following links. + * + * @param volumeAndBucket the bucket to be resolved (if it is a link) + * @param {@link OMClientRequest} which has information required to check + * permission. + * @param visited collects link buckets visited during the resolution to + * avoid infinite loops + * @return bucket location possibly updated with its actual volume and bucket + * after following bucket links + * @throws IOException (most likely OMException) if ACL check fails, bucket is + * not found, loop is detected in the links, etc. + */ + private Pair resolveBucketLink( + Pair volumeAndBucket, + Set> visited, + OMClientRequest omClientRequest) throws IOException { Review comment: Made few changes to not to duplicate code. ## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHAWithData.java ## @@ -69,6 +70,38 @@ public void testAllOMNodesRunning() throws Exception { createKeyTest(true); } + @Test + public void testBucketLinkOps() throws Exception { Review comment: Thanks for the tip. Enabled docker-compose tests.
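The review above concerns `resolveBucketLink`, whose javadoc says the `visited` set is there to detect loops while following bucket links. A minimal sketch of that cycle-detection idea, using a plain string map instead of Ozone's real (volume, bucket) pairs and omitting the ACL check; all names here are illustrative, not Ozone's actual API:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch only: follow links until a non-link target is reached, tracking
// visited names so a cycle raises an error instead of looping forever.
public class LinkResolverSketch {
    static String resolve(String start, Map<String, String> links) {
        Set<String> visited = new HashSet<>();
        String current = start;
        while (links.containsKey(current)) {
            // Seeing the same name twice means the links form a loop.
            if (!visited.add(current)) {
                throw new IllegalStateException("loop detected at " + current);
            }
            current = links.get(current);
        }
        return current; // an actual bucket, not a link
    }

    public static void main(String[] args) {
        System.out.println(resolve("a", Map.of("a", "b", "b", "c")));
    }
}
```

With `a -> b -> c` this resolves to `c`; with `a -> b -> a` it throws instead of spinning.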
[GitHub] [hadoop-ozone] lokeshj1703 commented on a change in pull request #1415: HDDS-4232. Use single thread for KeyDeletingService and OpenKeyCleanupService.
lokeshj1703 commented on a change in pull request #1415: URL: https://github.com/apache/hadoop-ozone/pull/1415#discussion_r486905798 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyDeletingService.java ## @@ -66,9 +66,6 @@ private static final Logger LOG = LoggerFactory.getLogger(KeyDeletingService.class); - // The thread pool size for key deleting service. - private final static int KEY_DELETING_CORE_POOL_SIZE = 2; Review comment: Addressed it in latest commit.
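The diff above removes the pool size of 2 in favor of a single thread. For context, a single-threaded scheduled executor runs periodic work strictly sequentially, so task runs can never overlap. A hedged sketch of that pattern, not Ozone's actual `BackgroundService` code; the task body is a placeholder:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: a periodic background service on one thread.
public class SingleThreadServiceSketch {
    public static int runFor(long millis) throws InterruptedException {
        ScheduledExecutorService exec = Executors.newSingleThreadScheduledExecutor();
        AtomicInteger runs = new AtomicInteger();
        // Placeholder for the key-deleting work; fixed delay means the next
        // run is scheduled only after the previous one finishes.
        exec.scheduleWithFixedDelay(runs::incrementAndGet, 0, 50, TimeUnit.MILLISECONDS);
        Thread.sleep(millis);
        exec.shutdownNow();
        return runs.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("task ran " + runFor(300) + " times on one thread");
    }
}
```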
[GitHub] [hadoop-ozone] lokeshj1703 commented on a change in pull request #1414: HDDS-4231. Background Service blocks on task results.
lokeshj1703 commented on a change in pull request #1414: URL: https://github.com/apache/hadoop-ozone/pull/1414#discussion_r486841264 ## File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/BackgroundService.java ## @@ -62,11 +56,11 @@ public BackgroundService(String serviceName, long interval, this.interval = interval; this.unit = unit; this.serviceName = serviceName; -this.serviceTimeout = serviceTimeout; +this.serviceTimeoutInNanos = TimeDuration.valueOf(serviceTimeout, unit) +.toLong(TimeUnit.NANOSECONDS); Review comment: I think we are ok with long even for nanoseconds. long can support a value of ~292 years denoted as nanoseconds.
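The "~292 years" figure in the comment above checks out: `Long.MAX_VALUE` nanoseconds is about 106,751 days. A quick verification using only the JDK:

```java
import java.util.concurrent.TimeUnit;

// Verifies the reviewer's claim that a long can hold ~292 years of nanoseconds.
public class NanosRangeCheck {
    public static long maxYears() {
        // Long.MAX_VALUE = 9_223_372_036_854_775_807 ns; toDays truncates.
        long days = TimeUnit.NANOSECONDS.toDays(Long.MAX_VALUE); // 106751
        return days / 365; // roughly 292
    }

    public static void main(String[] args) {
        System.out.println("Long.MAX_VALUE ns ~= " + maxYears() + " years");
    }
}
```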
[GitHub] [hadoop-ozone] lokeshj1703 commented on pull request #1414: HDDS-4231. Background Service blocks on task results.
lokeshj1703 commented on pull request #1414: URL: https://github.com/apache/hadoop-ozone/pull/1414#issuecomment-690939847
[GitHub] [hadoop-ozone] elek merged pull request #1336: HDDS-4119. Improve performance of the BufferPool management of Ozone client
elek merged pull request #1336: URL: https://github.com/apache/hadoop-ozone/pull/1336
[GitHub] [hadoop-ozone] timmylicheng closed pull request #1340: HDDS-3188 Add failover proxy for SCM block location.
timmylicheng closed pull request #1340: URL: https://github.com/apache/hadoop-ozone/pull/1340
[GitHub] [hadoop-ozone] amaliujia commented on a change in pull request #1415: HDDS-4232. Use single thread for KeyDeletingService.
amaliujia commented on a change in pull request #1415: URL: https://github.com/apache/hadoop-ozone/pull/1415#discussion_r487185841 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyDeletingService.java ## @@ -66,9 +66,6 @@ private static final Logger LOG = LoggerFactory.getLogger(KeyDeletingService.class); - // The thread pool size for key deleting service. - private final static int KEY_DELETING_CORE_POOL_SIZE = 2; Review comment: Thanks!
[GitHub] [hadoop-ozone] amaliujia commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.
amaliujia commented on a change in pull request #1412: URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r486775970 ## File path: hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java ## @@ -174,6 +184,20 @@ public OzoneBucket(ConfigurationSource conf, ClientProtocol proxy, this.modificationTime = Instant.ofEpochMilli(modificationTime); } + @SuppressWarnings("parameternumber") Review comment: Out of curiosity: what is the purpose of `@SuppressWarnings("parameternumber")`? ## File path: hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java ## @@ -464,6 +469,8 @@ public void createBucket( .setStorageType(storageType) .setSourceVolume(bucketArgs.getSourceVolume()) .setSourceBucket(bucketArgs.getSourceBucket()) +.setQuotaInBytes(quotaInBytes) +.setQuotaInCounts(quotaInCounts) Review comment: Do you need to verify whether `quotaInBytes` and `quotaInCounts` are valid? e.g. >= 0? ## File path: hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java ## @@ -174,6 +184,20 @@ public OzoneBucket(ConfigurationSource conf, ClientProtocol proxy, this.modificationTime = Instant.ofEpochMilli(modificationTime); } + @SuppressWarnings("parameternumber") Review comment: Out of curiosity: what is the purpose of `@SuppressWarnings("parameternumber")`? ## File path: hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java ## @@ -464,6 +469,8 @@ public void createBucket( .setStorageType(storageType) .setSourceVolume(bucketArgs.getSourceVolume()) .setSourceBucket(bucketArgs.getSourceBucket()) +.setQuotaInBytes(quotaInBytes) +.setQuotaInCounts(quotaInCounts) Review comment: Do you need to verify whether `quotaInBytes` and `quotaInCounts` are valid? e.g. >= 0? 
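The review question above — whether `quotaInBytes` and `quotaInCounts` should be validated before being set on the bucket request — could be addressed with a simple non-negativity guard. A minimal sketch of the suggested check (a hypothetical helper in Python for illustration, not the actual Ozone client API):

```python
def validate_quotas(quota_in_bytes, quota_in_counts):
    """Reject negative quota values before they reach the server.

    Hypothetical guard illustrating the review suggestion; the real
    Ozone client would perform this validation in Java.
    """
    if quota_in_bytes < 0:
        raise ValueError(
            "quotaInBytes must be >= 0, got %d" % quota_in_bytes)
    if quota_in_counts < 0:
        raise ValueError(
            "quotaInCounts must be >= 0, got %d" % quota_in_counts)
    return quota_in_bytes, quota_in_counts
```

Failing fast on the client side gives the caller a clear error instead of a server-side rejection after the RPC round trip.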
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1296: HDDS-4053. Volume space: add quotaUsageInBytes and update it when write and delete key.
xiaoyuyao commented on a change in pull request #1296: URL: https://github.com/apache/hadoop-ozone/pull/1296#discussion_r486775263 ## File path: hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneVolume.java ## @@ -131,6 +133,18 @@ public OzoneVolume(ConfigurationSource conf, ClientProtocol proxy, this.modificationTime = Instant.ofEpochMilli(modificationTime); } + @SuppressWarnings("parameternumber") Review comment: sounds good to me.
[GitHub] [hadoop-ozone] amaliujia commented on pull request #1410: HDDS-3102. ozone getconf command should use the GenericCli parent class
amaliujia commented on pull request #1410: URL: https://github.com/apache/hadoop-ozone/pull/1410#issuecomment-691320364
[GitHub] [hadoop-ozone] lamber-ken removed a comment on pull request #1417: HDDS-4324. Add important comment to ListVolumes logic
lamber-ken removed a comment on pull request #1417: URL: https://github.com/apache/hadoop-ozone/pull/1417#issuecomment-690928333
[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1361: HDDS-4155. Directory and filename can end up with same name in a path.
bharatviswa504 commented on a change in pull request #1361: URL: https://github.com/apache/hadoop-ozone/pull/1361#discussion_r487246329 ## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSWithObjectStoreCreate.java ## @@ -303,6 +304,12 @@ public void testMPUFailDuetoDirectoryCreationBeforeComplete() } + @Test(expected = FileAlreadyExistsException.class) Review comment: Added a test ## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSWithObjectStoreCreate.java ## @@ -303,6 +304,12 @@ public void testMPUFailDuetoDirectoryCreationBeforeComplete() } + @Test(expected = FileAlreadyExistsException.class) Review comment: Added a test with object store API for the same scenario
[GitHub] [hadoop-ozone] timmylicheng commented on pull request #1340: HDDS-3188 Add failover proxy for SCM block location.
timmylicheng commented on pull request #1340: URL: https://github.com/apache/hadoop-ozone/pull/1340#issuecomment-690835337 > The client failover logic is based on the suggested leader sent by SCM. The `String` value of the suggested leader sent by the SCM server is `RaftPeer#getAddress`, but on the client side this value is compared with `SCM_DUMMY_NODEID_PREFIX + ` which will never match. So the suggested leader is never used, and we always fail over to the next proxy in round-robin order. I have removed the suggestedLeader-related parts in this PR until we think it through. Thanks for the heads-up.
[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #1340: HDDS-3188 Add failover proxy for SCM block location.
timmylicheng commented on a change in pull request #1340: URL: https://github.com/apache/hadoop-ozone/pull/1340#discussion_r486736129 ## File path: hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/scm/proxy/SCMBlockLocationFailoverProxyProvider.java ## @@ -0,0 +1,281 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.hdds.scm.proxy; + +import com.google.common.annotations.VisibleForTesting; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hdds.conf.ConfigurationSource; +import org.apache.hadoop.hdds.scm.ScmConfigKeys; +import org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol; +import org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolPB; +import org.apache.hadoop.hdds.utils.LegacyHadoopConfigurationSource; +import org.apache.hadoop.io.retry.FailoverProxyProvider; +import org.apache.hadoop.io.retry.RetryPolicy; +import org.apache.hadoop.io.retry.RetryPolicy.RetryAction; +import org.apache.hadoop.ipc.ProtobufRpcEngine; +import org.apache.hadoop.ipc.RPC; +import org.apache.hadoop.net.NetUtils; +import org.apache.hadoop.security.UserGroupInformation; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.Closeable; +import java.io.IOException; +import java.net.InetSocketAddress; +import java.util.ArrayList; +import java.util.Collection; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; + +import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_NAMES; +import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_SERVICE_IDS_KEY; +import static org.apache.hadoop.hdds.HddsUtils.getScmAddressForBlockClients; +import static org.apache.hadoop.hdds.HddsUtils.getPortNumberFromConfigKeys; +import static org.apache.hadoop.hdds.HddsUtils.getHostName; + +/** + * Failover proxy provider for SCM. 
+ */ +public class SCMBlockLocationFailoverProxyProvider implements +FailoverProxyProvider&lt;ScmBlockLocationProtocolPB&gt;, Closeable { + public static final Logger LOG = + LoggerFactory.getLogger(SCMBlockLocationFailoverProxyProvider.class); + + private Map&lt;String, ProxyInfo&lt;ScmBlockLocationProtocolPB&gt;&gt; scmProxies; + private Map&lt;String, SCMProxyInfo&gt; scmProxyInfoMap; + private List&lt;String&gt; scmNodeIDList; + + private String currentProxySCMNodeId; + private int currentProxyIndex; + + private final ConfigurationSource conf; + private final long scmVersion; + + private final String scmServiceId; + + private String lastAttemptedLeader; + + private final int maxRetryCount; + private final long retryInterval; + + public static final String SCM_DUMMY_NODEID_PREFIX = "scm"; + + public SCMBlockLocationFailoverProxyProvider(ConfigurationSource conf) { +this.conf = conf; +this.scmVersion = RPC.getProtocolVersion(ScmBlockLocationProtocol.class); +this.scmServiceId = conf.getTrimmed(OZONE_SCM_SERVICE_IDS_KEY); +this.scmProxies = new HashMap&lt;&gt;(); +this.scmProxyInfoMap = new HashMap&lt;&gt;(); +this.scmNodeIDList = new ArrayList&lt;&gt;(); +loadConfigs(); + +this.currentProxyIndex = 0; +currentProxySCMNodeId = scmNodeIDList.get(currentProxyIndex); + +this.maxRetryCount = conf.getObject(SCMBlockClientConfig.class) +.getRetryCount(); +this.retryInterval = conf.getObject(SCMBlockClientConfig.class) Review comment: Updated
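The failover behavior discussed in this thread — jump to the suggested leader when it matches a known SCM node ID, otherwise advance round-robin through the configured nodes — can be modeled with a short sketch (illustrative Python with hypothetical node IDs; the real provider is the Java class above):

```python
class RoundRobinFailover:
    """Minimal model of client proxy failover across SCM nodes.

    If a suggested leader matches a known node ID, jump to it;
    otherwise advance to the next node in round-robin order.
    """

    def __init__(self, node_ids):
        self.node_ids = list(node_ids)
        self.index = 0

    def current(self):
        return self.node_ids[self.index]

    def failover(self, suggested_leader=None):
        if suggested_leader in self.node_ids:
            # Honor the leader hint when it maps to a known node.
            self.index = self.node_ids.index(suggested_leader)
        else:
            # Unknown or absent hint: plain round-robin.
            self.index = (self.index + 1) % len(self.node_ids)
        return self.current()
```

This also illustrates the bug reported in the PR discussion: if the server sends its address (e.g. `host:port`) while the client only knows dummy `scm<n>` IDs, the hint never matches and the `else` branch always runs.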
[GitHub] [hadoop-ozone] lokeshj1703 commented on pull request #1415: HDDS-4232. Use single thread for KeyDeletingService and OpenKeyCleanupService.
lokeshj1703 commented on pull request #1415: URL: https://github.com/apache/hadoop-ozone/pull/1415#issuecomment-690940004
[GitHub] [hadoop-ozone] lamber-ken commented on pull request #1417: HDDS-4324. Add important comment to ListVolumes logic
lamber-ken commented on pull request #1417: URL: https://github.com/apache/hadoop-ozone/pull/1417#issuecomment-690928333
[GitHub] [hadoop-ozone] bshashikant merged pull request #1408: HDDS-4217. Remove test TestOzoneContainerRatis
bshashikant merged pull request #1408: URL: https://github.com/apache/hadoop-ozone/pull/1408
[GitHub] [hadoop-ozone] runzhiwang commented on pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes
runzhiwang commented on pull request #1371: URL: https://github.com/apache/hadoop-ozone/pull/1371#issuecomment-690911218
[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #1405: HDDS-4143. Implement a factory for OM Requests that returns an instance based on layout version.
avijayanhwx commented on pull request #1405: URL: https://github.com/apache/hadoop-ozone/pull/1405#issuecomment-691330372
[GitHub] [hadoop-ozone] bshashikant merged pull request #1409: HDDS-4218. Remove test TestRatisManager
bshashikant merged pull request #1409: URL: https://github.com/apache/hadoop-ozone/pull/1409
[GitHub] [hadoop-ozone] ChenSammi merged pull request #1337: HDDS-4129. change MAX_QUOTA_IN_BYTES to Long.MAX_VALUE.
ChenSammi merged pull request #1337: URL: https://github.com/apache/hadoop-ozone/pull/1337
[GitHub] [hadoop-ozone] elek commented on pull request #1396: HDDS-4150. recon.api.TestEndpoints test is flaky
elek commented on pull request #1396: URL: https://github.com/apache/hadoop-ozone/pull/1396#issuecomment-691426288
[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1337: HDDS-4129. change MAX_QUOTA_IN_BYTES to Long.MAX_VALUE.
ChenSammi commented on pull request #1337: URL: https://github.com/apache/hadoop-ozone/pull/1337#issuecomment-690919896
[GitHub] [hadoop-ozone] elek commented on pull request #1336: HDDS-4119. Improve performance of the BufferPool management of Ozone client
elek commented on pull request #1336: URL: https://github.com/apache/hadoop-ozone/pull/1336#issuecomment-691125675
[GitHub] [hadoop-ozone] timmylicheng merged pull request #1413: HDDS-4228: add field 'num' to ALLOCATE_BLOCK of scm audit log.
timmylicheng merged pull request #1413: URL: https://github.com/apache/hadoop-ozone/pull/1413
[GitHub] [hadoop-ozone] elek commented on pull request #1410: HDDS-3102. ozone getconf command should use the GenericCli parent class
elek commented on pull request #1410: URL: https://github.com/apache/hadoop-ozone/pull/1410#issuecomment-690929244
[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1398: HDDS-4210. ResolveBucket during checkAcls fails.
bharatviswa504 commented on a change in pull request #1398: URL: https://github.com/apache/hadoop-ozone/pull/1398#discussion_r487178258 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java ## @@ -3523,6 +3533,47 @@ public ResolvedBucket resolveBucketLink(Pair&lt;String, String&gt; requested) visited); } + /** + * Resolves bucket symlinks. Read permission is required for following links. + * + * @param volumeAndBucket the bucket to be resolved (if it is a link) + * @param omClientRequest {@link OMClientRequest} which has the information required to check + * permission. + * @param visited collects link buckets visited during the resolution to + * avoid infinite loops + * @return bucket location possibly updated with its actual volume and bucket + * after following bucket links + * @throws IOException (most likely OMException) if the ACL check fails, the bucket is + * not found, a loop is detected in the links, etc. + */ + private Pair&lt;String, String&gt; resolveBucketLink( + Pair&lt;String, String&gt; volumeAndBucket, + Set&lt;Pair&lt;String, String&gt;&gt; visited, + OMClientRequest omClientRequest) throws IOException { Review comment: Made a few changes to avoid duplicating code. ## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHAWithData.java ## @@ -69,6 +70,38 @@ public void testAllOMNodesRunning() throws Exception { createKeyTest(true); } + @Test + public void testBucketLinkOps() throws Exception { Review comment: Thanks for the tip. Enabled docker-compose tests.
[GitHub] [hadoop-ozone] lokeshj1703 commented on a change in pull request #1415: HDDS-4232. Use single thread for KeyDeletingService and OpenKeyCleanupService.
lokeshj1703 commented on a change in pull request #1415: URL: https://github.com/apache/hadoop-ozone/pull/1415#discussion_r486905798 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyDeletingService.java ## @@ -66,9 +66,6 @@ private static final Logger LOG = LoggerFactory.getLogger(KeyDeletingService.class); - // The thread pool size for key deleting service. - private final static int KEY_DELETING_CORE_POOL_SIZE = 2; Review comment: Addressed it in the latest commit.
[GitHub] [hadoop-ozone] lokeshj1703 commented on a change in pull request #1414: HDDS-4231. Background Service blocks on task results.
lokeshj1703 commented on a change in pull request #1414: URL: https://github.com/apache/hadoop-ozone/pull/1414#discussion_r486841264 ## File path: hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/utils/BackgroundService.java ## @@ -62,11 +56,11 @@ public BackgroundService(String serviceName, long interval, this.interval = interval; this.unit = unit; this.serviceName = serviceName; -this.serviceTimeout = serviceTimeout; +this.serviceTimeoutInNanos = TimeDuration.valueOf(serviceTimeout, unit) +.toLong(TimeUnit.NANOSECONDS); Review comment: I think we are ok with long even for nanoseconds. long can support a value of ~292 years denoted as nanoseconds.
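The claim in the review comment — that a signed 64-bit `long` can hold roughly 292 years expressed in nanoseconds — checks out with a quick back-of-the-envelope calculation:

```python
LONG_MAX = 2**63 - 1                    # Java's Long.MAX_VALUE
NS_PER_SECOND = 10**9
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # Julian year

# Largest duration representable as nanoseconds in a long, in years
years = LONG_MAX / NS_PER_SECOND / SECONDS_PER_YEAR
print(round(years, 1))  # → 292.3
```

So storing a service timeout as nanoseconds in a `long` is safe for any realistic configuration value.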
[GitHub] [hadoop-ozone] elek merged pull request #1336: HDDS-4119. Improve performance of the BufferPool management of Ozone client
elek merged pull request #1336: URL: https://github.com/apache/hadoop-ozone/pull/1336
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1411: HDDS-4097. [DESIGN] S3/Ozone Filesystem inter-op
xiaoyuyao commented on a change in pull request #1411: URL: https://github.com/apache/hadoop-ozone/pull/1411#discussion_r486757610 ## File path: hadoop-hdds/docs/content/design/s3_hcfs.md ## @@ -0,0 +1,280 @@ +--- +title: S3/Ozone Filesystem inter-op +summary: How to support both S3 and HCFS at the same time +date: 2020-09-09 +jira: HDDS-4097 +status: draft +author: Marton Elek, +--- + + +# Ozone S3 vs file-system semantics + +Ozone is an object store for the Hadoop ecosystem which can be used from multiple interfaces: + + 1. From Hadoop Compatible File Systems (referred to as *HCFS* in the remainder of this document) (RPC) + 2. From S3 compatible applications (REST) + 3. From container orchestrators as a mounted volume (CSI, alpha feature) + +As Ozone is an object store it stores keys and values in a flat hierarchy, which is enough to support S3 (2). But to support Hadoop Compatible File System (and CSI), Ozone has to simulate a file system hierarchy. + +There are multiple challenges when a file system hierarchy is simulated by a flat namespace: + + 1. Some key patterns can't easily be transformed to a file system path (e.g. `/a/b/../c`, `/a/b//d`, or a real key with a directory path like `/b/d/`) + 2. Directory entries (which may have their own properties) require special handling, as the file system interface requires a dir entry even if it's not created explicitly (for example if key `/a/b/c` is created, `/a/b` is supposed to be a visible directory entry for the file system interface) + 3. Non-recursive listing of directories can be hard (listing direct entries under `/a` should ignore all the `/a/b/...`, `/a/b/c/...` keys) + 4. 
Similar to listing, rename can be a costly operation as it requires renaming many keys (renaming a first-level directory means renaming all the keys with the same prefix) + +See also the [Hadoop S3A documentation](https://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Introducing_the_Hadoop_S3A_client) which describes some of these problems when AWS S3 is used. (*Warnings* section) + +# Current status + +As of today *Ozone Manager* has two different interfaces (both are defined in `OmClientProtocol.proto`): + + 1. object store related functions (like *CreateKey*, *LookupKey*, *DeleteKey*,...) + 2. file system related functions (like *CreateFile*, *LookupFile*,...) + +File system related functions use the same flat hierarchy under the hood but include additional functionality. For example the `createFile` call creates all the intermediate directories for a specific key (creating file `/a/b/c` will create `/a/b` and `/a` entries in the key space) + +Today, a key created from the S3 interface can cause exceptions if the intermediate directories are checked from HCFS: + + +```shell +$ aws s3api put-object --endpoint http://localhost:9878 --bucket bucket1 --key /a/b/c/d + +$ ozone fs -ls o3fs://bucket1.s3v/a/ +ls: `o3fs://bucket1.s3v/a/': No such file or directory +``` + +This problem is reported in [HDDS-3955](https://issues.apache.org/jira/browse/HDDS-3955), where a new configuration key is introduced (`ozone.om.enable.filesystem.paths`). If this is enabled, intermediate directories are created even if the object store interface is used. + +This configuration is turned off by default, which means that S3 and HCFS can't be used together. + +To solve the performance problems of directory listing / rename, [HDDS-2939](https://issues.apache.org/jira/browse/HDDS-2939) was created, which proposes to use a new prefix table to store the "directory" entries (=prefixes). 
+ +[HDDS-4097](https://issues.apache.org/jira/browse/HDDS-4097) was created to normalize the key names based on file-system semantics if `ozone.om.enable.filesystem.paths` is enabled. But please note that `ozone.om.enable.filesystem.paths` should always be turned on if S3 and HCFS are both used. It means that if both S3 and HCFS are used, normalization is forced, and the S3 interface is not fully AWS S3 compatible. There is no option to use HCFS and S3 together with full AWS compatibility (and reduced HCFS compatibility). + +# Goals + + * Out of the box Ozone should support both S3 and HCFS interfaces without any settings. (It's possible only for the regular, fs compatible key names) + * As 100% compatibility couldn't be achieved on both sides, we need a configuration to set the expectations for incompatible key names + * Default behavior of `o3fs` and `ofs` should be as close to `s3a` as possible (when s3 compatibility is preferred) + +# Possible cases to support + +There are two main aspects of supporting both `ofs/o3fs` and `s3` together: + + 1. `ofs/o3fs` require creating intermediate directory entries (for example `/a/b` for the key `/b/c/c`) Review comment: /b/c/c => /a/b/c ## File path: hadoop-hdds/docs/content/design/s3_hcfs.md ## @@ -0,0 +1,280 @@ +--- +title: S3/Ozone Filesystem inter-op +summary: How to support both S3
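The non-recursive listing challenge the quoted design doc describes (direct entries under a prefix, with nested keys collapsed into inferred directory entries) can be illustrated with a small sketch over a sorted flat key space. The class and method names here are hypothetical, not Ozone's actual implementation:

```java
import java.util.Arrays;
import java.util.SortedSet;
import java.util.TreeSet;

public class FlatListing {
    // List the direct children of a directory prefix over a flat key space.
    // A nested key like "a/b/c" implies a visible directory entry "a/b/".
    static SortedSet<String> listDirect(SortedSet<String> keys, String prefix) {
        SortedSet<String> out = new TreeSet<>();
        for (String key : keys.tailSet(prefix)) {
            if (!key.startsWith(prefix)) {
                break; // keys are sorted, so we can stop at the first non-match
            }
            String rest = key.substring(prefix.length());
            int slash = rest.indexOf('/');
            // Nested keys collapse into one directory entry; plain keys are files.
            out.add(slash >= 0 ? prefix + rest.substring(0, slash + 1) : key);
        }
        return out;
    }

    public static void main(String[] args) {
        SortedSet<String> keys = new TreeSet<>(
            Arrays.asList("a/b/c", "a/b/d", "a/e", "x/y"));
        System.out.println(listDirect(keys, "a/")); // prints [a/b/, a/e]
    }
}
```

Even with the sorted-scan shortcut, every listing still has to walk all keys under the prefix, which is why the design doc points at a dedicated prefix table (HDDS-2939) instead.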
[GitHub] [hadoop-ozone] amaliujia commented on a change in pull request #1415: HDDS-4232. Use single thread for KeyDeletingService.
amaliujia commented on a change in pull request #1415: URL: https://github.com/apache/hadoop-ozone/pull/1415#discussion_r487185841 ## File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyDeletingService.java ## @@ -66,9 +66,6 @@ private static final Logger LOG = LoggerFactory.getLogger(KeyDeletingService.class); - // The thread pool size for key deleting service. - private final static int KEY_DELETING_CORE_POOL_SIZE = 2; Review comment: Thanks!
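The patch drops the two-thread pool constant in favor of a single worker. The periodic, single-threaded pattern it moves to can be sketched with a plain `ScheduledExecutorService` (an illustrative sketch, not the actual Ozone service code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SingleThreadService {
    // Run a periodic "deletion batch" task on a single-threaded scheduler
    // and wait until it has executed n times (or a 5 s safety timeout hits).
    static boolean runBatches(int n, long delayMs) {
        ScheduledExecutorService exec =
            Executors.newSingleThreadScheduledExecutor();
        CountDownLatch latch = new CountDownLatch(n);
        exec.scheduleWithFixedDelay(latch::countDown, 0, delayMs,
            TimeUnit.MILLISECONDS);
        try {
            return latch.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } finally {
            exec.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println(runBatches(3, 10)); // prints true
    }
}
```

With one thread, batches can never overlap, which removes the need to coordinate concurrent deletion passes.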
[GitHub] [hadoop-ozone] timmylicheng closed pull request #1340: HDDS-3188 Add failover proxy for SCM block location.
timmylicheng closed pull request #1340: URL: https://github.com/apache/hadoop-ozone/pull/1340
[GitHub] [hadoop-ozone] amaliujia commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.
amaliujia commented on a change in pull request #1412: URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r486775970 ## File path: hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java ## @@ -174,6 +184,20 @@ public OzoneBucket(ConfigurationSource conf, ClientProtocol proxy, this.modificationTime = Instant.ofEpochMilli(modificationTime); } + @SuppressWarnings("parameternumber") Review comment: Out of curiosity: what is the purpose of `@SuppressWarnings("parameternumber")`? ## File path: hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java ## @@ -464,6 +469,8 @@ public void createBucket( .setStorageType(storageType) .setSourceVolume(bucketArgs.getSourceVolume()) .setSourceBucket(bucketArgs.getSourceBucket()) +.setQuotaInBytes(quotaInBytes) +.setQuotaInCounts(quotaInCounts) Review comment: Do you need to verify whether `quotaInBytes` and `quotaInCounts` are valid? e.g. >= 0?
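The validation the second comment asks about could look like the following sketch. The helper and its exception choice are hypothetical; the actual field names and error handling in `RpcClient` may differ:

```java
public class QuotaValidation {
    // Check that quota values are non-negative before building the request.
    static boolean isValidQuota(long quotaInBytes, long quotaInCounts) {
        return quotaInBytes >= 0 && quotaInCounts >= 0;
    }

    // Fail fast on invalid input, as the reviewer suggests.
    static void validateQuota(long quotaInBytes, long quotaInCounts) {
        if (!isValidQuota(quotaInBytes, quotaInCounts)) {
            throw new IllegalArgumentException(
                "Quota must be non-negative: bytes=" + quotaInBytes
                    + ", counts=" + quotaInCounts);
        }
    }

    public static void main(String[] args) {
        validateQuota(1024, 10); // valid, no exception
        try {
            validateQuota(-1, 10);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```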
[GitHub] [hadoop-ozone] lokeshj1703 commented on pull request #1414: HDDS-4231. Background Service blocks on task results.
lokeshj1703 commented on pull request #1414: URL: https://github.com/apache/hadoop-ozone/pull/1414#issuecomment-690939847 Pushed another commit to fix a unit test.
[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1296: HDDS-4053. Volume space: add quotaUsageInBytes and update it when write and delete key.
xiaoyuyao commented on a change in pull request #1296: URL: https://github.com/apache/hadoop-ozone/pull/1296#discussion_r486775263 ## File path: hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneVolume.java ## @@ -131,6 +133,18 @@ public OzoneVolume(ConfigurationSource conf, ClientProtocol proxy, this.modificationTime = Instant.ofEpochMilli(modificationTime); } + @SuppressWarnings("parameternumber") Review comment: sounds good to me.
[GitHub] [hadoop-ozone] amaliujia commented on pull request #1410: HDDS-3102. ozone getconf command should use the GenericCli parent class
amaliujia commented on pull request #1410: URL: https://github.com/apache/hadoop-ozone/pull/1410#issuecomment-691320364
[GitHub] [hadoop-ozone] lamber-ken removed a comment on pull request #1417: HDDS-4324. Add important comment to ListVolumes logic
lamber-ken removed a comment on pull request #1417: URL: https://github.com/apache/hadoop-ozone/pull/1417#issuecomment-690928333 help
[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1361: HDDS-4155. Directory and filename can end up with same name in a path.
bharatviswa504 commented on a change in pull request #1361: URL: https://github.com/apache/hadoop-ozone/pull/1361#discussion_r487246329 ## File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestOzoneFSWithObjectStoreCreate.java ## @@ -303,6 +304,12 @@ public void testMPUFailDuetoDirectoryCreationBeforeComplete() } + @Test(expected = FileAlreadyExistsException.class) Review comment: Added a test Review comment: Added a test with object store API for the same scenario
[GitHub] [hadoop-ozone] timmylicheng commented on pull request #1413: HDDS-4228: add field 'num' to ALLOCATE_BLOCK of scm audit log.
timmylicheng commented on pull request #1413: URL: https://github.com/apache/hadoop-ozone/pull/1413#issuecomment-690833613 Merging
[GitHub] [hadoop-ozone] timmylicheng commented on pull request #1340: HDDS-3188 Add failover proxy for SCM block location.
timmylicheng commented on pull request #1340: URL: https://github.com/apache/hadoop-ozone/pull/1340#issuecomment-690835337 > The client failover logic is based on the suggested leader sent by SCM. The `String` value of the suggested leader sent by the SCM server is `RaftPeer#getAddress`, but at the client side this value is compared with `SCM_DUMMY_NODEID_PREFIX + `, which will never match. So the suggested leader is never used and we always fail over to the next proxy in round robin. I have removed the suggestedLeader related parts in this PR until we think it through. Thanks for the heads up.
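With the suggested-leader hint removed, the fallback behavior the comment describes is plain round-robin over the configured SCM nodes. A minimal sketch of that behavior (names are illustrative, simplified from what a failover proxy provider does, not Ozone's actual classes):

```java
import java.util.Arrays;
import java.util.List;

public class RoundRobinFailover {
    private final List<String> nodeIds;
    private int current = 0;

    RoundRobinFailover(List<String> nodeIds) {
        this.nodeIds = nodeIds;
    }

    String currentNode() {
        return nodeIds.get(current);
    }

    // Advance to the next proxy, wrapping around at the end of the list.
    String failover() {
        current = (current + 1) % nodeIds.size();
        return nodeIds.get(current);
    }

    public static void main(String[] args) {
        RoundRobinFailover f =
            new RoundRobinFailover(Arrays.asList("scm1", "scm2", "scm3"));
        System.out.println(f.failover()); // prints scm2
        System.out.println(f.failover()); // prints scm3
        System.out.println(f.failover()); // prints scm1 (wrapped around)
    }
}
```

Round-robin guarantees progress but may need up to N-1 failovers to reach the leader, which is what the suggested-leader optimization was meant to avoid.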
[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #1340: HDDS-3188 Add failover proxy for SCM block location.
timmylicheng commented on a change in pull request #1340: URL: https://github.com/apache/hadoop-ozone/pull/1340#discussion_r486736129 ## File path: hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/scm/proxy/SCMBlockLocationFailoverProxyProvider.java ## @@ -0,0 +1,281 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.hdds.scm.proxy; + +import com.google.common.annotations.VisibleForTesting; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.hdds.conf.ConfigurationSource; +import org.apache.hadoop.hdds.scm.ScmConfigKeys; +import org.apache.hadoop.hdds.scm.protocol.ScmBlockLocationProtocol; +import org.apache.hadoop.hdds.scm.protocolPB.ScmBlockLocationProtocolPB; +import org.apache.hadoop.hdds.utils.LegacyHadoopConfigurationSource; +import org.apache.hadoop.io.retry.FailoverProxyProvider; +import org.apache.hadoop.io.retry.RetryPolicy; +import org.apache.hadoop.io.retry.RetryPolicy.RetryAction; +import org.apache.hadoop.ipc.ProtobufRpcEngine; +import org.apache.hadoop.ipc.RPC; +import org.apache.hadoop.net.NetUtils; +import org.apache.hadoop.security.UserGroupInformation; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.io.Closeable; +import java.io.IOException; +import java.net.InetSocketAddress; +import java.util.ArrayList; +import java.util.Collection; +import java.util.HashMap; +import java.util.List; +import java.util.Map; +import java.util.Optional; + +import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_NAMES; +import static org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_SERVICE_IDS_KEY; +import static org.apache.hadoop.hdds.HddsUtils.getScmAddressForBlockClients; +import static org.apache.hadoop.hdds.HddsUtils.getPortNumberFromConfigKeys; +import static org.apache.hadoop.hdds.HddsUtils.getHostName; + +/** + * Failover proxy provider for SCM. 
+ */ +public class SCMBlockLocationFailoverProxyProvider implements +FailoverProxyProvider, Closeable { + public static final Logger LOG = + LoggerFactory.getLogger(SCMBlockLocationFailoverProxyProvider.class); + + private Map> scmProxies; + private Map scmProxyInfoMap; + private List scmNodeIDList; + + private String currentProxySCMNodeId; + private int currentProxyIndex; + + private final ConfigurationSource conf; + private final long scmVersion; + + private final String scmServiceId; + + private String lastAttemptedLeader; + + private final int maxRetryCount; + private final long retryInterval; + + public static final String SCM_DUMMY_NODEID_PREFIX = "scm"; + + public SCMBlockLocationFailoverProxyProvider(ConfigurationSource conf) { +this.conf = conf; +this.scmVersion = RPC.getProtocolVersion(ScmBlockLocationProtocol.class); +this.scmServiceId = conf.getTrimmed(OZONE_SCM_SERVICE_IDS_KEY); +this.scmProxies = new HashMap<>(); +this.scmProxyInfoMap = new HashMap<>(); +this.scmNodeIDList = new ArrayList<>(); +loadConfigs(); + +this.currentProxyIndex = 0; +currentProxySCMNodeId = scmNodeIDList.get(currentProxyIndex); + +this.maxRetryCount = conf.getObject(SCMBlockClientConfig.class) +.getRetryCount(); +this.retryInterval = conf.getObject(SCMBlockClientConfig.class) Review comment: Updated
[GitHub] [hadoop-ozone] lokeshj1703 commented on pull request #1415: HDDS-4232. Use single thread for KeyDeletingService and OpenKeyCleanupService.
lokeshj1703 commented on pull request #1415: URL: https://github.com/apache/hadoop-ozone/pull/1415#issuecomment-690940004
[GitHub] [hadoop-ozone] lamber-ken commented on pull request #1417: HDDS-4324. Add important comment to ListVolumes logic
lamber-ken commented on pull request #1417: URL: https://github.com/apache/hadoop-ozone/pull/1417#issuecomment-690928333 help
[jira] [Created] (HDDS-4236) Move "Om*Codec.java" to new project hadoop-ozone/interface-storage
Rui Wang created HDDS-4236: -- Summary: Move "Om*Codec.java" to new project hadoop-ozone/interface-storage Key: HDDS-4236 URL: https://issues.apache.org/jira/browse/HDDS-4236 Project: Hadoop Distributed Data Store Issue Type: Sub-task Reporter: Rui Wang Assignee: Rui Wang This is the first step to separate storage and RPC proto files. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
[GitHub] [hadoop-ozone] elek commented on pull request #1396: HDDS-4150. recon.api.TestEndpoints test is flaky
elek commented on pull request #1396: URL: https://github.com/apache/hadoop-ozone/pull/1396#issuecomment-691426288 Sorry, I am late to the game in reviewing it, but it shouldn't be too late to say: thanks a lot for fixing this flakiness.