[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1507: HDDS-4307.Start Background Service for Trash Deletion in Ozone Manager

2020-10-21 Thread GitBox


rakeshadr commented on a change in pull request #1507:
URL: https://github.com/apache/hadoop-ozone/pull/1507#discussion_r509870897



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/TrashDeletingService.java
##
@@ -0,0 +1,114 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Trash;
+import org.apache.hadoop.hdds.utils.BackgroundService;
+import org.apache.hadoop.hdds.utils.BackgroundTask;
+import org.apache.hadoop.hdds.utils.BackgroundTaskQueue;
+import org.apache.hadoop.hdds.utils.BackgroundTaskResult;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.security.SecurityUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.security.PrivilegedExceptionAction;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+
+/**
+ * Background service to delete keys that have been moved to Trash.
+ */
+public class TrashDeletingService extends BackgroundService {
+
+private static final Logger LOG =
+LoggerFactory.getLogger(TrashDeletingService.class);
+
+// Use a single thread for now
+private static final int KEY_DELETING_CORE_POOL_SIZE = 1;
+
+private OzoneManager ozoneManager;
+
+public void setFsConf(Configuration fsConf) {
+this.fsConf = fsConf;
+}
+
+private Configuration fsConf;
+
+public TrashDeletingService(long interval, long serviceTimeout, OzoneManager ozoneManager) {
+super("TrashDeletingService", interval, TimeUnit.MILLISECONDS, KEY_DELETING_CORE_POOL_SIZE, serviceTimeout);
+this.ozoneManager = ozoneManager;
+fsConf = new Configuration();

Review comment:
   I saw you are setting `fsConf.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, rootPath);` to get the FS. Is that the reason for creating `new Configuration()` instead of using `ozoneManager.getConfiguration()`?
   
   If yes, can you please add a comment mentioning the reason.
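   For illustration, the kind of comment being requested might look like this (a sketch assuming the `FS_DEFAULT_NAME_KEY` override is indeed the reason; constructor and fields as in the diff above):
   
   ```java
   public TrashDeletingService(long interval, long serviceTimeout,
       OzoneManager ozoneManager) {
     super("TrashDeletingService", interval, TimeUnit.MILLISECONDS,
         KEY_DELETING_CORE_POOL_SIZE, serviceTimeout);
     this.ozoneManager = ozoneManager;
     // Use a fresh Configuration rather than ozoneManager.getConfiguration():
     // fs.defaultFS is overridden with the OM root path before FileSystem.get()
     // is called, and the OM's own configuration must not be mutated.
     fsConf = new Configuration();
   }
   ```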








[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1507: HDDS-4307.Start Background Service for Trash Deletion in Ozone Manager

2020-10-21 Thread GitBox


rakeshadr commented on a change in pull request #1507:
URL: https://github.com/apache/hadoop-ozone/pull/1507#discussion_r509870311



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1228,17 +1236,34 @@ public void restart() throws IOException {
   // Allow OM to start as Http Server failure is not fatal.
   LOG.error("OM HttpServer failed to start.", ex);
 }
-
 omRpcServer.start();
+
 isOmRpcServerRunning = true;
 
+startTrashDeletingService();
+
 registerMXBean();
 
 startJVMPauseMonitor();
 setStartTime();
 omState = State.RUNNING;
   }
 
+  private void startTrashDeletingService() {
+if (trashDeletingService == null) {
+  long serviceTimeout = configuration.getTimeDuration(
+  OZONE_TRASH_DELETING_SERVICE_TIMEOUT,
+  OZONE_TRASH_DELETING_SERVICE_TIMEOUT_DEFAULT,
+  TimeUnit.MILLISECONDS);
+  long trashDeletionInterval = configuration.getTimeDuration(
+  OZONE_TRASH_DELETING_SERVICE_INTERVAL,
+  OZONE_TRASH_DELETING_SERVICE_INTERVAL_DEFAULT,
+  TimeUnit.MILLISECONDS);
+  trashDeletingService = new TrashDeletingService(trashDeletionInterval, serviceTimeout, this);
+  trashDeletingService.start();

Review comment:
   Please shutdown the `trashDeletingService` during OM stop.
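   A minimal sketch of that shutdown (hypothetical placement; the exact stop method in OzoneManager may differ), mirroring how the other background services are stopped:
   
   ```java
   // In OzoneManager#stop(), next to the other service shutdowns:
   if (trashDeletingService != null) {
     trashDeletingService.shutdown();
     trashDeletingService = null;
   }
   ```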

##
File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java
##
@@ -1170,4 +1172,50 @@ public void testFileDelete() throws Exception {
 Boolean falseResult = fs.delete(parent, true);
 assertFalse(falseResult);
   }
+
+  /**
+   * @throws Exception
+   * 1. Move a key to Trash
+   * 2. Start TrashDeletingService
+   * 3. Verify that the TrashDeletingService purges the key after the minimum set TrashInterval of 1 min.
+   */
+  @Test
+  public void testTrashDeletingService() throws Exception {
+String testKeyName = "keyToBeDeleted";
+Path path = new Path(bucketPath, testKeyName);
+try (FSDataOutputStream stream = fs.create(path)) {
+  stream.write(1);
+}
+// Call moveToTrash. We can't call protected fs.rename() directly
+trash.moveToTrash(path);
+TrashDeletingService trashDeletingService =
+    new TrashDeletingService(60, 300, cluster.getOzoneManager());
+conf.setLong(FS_TRASH_INTERVAL_KEY, 1);
+trashDeletingService.setFsConf(conf);
+trashDeletingService.start();
+
+
+// Construct paths
+String username = UserGroupInformation.getCurrentUser().getShortUserName();
+Path trashRoot = new Path(bucketPath, TRASH_PREFIX);
+Path userTrash = new Path(trashRoot, username);
+Path userTrashCurrent = new Path(userTrash, "Current");
+String key = path.toString().substring(1);
+Path trashPath = new Path(userTrashCurrent, key);
+
+// Wait until the TrashDeletingService purges the key
+GenericTestUtils.waitFor(()-> {
+  try {
+return !ofs.exists(trashPath);
+  } catch (IOException e) {
+LOG.error("Delete from Trash Failed");
+Assert.fail();

Review comment:
   Please add failure message -> Assert.fail("Delete from Trash Failed");

##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/TrashDeletingService.java
##
@@ -0,0 +1,114 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Trash;
+import org.apache.hadoop.hdds.utils.BackgroundService;
+import org.apache.hadoop.hdds.utils.BackgroundTask;
+import org.apache.hadoop.hdds.utils.BackgroundTaskQueue;
+import org.apache.hadoop.hdds.utils.BackgroundTaskResult;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.security.SecurityUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.security.PrivilegedExceptionAction;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;

[GitHub] [hadoop-ozone] prashantpogde commented on pull request #1507: HDDS-4307.Start Background Service for Trash Deletion in Ozone Manager

2020-10-21 Thread GitBox


prashantpogde commented on pull request #1507:
URL: https://github.com/apache/hadoop-ozone/pull/1507#issuecomment-714205605


   Can you address the CI failures ?






[jira] [Updated] (HDDS-4365) SCMBlockLocationFailoverProxyProvider should use ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4365:
-
Labels: pull-request-available  (was: )

> SCMBlockLocationFailoverProxyProvider should use 
> ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine
> --
>
> Key: HDDS-4365
> URL: https://issues.apache.org/jira/browse/HDDS-4365
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Glen Geng
>Assignee: Glen Geng
>Priority: Minor
>  Labels: pull-request-available
>
> in SCMBlockLocationFailoverProxyProvider,
> currently it is
> {code:java}
> private ScmBlockLocationProtocolPB createSCMProxy(
> InetSocketAddress scmAddress) throws IOException {
>   ...
>   RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocol.class,
>   ProtobufRpcEngine.class);
>   ...{code}
>  it should be 
> {code:java}
> private ScmBlockLocationProtocolPB createSCMProxy(
> InetSocketAddress scmAddress) throws IOException {
>   ...
>   RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocolPB.class,
>   ProtobufRpcEngine.class);
>   ...{code}
>  
> FYi, according to non-HA version
> {code:java}
> private static ScmBlockLocationProtocol getScmBlockClient(
> OzoneConfiguration conf) throws IOException {
>   RPC.setProtocolEngine(conf, ScmBlockLocationProtocolPB.class,
>   ProtobufRpcEngine.class);
>   long scmVersion =
>   RPC.getProtocolVersion(ScmBlockLocationProtocolPB.class);
>   InetSocketAddress scmBlockAddress =
>   getScmAddressForBlockClients(conf);
>   ScmBlockLocationProtocolClientSideTranslatorPB scmBlockLocationClient =
>   new ScmBlockLocationProtocolClientSideTranslatorPB(
>   RPC.getProxy(ScmBlockLocationProtocolPB.class, scmVersion,
>   scmBlockAddress, UserGroupInformation.getCurrentUser(), conf,
>   NetUtils.getDefaultSocketFactory(conf),
>   Client.getRpcTimeout(conf)));
>   return TracingUtil
>   .createProxy(scmBlockLocationClient, ScmBlockLocationProtocol.class,
>   conf);
> }
> {code}






[GitHub] [hadoop-ozone] GlenGeng opened a new pull request #1512: HDDS-4365: SCMBlockLocationFailoverProxyProvider should use ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine

2020-10-21 Thread GitBox


GlenGeng opened a new pull request #1512:
URL: https://github.com/apache/hadoop-ozone/pull/1512


   ## What changes were proposed in this pull request?
   
   In SCMBlockLocationFailoverProxyProvider, currently it is
   ```
   private ScmBlockLocationProtocolPB createSCMProxy(
   InetSocketAddress scmAddress) throws IOException {
 ...
 RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocol.class,
 ProtobufRpcEngine.class);
 ...
   ```
   
   it should be 
   ```
   private ScmBlockLocationProtocolPB createSCMProxy(
   InetSocketAddress scmAddress) throws IOException {
 ...
 RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocolPB.class,
 ProtobufRpcEngine.class);
 ...
   ```
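   
   The distinction matters because `RPC.setProtocolEngine` registers the engine against the exact protocol class it is given, while the proxy is created for `ScmBlockLocationProtocolPB`; registering the engine for the non-PB interface would leave the PB protocol on the default RPC engine. The non-HA `getScmBlockClient` path quoted in the JIRA already registers the PB class.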
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4365
   
   ## How was this patch tested?
   
   CI
   






[jira] [Updated] (HDDS-4365) SCMBlockLocationFailoverProxyProvider should use ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine

2020-10-21 Thread Glen Geng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glen Geng updated HDDS-4365:

Description: 
in SCMBlockLocationFailoverProxyProvider,

currently it is
{code:java}
private ScmBlockLocationProtocolPB createSCMProxy(
InetSocketAddress scmAddress) throws IOException {
  ...
  RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocol.class,
  ProtobufRpcEngine.class);
  ...{code}
 it should be 
{code:java}
private ScmBlockLocationProtocolPB createSCMProxy(
InetSocketAddress scmAddress) throws IOException {
  ...
  RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocolPB.class,
  ProtobufRpcEngine.class);
  ...{code}
 

FYi, according to non-HA version
{code:java}
private static ScmBlockLocationProtocol getScmBlockClient(
OzoneConfiguration conf) throws IOException {
  RPC.setProtocolEngine(conf, ScmBlockLocationProtocolPB.class,
  ProtobufRpcEngine.class);
  long scmVersion =
  RPC.getProtocolVersion(ScmBlockLocationProtocolPB.class);
  InetSocketAddress scmBlockAddress =
  getScmAddressForBlockClients(conf);
  ScmBlockLocationProtocolClientSideTranslatorPB scmBlockLocationClient =
  new ScmBlockLocationProtocolClientSideTranslatorPB(
  RPC.getProxy(ScmBlockLocationProtocolPB.class, scmVersion,
  scmBlockAddress, UserGroupInformation.getCurrentUser(), conf,
  NetUtils.getDefaultSocketFactory(conf),
  Client.getRpcTimeout(conf)));
  return TracingUtil
  .createProxy(scmBlockLocationClient, ScmBlockLocationProtocol.class,
  conf);
}
{code}

  was:
in SCMBlockLocationFailoverProxyProvider, currently it is
{code:java}
private ScmBlockLocationProtocolPB createSCMProxy(
InetSocketAddress scmAddress) throws IOException {
  ...
  RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocol.class,
  ProtobufRpcEngine.class);
  ...{code}
 it should be 
{code:java}
private ScmBlockLocationProtocolPB createSCMProxy(
InetSocketAddress scmAddress) throws IOException {
  ...
  RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocolPB.class,
  ProtobufRpcEngine.class);
  ...{code}
 


> SCMBlockLocationFailoverProxyProvider should use 
> ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine
> --
>
> Key: HDDS-4365
> URL: https://issues.apache.org/jira/browse/HDDS-4365
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Glen Geng
>Assignee: Glen Geng
>Priority: Minor
>
> in SCMBlockLocationFailoverProxyProvider,
> currently it is
> {code:java}
> private ScmBlockLocationProtocolPB createSCMProxy(
> InetSocketAddress scmAddress) throws IOException {
>   ...
>   RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocol.class,
>   ProtobufRpcEngine.class);
>   ...{code}
>  it should be 
> {code:java}
> private ScmBlockLocationProtocolPB createSCMProxy(
> InetSocketAddress scmAddress) throws IOException {
>   ...
>   RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocolPB.class,
>   ProtobufRpcEngine.class);
>   ...{code}
>  
> FYi, according to non-HA version
> {code:java}
> private static ScmBlockLocationProtocol getScmBlockClient(
> OzoneConfiguration conf) throws IOException {
>   RPC.setProtocolEngine(conf, ScmBlockLocationProtocolPB.class,
>   ProtobufRpcEngine.class);
>   long scmVersion =
>   RPC.getProtocolVersion(ScmBlockLocationProtocolPB.class);
>   InetSocketAddress scmBlockAddress =
>   getScmAddressForBlockClients(conf);
>   ScmBlockLocationProtocolClientSideTranslatorPB scmBlockLocationClient =
>   new ScmBlockLocationProtocolClientSideTranslatorPB(
>   RPC.getProxy(ScmBlockLocationProtocolPB.class, scmVersion,
>   scmBlockAddress, UserGroupInformation.getCurrentUser(), conf,
>   NetUtils.getDefaultSocketFactory(conf),
>   Client.getRpcTimeout(conf)));
>   return TracingUtil
>   .createProxy(scmBlockLocationClient, ScmBlockLocationProtocol.class,
>   conf);
> }
> {code}






[jira] [Assigned] (HDDS-4365) SCMBlockLocationFailoverProxyProvider should use ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine

2020-10-21 Thread Glen Geng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glen Geng reassigned HDDS-4365:
---

Assignee: Glen Geng

> SCMBlockLocationFailoverProxyProvider should use 
> ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine
> --
>
> Key: HDDS-4365
> URL: https://issues.apache.org/jira/browse/HDDS-4365
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Glen Geng
>Assignee: Glen Geng
>Priority: Minor
>
> in SCMBlockLocationFailoverProxyProvider, currently it is
> {code:java}
> private ScmBlockLocationProtocolPB createSCMProxy(
> InetSocketAddress scmAddress) throws IOException {
>   ...
>   RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocol.class,
>   ProtobufRpcEngine.class);
>   ...{code}
>  it should be 
> {code:java}
> private ScmBlockLocationProtocolPB createSCMProxy(
> InetSocketAddress scmAddress) throws IOException {
>   ...
>   RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocolPB.class,
>   ProtobufRpcEngine.class);
>   ...{code}
>  






[jira] [Updated] (HDDS-4365) SCMBlockLocationFailoverProxyProvider should use ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine

2020-10-21 Thread Glen Geng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glen Geng updated HDDS-4365:

Description: 
in SCMBlockLocationFailoverProxyProvider, currently it is
{code:java}
private ScmBlockLocationProtocolPB createSCMProxy(
InetSocketAddress scmAddress) throws IOException {
  ...
  RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocol.class,
  ProtobufRpcEngine.class);
  ...{code}
 it should be 
{code:java}
private ScmBlockLocationProtocolPB createSCMProxy(
InetSocketAddress scmAddress) throws IOException {
  ...
  RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocolPB.class,
  ProtobufRpcEngine.class);
  ...{code}
 

  was:
in SCMBlockLocationFailoverProxyProvider, it should be 
{code:java}
private ScmBlockLocationProtocolPB createSCMProxy(
InetSocketAddress scmAddress) throws IOException {
  ...
  RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocolPB.class,
  ProtobufRpcEngine.class);
  ...{code}
 


> SCMBlockLocationFailoverProxyProvider should use 
> ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine
> --
>
> Key: HDDS-4365
> URL: https://issues.apache.org/jira/browse/HDDS-4365
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Glen Geng
>Priority: Minor
>
> in SCMBlockLocationFailoverProxyProvider, currently it is
> {code:java}
> private ScmBlockLocationProtocolPB createSCMProxy(
> InetSocketAddress scmAddress) throws IOException {
>   ...
>   RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocol.class,
>   ProtobufRpcEngine.class);
>   ...{code}
>  it should be 
> {code:java}
> private ScmBlockLocationProtocolPB createSCMProxy(
> InetSocketAddress scmAddress) throws IOException {
>   ...
>   RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocolPB.class,
>   ProtobufRpcEngine.class);
>   ...{code}
>  






[jira] [Updated] (HDDS-4365) SCMBlockLocationFailoverProxyProvider should use ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine

2020-10-21 Thread Glen Geng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glen Geng updated HDDS-4365:

Description: 
in SCMBlockLocationFailoverProxyProvider, it should be 
{code:java}
private ScmBlockLocationProtocolPB createSCMProxy(
InetSocketAddress scmAddress) throws IOException {
  ...
  RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocolPB.class,
  ProtobufRpcEngine.class);
  ...{code}
 

  was:
SCM ServiceManager is going to control all the SCM background services so that
they serve only while SCM is the leader.

ServiceManager would also bootstrap all the background services and protocol
servers.

It also needs to do validation steps when the SCM comes up as the leader.


> SCMBlockLocationFailoverProxyProvider should use 
> ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine
> --
>
> Key: HDDS-4365
> URL: https://issues.apache.org/jira/browse/HDDS-4365
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Glen Geng
>Priority: Minor
>
> in SCMBlockLocationFailoverProxyProvider, it should be 
> {code:java}
> private ScmBlockLocationProtocolPB createSCMProxy(
> InetSocketAddress scmAddress) throws IOException {
>   ...
>   RPC.setProtocolEngine(hadoopConf, ScmBlockLocationProtocolPB.class,
>   ProtobufRpcEngine.class);
>   ...{code}
>  






[jira] [Updated] (HDDS-4365) SCMBlockLocationFailoverProxyProvider should use ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine

2020-10-21 Thread Glen Geng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glen Geng updated HDDS-4365:

Priority: Minor  (was: Major)

> SCMBlockLocationFailoverProxyProvider should use 
> ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine
> --
>
> Key: HDDS-4365
> URL: https://issues.apache.org/jira/browse/HDDS-4365
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Glen Geng
>Priority: Minor
>
> SCM ServiceManager is going to control all the SCM background services so that
> they serve only while SCM is the leader.
> ServiceManager would also bootstrap all the background services and protocol
> servers.
> It also needs to do validation steps when the SCM comes up as the leader.






[jira] [Created] (HDDS-4365) SCMBlockLocationFailoverProxyProvider should use ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine

2020-10-21 Thread Glen Geng (Jira)
Glen Geng created HDDS-4365:
---

 Summary: SCMBlockLocationFailoverProxyProvider should use 
ScmBlockLocationProtocolPB.class in RPC.setProtocolEngine
 Key: HDDS-4365
 URL: https://issues.apache.org/jira/browse/HDDS-4365
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: SCM
Reporter: Glen Geng


SCM ServiceManager is going to control all the SCM background services so that
they serve only while SCM is the leader.

ServiceManager would also bootstrap all the background services and protocol
servers.

It also needs to do validation steps when the SCM comes up as the leader.






[jira] [Updated] (HDDS-4332) ListFileStatus - do lookup in directory and file tables

2020-10-21 Thread Rakesh Radhakrishnan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Radhakrishnan updated HDDS-4332:
---
Status: Patch Available  (was: In Progress)

> ListFileStatus - do lookup in directory and file tables
> ---
>
> Key: HDDS-4332
> URL: https://issues.apache.org/jira/browse/HDDS-4332
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Rakesh Radhakrishnan
>Assignee: Rakesh Radhakrishnan
>Priority: Major
>  Labels: pull-request-available
>
> This task is to perform look up of the user given {{key}} path in the 
> directory, file and openFile tables.
> OzoneFileSystem APIs:
>      - GetFileStatus
>      - listStatus






[jira] [Created] (HDDS-4364) List FileStatus : startKey can be a non-existent path

2020-10-21 Thread Rakesh Radhakrishnan (Jira)
Rakesh Radhakrishnan created HDDS-4364:
--

 Summary: List FileStatus : startKey can be a non-existent path
 Key: HDDS-4364
 URL: https://issues.apache.org/jira/browse/HDDS-4364
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Rakesh Radhakrishnan
Assignee: Rakesh Radhakrishnan


StartKey can be a non-existent key in the {{keyManager#listStatus}} API. It needs
special handling to search the path in the FileTable and in the DirTable.

For Example, OM has namespace like:
{code:java}
/a/a
/a/c
/a/d
{code}
If given {{startKey: "/a/b"}}, it should return {{["/a/c", "/a/d"]}}.

[Reference comment|https://github.com/apache/hadoop-ozone/pull/1503#discussion_r506857023]
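
As a rough illustration of the intended seek semantics (this is not the OM implementation; a plain {{TreeMap}} stands in for the table iterator):
{code:java}
import java.util.TreeMap;

public class SeekDemo {
  public static void main(String[] args) {
    TreeMap<String, String> table = new TreeMap<>();
    table.put("/a/a", "v1");
    table.put("/a/c", "v2");
    table.put("/a/d", "v3");
    // Seeking to the non-existent startKey "/a/b" positions the iteration
    // at the next greater key, yielding [/a/c, /a/d].
    System.out.println(table.tailMap("/a/b", false).keySet());
  }
}
{code}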






[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1503: HDDS-4332: ListFileStatus - do lookup in directory and file tables

2020-10-21 Thread GitBox


rakeshadr commented on a change in pull request #1503:
URL: https://github.com/apache/hadoop-ozone/pull/1503#discussion_r509846900



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -2205,6 +2272,167 @@ private void listStatusFindKeyInTableCache(
 return fileStatusList;
   }
 
+  public List<OzoneFileStatus> listStatusV1(OmKeyArgs args, boolean recursive,
+  String startKey, long numEntries, String clientAddress)
+  throws IOException {
+Preconditions.checkNotNull(args, "Key args can not be null");
+
+List<OzoneFileStatus> fileStatusList = new ArrayList<>();
+if (numEntries <= 0) {
+  return fileStatusList;
+}
+
+String volumeName = args.getVolumeName();
+String bucketName = args.getBucketName();
+String keyName = args.getKeyName();
+// A map sorted by OmKey to combine results from TableCache and DB.
+TreeMap<String, OzoneFileStatus> cacheKeyMap = new TreeMap<>();
+String seekKeyInDB = "";
+long prefixKeyInDB = Long.MIN_VALUE;
+String prefixPath = keyName;
+
+metadataManager.getLock().acquireReadLock(BUCKET_LOCK, volumeName,
+bucketName);
+try {
+  if (Strings.isNullOrEmpty(startKey)) {
+OzoneFileStatus fileStatus = getFileStatus(args, clientAddress);
+if (fileStatus.isFile()) {
+  return Collections.singletonList(fileStatus);
+}
+
+// keyName is a directory
+if (fileStatus.getKeyInfo() != null) {
+  seekKeyInDB = fileStatus.getKeyInfo().getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = fileStatus.getKeyInfo().getObjectID();
+} else {
+  String bucketKey = metadataManager.getBucketKey(volumeName,
+  bucketName);
+  OmBucketInfo omBucketInfo =
+  metadataManager.getBucketTable().get(bucketKey);
+  seekKeyInDB = omBucketInfo.getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = omBucketInfo.getObjectID();
+}
+  } else {
+// startKey will be used in iterator seek and sets the beginning point
+// for key traversal.
+// key name will be used as parent ID where the user has requested to
+// list the keys from.
+OzoneFileStatus fileStatusInfo = getOzoneFileStatusV1(volumeName,
+bucketName, startKey, false, null);

Review comment:
   Raised HDDS-4364 jira to handle this case.








[GitHub] [hadoop-ozone] prashantpogde edited a comment on pull request #1486: HDDS-4296. SCM changes to process Layout Info in heartbeat request/response

2020-10-21 Thread GitBox


prashantpogde edited a comment on pull request #1486:
URL: https://github.com/apache/hadoop-ozone/pull/1486#issuecomment-713919301


   Updated with new set of changes after taking care of all review comments. 
Please take a look.






[GitHub] [hadoop-ozone] prashantpogde commented on pull request #1486: HDDS-4296. SCM changes to process Layout Info in heartbeat request/response

2020-10-21 Thread GitBox


prashantpogde commented on pull request #1486:
URL: https://github.com/apache/hadoop-ozone/pull/1486#issuecomment-713919301


   Updated with a new set of changes after taking care of all review comments. Please take a look.






[jira] [Updated] (HDDS-4123) Integrate OM Open Key Cleanup Service Into Existing Code

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4123:
-
Labels: pull-request-available  (was: )

> Integrate OM Open Key Cleanup Service Into Existing Code
> 
>
> Key: HDDS-4123
> URL: https://issues.apache.org/jira/browse/HDDS-4123
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Minor
>  Labels: pull-request-available
>
> Implement the `OpenKeyCleanupService` class, and start and stop the service 
> in `KeyManagerImpl`. The following configurations will be added to specify 
> the service's behavior:
>  # ozone.open.key.cleanup.service.interval: Frequency the service should run.
>  # ozone.open.key.expire.threshold: Time from creation after which an open 
> key is deemed expired.
>  # ozone.open.key.cleanup.limit.per.task: Maximum number of keys the service 
> can mark for deletion on each run.
> Default values for these configurations will be chosen from HDFS data.
>  






[GitHub] [hadoop-ozone] errose28 opened a new pull request #1511: HDDS-4123. Integrate OM Open Key Cleanup Service Into Existing Code

2020-10-21 Thread GitBox


errose28 opened a new pull request #1511:
URL: https://github.com/apache/hadoop-ozone/pull/1511


   ## What changes were proposed in this pull request?
   
   This pull request completes the open key cleanup service outlined in the 
parent Jira HDDS-4120. It implements the `OpenKeyCleanupService` class, and 
starts and stops the service in `KeyManagerImpl`. The following configurations 
have been defined to specify the service's behavior:
   
   1. ozone.open.key.cleanup.service.interval
   2. ozone.open.key.expire.threshold
   3. ozone.open.key.cleanup.limit.per.task
   
   See `ozone-defaults.xml` for their corresponding descriptions. Configurations for the service interval and expiration threshold were previously defined in `OzoneConfigKeys` as raw numbers. This feature moves them to `OMConfigKeys` and implements them as `TimeDuration`s, which requires updating each spot in the code that previously referred to these config values (mostly in test setups).
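   
   As a rough sketch of the `TimeDuration`-style lookup this change implies (the default values below are placeholders, not the ones chosen in this PR):
   
   ```java
   // Illustrative only: reading the cleanup configs as time durations.
   long serviceInterval = conf.getTimeDuration(
       "ozone.open.key.cleanup.service.interval",
       "24h", TimeUnit.MILLISECONDS);
   long expireThreshold = conf.getTimeDuration(
       "ozone.open.key.expire.threshold",
       "7d", TimeUnit.MILLISECONDS);
   ```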
   
   ## What is the link to the Apache JIRA
   
   HDDS-4123
   
   ## How was this patch tested?
   
   Integration test has been added.
   
   ## Notes
   
   Default values for configurations still need to be estimated from HDFS JMX 
data. Leaving this pull request as a draft until this step is complete.
   






[jira] [Updated] (HDDS-4123) Integrate OM Open Key Cleanup Service Into Existing Code

2020-10-21 Thread Ethan Rose (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Rose updated HDDS-4123:
-
Description: 
Implement the `OpenKeyCleanupService` class, and start and stop the service in 
`KeyManagerImpl`. The following configurations will be added to specify the 
service's behavior:
 # ozone.open.key.cleanup.service.interval: Frequency the service should run.
 # ozone.open.key.expire.threshold: Time from creation after which an open key 
is deemed expired.
 # ozone.open.key.cleanup.limit.per.task: Maximum number of keys the service 
can mark for deletion on each run.

Default values for these configurations will be chosen from HDFS data.

 

  was:
Finish the existing implementation of OpenKeyDeletingService#call. Start the 
OpenKeyCleanupService from the KeyManagerImpl class. Read from the existing 
configuration setting ozone.open.key.expire.threshold to set the frequency that 
the service should run. Add integration tests to verify its functionality.

 

Implement the `OpenKeyCleanupService` class, and start and stop the service in 
`KeyManagerImpl`. The following configurations will be added to specify the 
service's behavior:

1. ozone.open.key.cleanup.service.interval: Frequency the service should run.
2. ozone.open.key.expire.threshold: Time from creation after which an open key 
is deemed expired.
3. ozone.open.key.cleanup.limit.per.task


> Integrate OM Open Key Cleanup Service Into Existing Code
> 
>
> Key: HDDS-4123
> URL: https://issues.apache.org/jira/browse/HDDS-4123
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Minor
>
> Implement the `OpenKeyCleanupService` class, and start and stop the service 
> in `KeyManagerImpl`. The following configurations will be added to specify 
> the service's behavior:
>  # ozone.open.key.cleanup.service.interval: Frequency the service should run.
>  # ozone.open.key.expire.threshold: Time from creation after which an open 
> key is deemed expired.
>  # ozone.open.key.cleanup.limit.per.task: Maximum number of keys the service 
> can mark for deletion on each run.
> Default values for these configurations will be chosen from HDFS data.
>  






[jira] [Updated] (HDDS-4123) Integrate OM Open Key Cleanup Service Into Existing Code

2020-10-21 Thread Ethan Rose (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Rose updated HDDS-4123:
-
Description: 
Finish the existing implementation of OpenKeyDeletingService#call. Start the 
OpenKeyCleanupService from the KeyManagerImpl class. Read from the existing 
configuration setting ozone.open.key.expire.threshold to set the frequency that 
the service should run. Add integration tests to verify its functionality.

 

Implement the `OpenKeyCleanupService` class, and start and stop the service in 
`KeyManagerImpl`. The following configurations will be added to specify the 
service's behavior:

1. ozone.open.key.cleanup.service.interval: Frequency the service should run.
2. ozone.open.key.expire.threshold: Time from creation after which an open key 
is deemed expired.
3. ozone.open.key.cleanup.limit.per.task

  was:Finish the existing implementation of OpenKeyDeletingService#call. Start 
the OpenKeyCleanupService from the KeyManagerImpl class. Read from the existing 
configuration setting ozone.open.key.expire.threshold to set the frequency that 
the service should run. Add integration tests to verify its functionality.


> Integrate OM Open Key Cleanup Service Into Existing Code
> 
>
> Key: HDDS-4123
> URL: https://issues.apache.org/jira/browse/HDDS-4123
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Minor
>
> Finish the existing implementation of OpenKeyDeletingService#call. Start the 
> OpenKeyCleanupService from the KeyManagerImpl class. Read from the existing 
> configuration setting ozone.open.key.expire.threshold to set the frequency 
> that the service should run. Add integration tests to verify its 
> functionality.
>  
> Implement the `OpenKeyCleanupService` class, and start and stop the service 
> in `KeyManagerImpl`. The following configurations will be added to specify 
> the service's behavior:
> 1. ozone.open.key.cleanup.service.interval: Frequency the service should run.
> 2. ozone.open.key.expire.threshold: Time from creation after which an open 
> key is deemed expired.
> 3. ozone.open.key.cleanup.limit.per.task






[jira] [Updated] (HDDS-4363) Add metric to track the number of RocksDB open/close operations

2020-10-21 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDDS-4363:
--
Target Version/s: 1.1.0

> Add metric to track the number of RocksDB open/close operations
> ---
>
> Key: HDDS-4363
> URL: https://issues.apache.org/jira/browse/HDDS-4363
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 1.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Aryan Gupta
>Priority: Major
>
> We are benchmarking Ozone performance, and realized RocksDB open/close
> operations have a huge impact on performance. Each db open takes about 70ms
> on average and close takes about 1ms on average.
>  
> Having metrics on these operations will help understand DataNode performance 
> problems.






[jira] [Assigned] (HDDS-4363) Add metric to track the number of RocksDB open/close operations

2020-10-21 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-4363:
---

Assignee: Aryan Gupta

> Add metric to track the number of RocksDB open/close operations
> ---
>
> Key: HDDS-4363
> URL: https://issues.apache.org/jira/browse/HDDS-4363
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Affects Versions: 1.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Aryan Gupta
>Priority: Major
>
> We are benchmarking Ozone performance, and realized RocksDB open/close
> operations have a huge impact on performance. Each db open takes about 70ms
> on average and close takes about 1ms on average.
>  
> Having metrics on these operations will help understand DataNode performance 
> problems.






[jira] [Created] (HDDS-4363) Add metric to track the number of RocksDB open/close operations

2020-10-21 Thread Wei-Chiu Chuang (Jira)
Wei-Chiu Chuang created HDDS-4363:
-

 Summary: Add metric to track the number of RocksDB open/close 
operations
 Key: HDDS-4363
 URL: https://issues.apache.org/jira/browse/HDDS-4363
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Datanode
Affects Versions: 1.0.0
Reporter: Wei-Chiu Chuang


We are benchmarking Ozone performance, and realized RocksDB open/close
operations have a huge impact on performance. Each db open takes about 70ms on
average and close takes about 1ms on average.

 

Having metrics on these operations will help understand DataNode performance 
problems.
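
A minimal sketch of such a metric with Hadoop's metrics2 library (names and placement are illustrative, not the eventual implementation):
{code:java}
import org.apache.hadoop.metrics2.annotation.Metric;
import org.apache.hadoop.metrics2.annotation.Metrics;
import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
import org.apache.hadoop.metrics2.lib.MutableRate;

// A MutableRate records both the number of operations and their mean time,
// which covers the open/close counts and latencies described above.
@Metrics(about = "Datanode container DB open/close metrics", context = "dfs")
public final class ContainerDBMetrics {

  @Metric private MutableRate dbOpenLatency;
  @Metric private MutableRate dbCloseLatency;

  public static ContainerDBMetrics create() {
    return DefaultMetricsSystem.instance().register(
        "ContainerDBMetrics", "RocksDB open/close metrics",
        new ContainerDBMetrics());
  }

  public void addDbOpenLatency(long millis) {
    dbOpenLatency.add(millis);
  }

  public void addDbCloseLatency(long millis) {
    dbCloseLatency.add(millis);
  }
}
{code}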






[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1503: HDDS-4332: ListFileStatus - do lookup in directory and file tables

2020-10-21 Thread GitBox


rakeshadr commented on a change in pull request #1503:
URL: https://github.com/apache/hadoop-ozone/pull/1503#discussion_r509548807



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -2205,6 +2272,167 @@ private void listStatusFindKeyInTableCache(
 return fileStatusList;
   }
 
+  public List<OzoneFileStatus> listStatusV1(OmKeyArgs args, boolean recursive,
+  String startKey, long numEntries, String clientAddress)
+  throws IOException {
+Preconditions.checkNotNull(args, "Key args can not be null");
+
+List<OzoneFileStatus> fileStatusList = new ArrayList<>();
+if (numEntries <= 0) {
+  return fileStatusList;
+}
+
+String volumeName = args.getVolumeName();
+String bucketName = args.getBucketName();
+String keyName = args.getKeyName();
+// A map sorted by OmKey to combine results from TableCache and DB.
+TreeMap<String, OzoneFileStatus> cacheKeyMap = new TreeMap<>();
+String seekKeyInDB = "";
+long prefixKeyInDB = Long.MIN_VALUE;
+String prefixPath = keyName;
+
+metadataManager.getLock().acquireReadLock(BUCKET_LOCK, volumeName,
+bucketName);
+try {
+  if (Strings.isNullOrEmpty(startKey)) {
+OzoneFileStatus fileStatus = getFileStatus(args, clientAddress);
+if (fileStatus.isFile()) {
+  return Collections.singletonList(fileStatus);
+}
+
+// keyName is a directory
+if (fileStatus.getKeyInfo() != null) {
+  seekKeyInDB = fileStatus.getKeyInfo().getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = fileStatus.getKeyInfo().getObjectID();
+} else {
+  String bucketKey = metadataManager.getBucketKey(volumeName,
+  bucketName);
+  OmBucketInfo omBucketInfo =
+  metadataManager.getBucketTable().get(bucketKey);
+  seekKeyInDB = omBucketInfo.getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = omBucketInfo.getObjectID();
+}
+  } else {
+// startKey will be used in iterator seek and sets the beginning point
+// for key traversal.
+// key name will be used as parent ID where the user has requested to
+// list the keys from.
+OzoneFileStatus fileStatusInfo = getOzoneFileStatusV1(volumeName,
+bucketName, startKey, false, null);
+if (fileStatusInfo != null) {
+  prefixKeyInDB = fileStatusInfo.getKeyInfo().getParentObjectID();
+  seekKeyInDB = prefixKeyInDB + OZONE_URI_DELIMITER
+  + fileStatusInfo.getKeyInfo().getFileName();
+}
+  }
+
+  // Not required to search in TableCache because all the deleted keys
+  // are marked directly in directory table or in key table by breaking
+  // the pointer to its sub-dirs. So, there is no issue of inconsistency.
+  int countEntries = 0;

Review comment:
   Thanks @linyiqun for pointing out this case. I have implemented the File & Dir Table cache logic in the latest commit. As the plan is to update the Dir & File Table directly during the delete API, ideally there is no need to consider the deletedTableCache. Will revisit this logic once we implement the delete API.
   
   I have commented in [HDDS-4358 jira](https://issues.apache.org/jira/browse/HDDS-4358?focusedCommentId=17218477&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17218477)








[jira] [Commented] (HDDS-4358) Delete : make delete an atomic ops for leaf node(empty directory or file)

2020-10-21 Thread Rakesh Radhakrishnan (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17218477#comment-17218477
 ] 

Rakesh Radhakrishnan commented on HDDS-4358:


+Note:+ Please revisit {{ozoneFS#listStatus}} and {{ozoneFS#getFileStatus}} 
implementation and do necessary changes, if needed.

> Delete : make delete an atomic ops for leaf node(empty directory or file)
> -
>
> Key: HDDS-4358
> URL: https://issues.apache.org/jira/browse/HDDS-4358
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Rakesh Radhakrishnan
>Assignee: Rakesh Radhakrishnan
>Priority: Major
>
> This task handles only empty directory and file deletions. Recursive deletes 
> will be handled separately in another Jira task.
> Here in this jira, we consider only new Ozone FS client talking to new OM 
> server. Later, I will raise separate Jira task to handle compatibilities 
> across different client/server versions.






[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1503: HDDS-4332: ListFileStatus - do lookup in directory and file tables

2020-10-21 Thread GitBox


rakeshadr commented on a change in pull request #1503:
URL: https://github.com/apache/hadoop-ozone/pull/1503#discussion_r509543716



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -2205,6 +2272,167 @@ private void listStatusFindKeyInTableCache(
 return fileStatusList;
   }
 
+  public List<OzoneFileStatus> listStatusV1(OmKeyArgs args, boolean recursive,
+  String startKey, long numEntries, String clientAddress)
+  throws IOException {
+Preconditions.checkNotNull(args, "Key args can not be null");
+
+List<OzoneFileStatus> fileStatusList = new ArrayList<>();
+if (numEntries <= 0) {
+  return fileStatusList;
+}
+
+String volumeName = args.getVolumeName();
+String bucketName = args.getBucketName();
+String keyName = args.getKeyName();
+// A map sorted by OmKey to combine results from TableCache and DB.
+TreeMap<String, OzoneFileStatus> cacheKeyMap = new TreeMap<>();
+String seekKeyInDB = "";
+long prefixKeyInDB = Long.MIN_VALUE;
+String prefixPath = keyName;
+
+metadataManager.getLock().acquireReadLock(BUCKET_LOCK, volumeName,
+bucketName);
+try {
+  if (Strings.isNullOrEmpty(startKey)) {
+OzoneFileStatus fileStatus = getFileStatus(args, clientAddress);
+if (fileStatus.isFile()) {
+  return Collections.singletonList(fileStatus);
+}
+
+// keyName is a directory
+if (fileStatus.getKeyInfo() != null) {
+  seekKeyInDB = fileStatus.getKeyInfo().getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = fileStatus.getKeyInfo().getObjectID();
+} else {
+  String bucketKey = metadataManager.getBucketKey(volumeName,
+  bucketName);
+  OmBucketInfo omBucketInfo =
+  metadataManager.getBucketTable().get(bucketKey);
+  seekKeyInDB = omBucketInfo.getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = omBucketInfo.getObjectID();
+}
+  } else {
+// startKey will be used in iterator seek and sets the beginning point
+// for key traversal.
+// key name will be used as parent ID where the user has requested to
+// list the keys from.
+OzoneFileStatus fileStatusInfo = getOzoneFileStatusV1(volumeName,
+bucketName, startKey, false, null);
+if (fileStatusInfo != null) {
+  prefixKeyInDB = fileStatusInfo.getKeyInfo().getParentObjectID();
+  seekKeyInDB = prefixKeyInDB + OZONE_URI_DELIMITER
+  + fileStatusInfo.getKeyInfo().getFileName();
+}
+  }
+
+  // Not required to search in TableCache because all the deleted keys
+  // are marked directly in directory table or in key table by breaking
+  // the pointer to its sub-dirs. So, there is no issue of inconsistency.
+  int countEntries = 0;
+  // Seek the given key in key table.
+  countEntries = getFilesFromDirectory(cacheKeyMap, seekKeyInDB,
+  prefixPath, countEntries, numEntries, prefixKeyInDB);
+  // Seek the given key in dir table.
+  Table<String, OmDirectoryInfo> dirTable = metadataManager.getDirectoryTable();
+  TableIterator<String, ? extends Table.KeyValue<String, OmDirectoryInfo>>
+  iterator = dirTable.iterator();
+
+  iterator.seek(seekKeyInDB);
+
+  while (iterator.hasNext() && numEntries - countEntries > 0) {
+String entryInDb = iterator.key();
+OmDirectoryInfo dirInfo = iterator.value().getValue();
+if (!isImmediateChild(dirInfo.getParentObjectID(), prefixKeyInDB)) {
+  break;
+}
+
+if (recursive) {
+  // for recursive list all the entries
+  prefixPath = OMFileRequest.getAbsolutePath(prefixPath,
+  dirInfo.getName());
+  OmKeyInfo omKeyInfo = OMFileRequest.getOmKeyInfo(volumeName,
+  bucketName, dirInfo, prefixPath);
+  cacheKeyMap.put(entryInDb,
+  new OzoneFileStatus(omKeyInfo, 0, true));
+  ++countEntries;
+  // files from this directory
+  seekKeyInDB = dirInfo.getObjectID() + OZONE_URI_DELIMITER;

Review comment:
   Done

##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
##
@@ -583,4 +588,112 @@ public static OmKeyInfo getOmKeyInfoFromFileTable(boolean openFileTable,
 return dbOmKeyInfo;
   }
 
+  /**
+   * Gets OmKeyInfo if exists for the given key name in the DB.
+   *
+   * @param omMetadataMgr metadata manager
+   * @param volumeNamevolume name
+   * @param bucketNamebucket name
+   * @param keyName   key name
+   * @param scmBlockSize  scm block size
+   * @return OzoneFileStatus
+   * @throws IOException DB failure
+   */
+  @Nullable
+  public static OzoneFileStatus getOMKeyInfoIfExists(
+  OMMetadataManager omMetadataMgr, String volumeName, String bucketName,
+  String keyName, long scmBlockSize) throws IOException 

[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1503: HDDS-4332: ListFileStatus - do lookup in directory and file tables

2020-10-21 Thread GitBox


rakeshadr commented on a change in pull request #1503:
URL: https://github.com/apache/hadoop-ozone/pull/1503#discussion_r509543376



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -2205,6 +2272,167 @@ private void listStatusFindKeyInTableCache(
 return fileStatusList;
   }
 
+  public List<OzoneFileStatus> listStatusV1(OmKeyArgs args, boolean recursive,
+  String startKey, long numEntries, String clientAddress)
+  throws IOException {
+Preconditions.checkNotNull(args, "Key args can not be null");
+
+List<OzoneFileStatus> fileStatusList = new ArrayList<>();
+if (numEntries <= 0) {
+  return fileStatusList;
+}
+
+String volumeName = args.getVolumeName();
+String bucketName = args.getBucketName();
+String keyName = args.getKeyName();
+// A map sorted by OmKey to combine results from TableCache and DB.
+TreeMap<String, OzoneFileStatus> cacheKeyMap = new TreeMap<>();
+String seekKeyInDB = "";
+long prefixKeyInDB = Long.MIN_VALUE;
+String prefixPath = keyName;
+
+metadataManager.getLock().acquireReadLock(BUCKET_LOCK, volumeName,
+bucketName);
+try {
+  if (Strings.isNullOrEmpty(startKey)) {
+OzoneFileStatus fileStatus = getFileStatus(args, clientAddress);
+if (fileStatus.isFile()) {
+  return Collections.singletonList(fileStatus);
+}
+
+// keyName is a directory
+if (fileStatus.getKeyInfo() != null) {
+  seekKeyInDB = fileStatus.getKeyInfo().getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = fileStatus.getKeyInfo().getObjectID();
+} else {
+  String bucketKey = metadataManager.getBucketKey(volumeName,
+  bucketName);
+  OmBucketInfo omBucketInfo =
+  metadataManager.getBucketTable().get(bucketKey);
+  seekKeyInDB = omBucketInfo.getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = omBucketInfo.getObjectID();
+}
+  } else {
+// startKey will be used in iterator seek and sets the beginning point
+// for key traversal.
+// key name will be used as parent ID where the user has requested to
+// list the keys from.
+OzoneFileStatus fileStatusInfo = getOzoneFileStatusV1(volumeName,
+bucketName, startKey, false, null);

Review comment:
   Good point. I kept this as an open point now and not addressed in my 
latest commit. Need to do some more testing to understand the existing behavior.








[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1503: HDDS-4332: ListFileStatus - do lookup in directory and file tables

2020-10-21 Thread GitBox


rakeshadr commented on a change in pull request #1503:
URL: https://github.com/apache/hadoop-ozone/pull/1503#discussion_r509543538



##
File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -2205,6 +2272,167 @@ private void listStatusFindKeyInTableCache(
 return fileStatusList;
   }
 
+  public List<OzoneFileStatus> listStatusV1(OmKeyArgs args, boolean recursive,
+  String startKey, long numEntries, String clientAddress)
+  throws IOException {
+Preconditions.checkNotNull(args, "Key args can not be null");
+
+List<OzoneFileStatus> fileStatusList = new ArrayList<>();
+if (numEntries <= 0) {
+  return fileStatusList;
+}
+
+String volumeName = args.getVolumeName();
+String bucketName = args.getBucketName();
+String keyName = args.getKeyName();
+// A map sorted by OmKey to combine results from TableCache and DB.
+TreeMap<String, OzoneFileStatus> cacheKeyMap = new TreeMap<>();
+String seekKeyInDB = "";
+long prefixKeyInDB = Long.MIN_VALUE;
+String prefixPath = keyName;
+
+metadataManager.getLock().acquireReadLock(BUCKET_LOCK, volumeName,
+bucketName);
+try {
+  if (Strings.isNullOrEmpty(startKey)) {
+OzoneFileStatus fileStatus = getFileStatus(args, clientAddress);
+if (fileStatus.isFile()) {
+  return Collections.singletonList(fileStatus);
+}
+
+// keyName is a directory
+if (fileStatus.getKeyInfo() != null) {
+  seekKeyInDB = fileStatus.getKeyInfo().getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = fileStatus.getKeyInfo().getObjectID();
+} else {
+  String bucketKey = metadataManager.getBucketKey(volumeName,
+  bucketName);
+  OmBucketInfo omBucketInfo =
+  metadataManager.getBucketTable().get(bucketKey);
+  seekKeyInDB = omBucketInfo.getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = omBucketInfo.getObjectID();
+}
+  } else {
+// startKey will be used in iterator seek and sets the beginning point
+// for key traversal.
+// key name will be used as parent ID where the user has requested to
+// list the keys from.
+OzoneFileStatus fileStatusInfo = getOzoneFileStatusV1(volumeName,
+bucketName, startKey, false, null);
+if (fileStatusInfo != null) {
+  prefixKeyInDB = fileStatusInfo.getKeyInfo().getParentObjectID();
+  seekKeyInDB = prefixKeyInDB + OZONE_URI_DELIMITER
+  + fileStatusInfo.getKeyInfo().getFileName();
+}
+  }
+
+  // Not required to search in TableCache because all the deleted keys
+  // are marked directly in directory table or in key table by breaking
+  // the pointer to its sub-dirs. So, there is no issue of inconsistency.
+  int countEntries = 0;
+  // Seek the given key in key table.
+  countEntries = getFilesFromDirectory(cacheKeyMap, seekKeyInDB,
+  prefixPath, countEntries, numEntries, prefixKeyInDB);
+  // Seek the given key in dir table.
+  Table<String, OmDirectoryInfo> dirTable = metadataManager.getDirectoryTable();
+  TableIterator<String, ? extends Table.KeyValue<String, OmDirectoryInfo>>
+  iterator = dirTable.iterator();
+
+  iterator.seek(seekKeyInDB);
+
+  while (iterator.hasNext() && numEntries - countEntries > 0) {
+String entryInDb = iterator.key();
+OmDirectoryInfo dirInfo = iterator.value().getValue();
+if (!isImmediateChild(dirInfo.getParentObjectID(), prefixKeyInDB)) {
+  break;
+}
+
+if (recursive) {
+  // for recursive list all the entries
+  prefixPath = OMFileRequest.getAbsolutePath(prefixPath,
+  dirInfo.getName());
+  OmKeyInfo omKeyInfo = OMFileRequest.getOmKeyInfo(volumeName,
+  bucketName, dirInfo, prefixPath);
+  cacheKeyMap.put(entryInDb,
+  new OzoneFileStatus(omKeyInfo, 0, true));
+  ++countEntries;
+  // files from this directory
+  seekKeyInDB = dirInfo.getObjectID() + OZONE_URI_DELIMITER;
+  countEntries = getFilesFromDirectory(cacheKeyMap, seekKeyInDB,
+  prefixPath, countEntries, numEntries, prefixKeyInDB);
+} else {
+  String dirName = OMFileRequest.getAbsolutePath(prefixPath,
+  dirInfo.getName());
+  OmKeyInfo omKeyInfo = OMFileRequest.getOmKeyInfo(volumeName,
+  bucketName, dirInfo, dirName);
+  cacheKeyMap.put(entryInDb,
+  new OzoneFileStatus(omKeyInfo, 0, true));
+  countEntries++;
+}
+// move to next entry in the table
+iterator.next();
+  }
+} finally {
+  metadataManager.getLock().releaseReadLock(BUCKET_LOCK, volumeName,
+  bucketName);
+}
+
+int countEntries = 0;
+// Convert results in cacheKeyMap to List
+for (Map.Entry<String, OzoneFileStatus> entry : 
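
The quoted hunk is cut off in the archive at this point. Purely as an
illustration (this is not the PR's actual code), the remaining conversion step
would presumably walk the sorted map and cap the result at numEntries, along
these lines:

    // Illustrative sketch only; the tail of the method is not visible here.
    // Assumes cacheKeyMap, fileStatusList, countEntries and numEntries are
    // the variables declared in the hunk above.
    for (Map.Entry<String, OzoneFileStatus> entry : cacheKeyMap.entrySet()) {
      fileStatusList.add(entry.getValue());
      if (++countEntries >= numEntries) {
        break;
      }
    }
    cacheKeyMap.clear();
    return fileStatusList;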

[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1503: HDDS-4332: ListFileStatus - do lookup in directory and file tables

2020-10-21 Thread GitBox


rakeshadr commented on a change in pull request #1503:
URL: https://github.com/apache/hadoop-ozone/pull/1503#discussion_r509535852



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -2205,6 +2272,167 @@ private void listStatusFindKeyInTableCache(
 return fileStatusList;
   }
 
+  public List<OzoneFileStatus> listStatusV1(OmKeyArgs args, boolean recursive,
+  String startKey, long numEntries, String clientAddress)
+  throws IOException {
+Preconditions.checkNotNull(args, "Key args can not be null");
+
+List<OzoneFileStatus> fileStatusList = new ArrayList<>();
+if (numEntries <= 0) {
+  return fileStatusList;
+}
+
+String volumeName = args.getVolumeName();
+String bucketName = args.getBucketName();
+String keyName = args.getKeyName();
+// A map sorted by OmKey to combine results from TableCache and DB.
+TreeMap<String, OzoneFileStatus> cacheKeyMap = new TreeMap<>();
+String seekKeyInDB = "";
+long prefixKeyInDB = Long.MIN_VALUE;
+String prefixPath = keyName;
+
+metadataManager.getLock().acquireReadLock(BUCKET_LOCK, volumeName,
+bucketName);
+try {
+  if (Strings.isNullOrEmpty(startKey)) {
+OzoneFileStatus fileStatus = getFileStatus(args, clientAddress);
+if (fileStatus.isFile()) {
+  return Collections.singletonList(fileStatus);
+}
+
+// keyName is a directory
+if (fileStatus.getKeyInfo() != null) {
+  seekKeyInDB = fileStatus.getKeyInfo().getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = fileStatus.getKeyInfo().getObjectID();
+} else {
+  String bucketKey = metadataManager.getBucketKey(volumeName,
+  bucketName);
+  OmBucketInfo omBucketInfo =
+  metadataManager.getBucketTable().get(bucketKey);
+  seekKeyInDB = omBucketInfo.getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = omBucketInfo.getObjectID();
+}
+  } else {
+// startKey will be used in iterator seek and sets the beginning point
+// for key traversal.
+// key name will be used as parent ID where the user has requested to
+// list the keys from.
+OzoneFileStatus fileStatusInfo = getOzoneFileStatusV1(volumeName,
+bucketName, startKey, false, null);
+if (fileStatusInfo != null) {
+  prefixKeyInDB = fileStatusInfo.getKeyInfo().getParentObjectID();
+  seekKeyInDB = prefixKeyInDB + OZONE_URI_DELIMITER
+  + fileStatusInfo.getKeyInfo().getFileName();
+}
+  }
+
+  // Not required to search in TableCache because all the deleted keys
+  // are marked directly in directory table or in key table by breaking
+  // the pointer to its sub-dirs. So, there is no issue of inconsistency.
+  int countEntries = 0;
+  // Seek the given key in key table.
+  countEntries = getFilesFromDirectory(cacheKeyMap, seekKeyInDB,
+  prefixPath, countEntries, numEntries, prefixKeyInDB);
+  // Seek the given key in dir table.
+  Table<String, OmDirectoryInfo> dirTable = metadataManager.getDirectoryTable();
+  TableIterator<String, ? extends Table.KeyValue<String, OmDirectoryInfo>>
+  iterator = dirTable.iterator();
+
+  iterator.seek(seekKeyInDB);
+
+  while (iterator.hasNext() && numEntries - countEntries > 0) {
+String entryInDb = iterator.key();
+OmDirectoryInfo dirInfo = iterator.value().getValue();
+if (!isImmediateChild(dirInfo.getParentObjectID(), prefixKeyInDB)) {
+  break;
+}
+
+if (recursive) {
+  // for recursive list all the entries
+  prefixPath = OMFileRequest.getAbsolutePath(prefixPath,
+  dirInfo.getName());
+  OmKeyInfo omKeyInfo = OMFileRequest.getOmKeyInfo(volumeName,
+  bucketName, dirInfo, prefixPath);
+  cacheKeyMap.put(entryInDb,
+  new OzoneFileStatus(omKeyInfo, 0, true));
+  ++countEntries;
+  // files from this directory
+  seekKeyInDB = dirInfo.getObjectID() + OZONE_URI_DELIMITER;
+  countEntries = getFilesFromDirectory(cacheKeyMap, seekKeyInDB,
+  prefixPath, countEntries, numEntries, prefixKeyInDB);
+} else {
+  String dirName = OMFileRequest.getAbsolutePath(prefixPath,
+  dirInfo.getName());
+  OmKeyInfo omKeyInfo = OMFileRequest.getOmKeyInfo(volumeName,
+  bucketName, dirInfo, dirName);
+  cacheKeyMap.put(entryInDb,
+  new OzoneFileStatus(omKeyInfo, 0, true));
+  countEntries++;
+}
+// move to next entry in the table
+iterator.next();
+  }
+} finally {
+  metadataManager.getLock().releaseReadLock(BUCKET_LOCK, volumeName,
+  bucketName);
+}
+
+int countEntries = 0;
+// Convert results in cacheKeyMap to List
+for (Map.Entry<String, OzoneFileStatus> entry : 

[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1503: HDDS-4332: ListFileStatus - do lookup in directory and file tables

2020-10-21 Thread GitBox


rakeshadr commented on a change in pull request #1503:
URL: https://github.com/apache/hadoop-ozone/pull/1503#discussion_r509534970



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -1831,6 +1838,62 @@ private OzoneFileStatus getOzoneFileStatus(String volumeName,
 FILE_NOT_FOUND);
   }
 
+
+  private OzoneFileStatus getOzoneFileStatusV1(String volumeName,
+ String bucketName,
+ String keyName,
+ boolean sortDatanodes,
+ String clientAddress)
+  throws IOException {
+OzoneFileStatus fileStatus = null;
+metadataManager.getLock().acquireReadLock(BUCKET_LOCK, volumeName,
+bucketName);
+try {
+  // Check if this is the root of the filesystem.
+  if (keyName.length() == 0) {
+validateBucket(volumeName, bucketName);
+return new OzoneFileStatus();
+  }
+
+  fileStatus = OMFileRequest.getOMKeyInfoIfExists(metadataManager,
+  volumeName, bucketName, keyName, scmBlockSize);
+
+  // Check if the key is a directory.

Review comment:
   Done





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4359) Expose VolumeIOStats in DN JMX

2020-10-21 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4359?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-4359:
-
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Expose VolumeIOStats in DN JMX
> --
>
> Key: HDDS-4359
> URL: https://issues.apache.org/jira/browse/HDDS-4359
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Datanode
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Expose VolumeIOStats in DN JMX web endpoint.
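
For anyone verifying this, Hadoop-style daemons publish registered MBeans as
JSON through the HTTP /jmx servlet, and the same beans can be read in-process
over JMX. A minimal probe sketch, assuming the stats land in a bean whose name
contains "VolumeIOStats" (the object name and the ReadBytes attribute are
assumptions for illustration, not names taken from the patch):

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public final class VolumeIoStatsProbe {
      public static void main(String[] args) throws Exception {
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        // Pattern-match on the (assumed) bean name; one bean per volume.
        ObjectName pattern =
            new ObjectName("Hadoop:service=HddsDatanode,name=VolumeIOStats*");
        for (ObjectName name : mbs.queryNames(pattern, null)) {
          System.out.println(name + " ReadBytes="
              + mbs.getAttribute(name, "ReadBytes"));
        }
      }
    }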



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl merged pull request #1506: HDDS-4359. Expose VolumeIOStats in DN JMX

2020-10-21 Thread GitBox


smengcl merged pull request #1506:
URL: https://github.com/apache/hadoop-ozone/pull/1506


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel merged pull request #1491: HDDS-4340. Add Operational State to the datanode list command

2020-10-21 Thread GitBox


sodonnel merged pull request #1491:
URL: https://github.com/apache/hadoop-ozone/pull/1491


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4362) Change hadoop32 test to use 3.2 image

2020-10-21 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-4362:
--

 Summary: Change hadoop32 test to use 3.2 image
 Key: HDDS-4362
 URL: https://issues.apache.org/jira/browse/HDDS-4362
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: test
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


The {{ozone-mr/hadoop32}} acceptance test currently uses the "latest"
{{hadoop:3}} docker image, which at the moment is Hadoop 3.2. If that tag gets
updated to Hadoop 3.3, the Ozone acceptance test will break. We should
explicitly pin a 3.2 release-based image, for example as sketched below.
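
A minimal sketch of the kind of pin being proposed, as a compose-file excerpt
(the service name and the 3.2.1 tag are illustrative assumptions, not taken
from this issue):

    # Pin a concrete 3.2 release instead of the floating hadoop:3 tag.
    services:
      nodemanager:
        image: hadoop:3.2.1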



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel merged pull request #1501: HDDS-4323. Add integration tests for putting nodes into maintenance and fix any issues uncovered in the tests

2020-10-21 Thread GitBox


sodonnel merged pull request #1501:
URL: https://github.com/apache/hadoop-ozone/pull/1501


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4323) Add integration tests for putting nodes into maintenance and fix any issues uncovered in the tests

2020-10-21 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell resolved HDDS-4323.
-
Fix Version/s: 1.1.0
   Resolution: Fixed

> Add integration tests for putting nodes into maintenance and fix any issues 
> uncovered in the tests
> --
>
> Key: HDDS-4323
> URL: https://issues.apache.org/jira/browse/HDDS-4323
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 1.1.0
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Add a series of integration tests to prove that nodes can enter and leave
> maintenance correctly, and address any issues found in the code while adding
> the tests.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] GlenGeng opened a new pull request #1510: [Draft]HDDS-4191: failover proxy for container location protocol

2020-10-21 Thread GitBox


GlenGeng opened a new pull request #1510:
URL: https://github.com/apache/hadoop-ozone/pull/1510


   ## What changes were proposed in this pull request?
   
   (Please fill in changes proposed in this fix)
   
   ## What is the link to the Apache JIRA
   
   (Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HDDS-. Fix a typo in YYY.)
   
   Please replace this section with the link to the Apache JIRA)
   
   ## How was this patch tested?
   
   (Please explain how this patch was tested. Ex: unit tests, manual tests)
   (If this patch involves UI changes, please attach a screen-shot; otherwise, 
remove this)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] GlenGeng closed pull request #1509: HDDS-4191: Please ignore.

2020-10-21 Thread GitBox


GlenGeng closed pull request #1509:
URL: https://github.com/apache/hadoop-ozone/pull/1509


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4191) Add failover proxy for SCM container client

2020-10-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4191:
-
Labels: pull-request-available  (was: )

> Add failover proxy for SCM container client
> ---
>
> Key: HDDS-4191
> URL: https://issues.apache.org/jira/browse/HDDS-4191
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Li Cheng
>Assignee: Li Cheng
>Priority: Major
>  Labels: pull-request-available
>
> Take advantage of failover proxy in HDDS-3188 and have failover proxy for SCM 
> container client as well
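
As background, the pattern from HDDS-3188 builds on Hadoop's generic
retry/failover machinery. A minimal sketch of wiring it up for the SCM
container client, assuming a FailoverProxyProvider implementation exists for
StorageContainerLocationProtocol (the factory and provider here are
hypothetical names, not classes from the PR):

    import org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocol;
    import org.apache.hadoop.io.retry.FailoverProxyProvider;
    import org.apache.hadoop.io.retry.RetryPolicies;
    import org.apache.hadoop.io.retry.RetryPolicy;
    import org.apache.hadoop.io.retry.RetryProxy;

    // Hypothetical factory: wraps the protocol in a retry proxy that fails
    // over to the next SCM on network errors, up to 10 attempts.
    final class ScmContainerClientFactory {
      static StorageContainerLocationProtocol createRetryProxy(
          FailoverProxyProvider<StorageContainerLocationProtocol> provider) {
        RetryPolicy policy = RetryPolicies.failoverOnNetworkException(
            RetryPolicies.TRY_ONCE_THEN_FAIL, 10);
        return (StorageContainerLocationProtocol) RetryProxy.create(
            StorageContainerLocationProtocol.class, provider, policy);
      }
    }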



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] GlenGeng opened a new pull request #1509: HDDS-4191: Please ignore.

2020-10-21 Thread GitBox


GlenGeng opened a new pull request #1509:
URL: https://github.com/apache/hadoop-ozone/pull/1509


   ## What changes were proposed in this pull request?
   
   (Please fill in changes proposed in this fix)
   
   ## What is the link to the Apache JIRA
   
   (Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HDDS-. Fix a typo in YYY.)
   
   Please replace this section with the link to the Apache JIRA)
   
   ## How was this patch tested?
   
   (Please explain how this patch was tested. Ex: unit tests, manual tests)
   (If this patch involves UI changes, please attach a screen-shot; otherwise, 
remove this)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1503: HDDS-4332: ListFileStatus - do lookup in directory and file tables

2020-10-21 Thread GitBox


linyiqun commented on a change in pull request #1503:
URL: https://github.com/apache/hadoop-ozone/pull/1503#discussion_r509163394



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/KeyManagerImpl.java
##
@@ -2205,6 +2272,167 @@ private void listStatusFindKeyInTableCache(
 return fileStatusList;
   }
 
+  public List<OzoneFileStatus> listStatusV1(OmKeyArgs args, boolean recursive,
+  String startKey, long numEntries, String clientAddress)
+  throws IOException {
+Preconditions.checkNotNull(args, "Key args can not be null");
+
+List<OzoneFileStatus> fileStatusList = new ArrayList<>();
+if (numEntries <= 0) {
+  return fileStatusList;
+}
+
+String volumeName = args.getVolumeName();
+String bucketName = args.getBucketName();
+String keyName = args.getKeyName();
+// A map sorted by OmKey to combine results from TableCache and DB.
+TreeMap<String, OzoneFileStatus> cacheKeyMap = new TreeMap<>();
+String seekKeyInDB = "";
+long prefixKeyInDB = Long.MIN_VALUE;
+String prefixPath = keyName;
+
+metadataManager.getLock().acquireReadLock(BUCKET_LOCK, volumeName,
+bucketName);
+try {
+  if (Strings.isNullOrEmpty(startKey)) {
+OzoneFileStatus fileStatus = getFileStatus(args, clientAddress);
+if (fileStatus.isFile()) {
+  return Collections.singletonList(fileStatus);
+}
+
+// keyName is a directory
+if (fileStatus.getKeyInfo() != null) {
+  seekKeyInDB = fileStatus.getKeyInfo().getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = fileStatus.getKeyInfo().getObjectID();
+} else {
+  String bucketKey = metadataManager.getBucketKey(volumeName,
+  bucketName);
+  OmBucketInfo omBucketInfo =
+  metadataManager.getBucketTable().get(bucketKey);
+  seekKeyInDB = omBucketInfo.getObjectID()
+  + OZONE_URI_DELIMITER;
+  prefixKeyInDB = omBucketInfo.getObjectID();

Review comment:
   Okay, got it.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #1498: HDDS-4339. Allow AWSSignatureProcessor init when aws signature is absent.

2020-10-21 Thread GitBox


timmylicheng commented on a change in pull request #1498:
URL: https://github.com/apache/hadoop-ozone/pull/1498#discussion_r509089823



##
File path: 
hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java
##
@@ -116,6 +116,13 @@ private OzoneClient getClient(OzoneConfiguration config) throws IOException {
 }
   }
 
+  // ONLY validate aws access id when needed.
+  private void validateAccessId(String awsAccessId) throws Exception {
+if (awsAccessId == null || awsAccessId.equals("")) {
+  throw S3_AUTHINFO_CREATION_ERROR;

Review comment:
   Opened https://issues.apache.org/jira/browse/HDDS-4361 to track returning
proper s3 native error messages for bad requests.
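
As a side note, the null-or-empty check could lean on Guava, which the code
base already uses elsewhere (for example Strings.isNullOrEmpty in
KeyManagerImpl). A minimal equivalent sketch, reusing the
S3_AUTHINFO_CREATION_ERROR constant from the snippet above:

    // Illustrative equivalent of the check above; behavior is unchanged.
    private void validateAccessId(String awsAccessId) throws Exception {
      if (com.google.common.base.Strings.isNullOrEmpty(awsAccessId)) {
        throw S3_AUTHINFO_CREATION_ERROR;
      }
    }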





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4361) S3 native error messages when header is illegal

2020-10-21 Thread Li Cheng (Jira)
Li Cheng created HDDS-4361:
--

 Summary: S3 native error messages when header is illegal
 Key: HDDS-4361
 URL: https://issues.apache.org/jira/browse/HDDS-4361
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: S3
Affects Versions: 1.0.0
Reporter: Li Cheng


Following up on https://issues.apache.org/jira/browse/HDDS-4339 and
https://issues.apache.org/jira/browse/HDDS-3843, missing auth or other info in
the header may cause the S3 client to throw an NPE or log an error message.

Instead, s3g should return S3 native error messages for requests with an
invalid header or other problems; the S3 client itself should still be able to
initialize.
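
For reference, an S3 native error is the standard XML error body that AWS SDKs
and tools know how to parse, roughly of this shape (the code and values below
are illustrative, not the exact response s3g would produce):

    <?xml version="1.0" encoding="UTF-8"?>
    <Error>
      <Code>AccessDenied</Code>
      <Message>Malformed or missing Authorization header.</Message>
      <Resource>/bucket/key</Resource>
      <RequestId>0000000000000000</RequestId>
    </Error>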



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on pull request #1506: HDDS-4359. Expose VolumeIOStats in DN JMX

2020-10-21 Thread GitBox


smengcl commented on pull request #1506:
URL: https://github.com/apache/hadoop-ozone/pull/1506#issuecomment-713336320


   > I see you have manually tested this change and share the result above.
   > +1 from me, : ).
   
   Thanks for reviewing and +1'ing this @linyiqun :)
   Just added a test in `TestContainerMetrics`. I'll wait for a green run.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org