[jira] [Updated] (HDDS-3513) OzoneConfiguration missing ozone-site.xml by default.

2020-05-05 Thread Simon Su (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Su updated HDDS-3513:
---
Labels: pull-requests-available  (was: pull-request-available)

> OzoneConfiguration missing ozone-site.xml by default.
> -
>
> Key: HDDS-3513
> URL: https://issues.apache.org/jira/browse/HDDS-3513
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.5.0
>Reporter: Simon Su
>Assignee: Simon Su
>Priority: Minor
>  Labels: pull-requests-available
> Fix For: 0.6.0
>
> Attachments: add_ozone_conf.patch
>
>
> OzoneConfiguration does not load ozone-site.xml by default, which may cause 
> some issues when setting up a secure Ozone cluster.
> When we start an S3Gateway in secure mode (Kerberos), the S3Gateway HTTP 
> server uses UserGroupInformation to check whether it is in secure mode, 
> which loads a plain Configuration to check whether 
> "hadoop.security.authentication" is set to "KERBEROS". Unfortunately, the 
> default configuration only loads core-site.xml, core-default.xml, 
> hdfs-site.xml, hdfs-default.xml and ozone-default.xml, and misses 
> ozone-site.xml. That means we have to configure 
> "hadoop.security.authentication" in one of those five default config files 
> if we want to start a secure S3Gateway.
> It's better to add ozone-site.xml into OzoneConfiguration by default, so we 
> don't need to duplicate the same configuration in different places.
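> 
> A minimal sketch of the idea (the placement of the call is an assumption; 
> the actual patch may hook it into OzoneConfiguration's static 
> initialization instead):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> 
> // Sketch only: registering ozone-site.xml as a *default* resource makes it
> // visible to every Configuration instance, including the plain
> // "new Configuration()" that UserGroupInformation creates internally,
> // not just to OzoneConfiguration instances.
> public final class OzoneSiteDefaults {
> 
>   private OzoneSiteDefaults() {
>   }
> 
>   public static void register() {
>     // addDefaultResource is a static, process-wide registration.
>     Configuration.addDefaultResource("ozone-site.xml");
>   }
> }
> {code}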



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-3544) Support multiple objects delete

2020-05-05 Thread Sammi Chen (Jira)
Sammi Chen created HDDS-3544:


 Summary: Support multiple objects delete
 Key: HDDS-3544
 URL: https://issues.apache.org/jira/browse/HDDS-3544
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Sammi Chen


Currently the delete speed is estimated at about 1 key/s, so it would take 
almost 3 hours to delete 10K keys. We need a multi-key delete command to speed 
up the deletion.

The initial idea is to provide a file to the command; the file contains all the 
key names to delete.
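
A minimal client-side sketch of the idea, assuming the key names are listed one 
per line in a local file (this still issues one deleteKey RPC per key; the 
proposed command would batch them into a single request):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.client.OzoneBucket;
import org.apache.hadoop.ozone.client.OzoneClient;
import org.apache.hadoop.ozone.client.OzoneClientFactory;

/** Sketch: delete every key listed (one per line) in the given file. */
public class MultiKeyDeleteSketch {
  public static void main(String[] args) throws IOException {
    String volumeName = args[0];
    String bucketName = args[1];
    String keyFile = args[2];
    List<String> keys = Files.readAllLines(Paths.get(keyFile));
    try (OzoneClient client =
        OzoneClientFactory.getRpcClient(new OzoneConfiguration())) {
      OzoneBucket bucket = client.getObjectStore()
          .getVolume(volumeName).getBucket(bucketName);
      for (String key : keys) {
        bucket.deleteKey(key); // one RPC per key; batching is the improvement
      }
    }
  }
}
{code}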



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bshashikant commented on pull request #767: HDDS-3064. Get Key is hung when READ delay is injected in chunk file path

2020-05-05 Thread GitBox


bshashikant commented on pull request #767:
URL: https://github.com/apache/hadoop-ozone/pull/767#issuecomment-624468971


   @elek, can we commit this now?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] Simon0806 commented on pull request #897: HDDS-3513. add ozone-site.xml by default

2020-05-05 Thread GitBox


Simon0806 commented on pull request #897:
URL: https://github.com/apache/hadoop-ozone/pull/897#issuecomment-624428578


   I have uploaded another patch. Do you think this is more feasible?
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] Simon0806 commented on pull request #897: HDDS-3513. add ozone-site.xml by default

2020-05-05 Thread GitBox


Simon0806 commented on pull request #897:
URL: https://github.com/apache/hadoop-ozone/pull/897#issuecomment-624428184


   I have made another patch to add the OzoneConfiguration to UGI when starting 
the Gateway.
   Do you think this is feasible?
   ```diff
   diff --git a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/Gateway.java b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/Gateway.java
   index 5d34181d6..467dd7107 100644
   --- a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/Gateway.java
   +++ b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/Gateway.java
   @@ -24,6 +24,8 @@
    import org.apache.hadoop.hdds.conf.OzoneConfiguration;
    import org.apache.hadoop.hdds.tracing.TracingUtil;
   
   +import org.apache.hadoop.security.SecurityUtil;
   +import org.apache.hadoop.security.UserGroupInformation;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import picocli.CommandLine.Command;
   @@ -51,6 +53,11 @@ public Void call() throws Exception {
        TracingUtil.initTracing("S3gateway", ozoneConfiguration);
        OzoneConfigurationHolder.setConfiguration(ozoneConfiguration);
        ozoneConfiguration.set("hadoop.http.authentication.type", "simple");
   +    if (SecurityUtil.getAuthenticationMethod(ozoneConfiguration).equals(
   +        UserGroupInformation.AuthenticationMethod.KERBEROS)) {
   +      UserGroupInformation.setConfiguration(ozoneConfiguration);
   +    }
   +
        httpServer = new S3GatewayHttpServer(ozoneConfiguration, "s3gateway");
        start();
        return null;
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] Simon0806 removed a comment on pull request #897: HDDS-3513. add ozone-site.xml by default

2020-05-05 Thread GitBox


Simon0806 removed a comment on pull request #897:
URL: https://github.com/apache/hadoop-ozone/pull/897#issuecomment-624428184


   I have made another patch to add the OzoneConfiguration to UGI when starting 
the Gateway.
   Do you think this is feasible?
   ```diff
   diff --git a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/Gateway.java b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/Gateway.java
   index 5d34181d6..467dd7107 100644
   --- a/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/Gateway.java
   +++ b/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/Gateway.java
   @@ -24,6 +24,8 @@
    import org.apache.hadoop.hdds.conf.OzoneConfiguration;
    import org.apache.hadoop.hdds.tracing.TracingUtil;
   
   +import org.apache.hadoop.security.SecurityUtil;
   +import org.apache.hadoop.security.UserGroupInformation;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import picocli.CommandLine.Command;
   @@ -51,6 +53,11 @@ public Void call() throws Exception {
        TracingUtil.initTracing("S3gateway", ozoneConfiguration);
        OzoneConfigurationHolder.setConfiguration(ozoneConfiguration);
        ozoneConfiguration.set("hadoop.http.authentication.type", "simple");
   +    if (SecurityUtil.getAuthenticationMethod(ozoneConfiguration).equals(
   +        UserGroupInformation.AuthenticationMethod.KERBEROS)) {
   +      UserGroupInformation.setConfiguration(ozoneConfiguration);
   +    }
   +
        httpServer = new S3GatewayHttpServer(ozoneConfiguration, "s3gateway");
        start();
        return null;
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] Simon0806 edited a comment on pull request #897: HDDS-3513. add ozone-site.xml by default

2020-05-05 Thread GitBox


Simon0806 edited a comment on pull request #897:
URL: https://github.com/apache/hadoop-ozone/pull/897#issuecomment-624422878


   > Hi @Simon0806
   > Thanks for the contribution and welcome to Ozone.
   > 
   > OzoneConfiguration() calls loadDefaults() and in loadDefaults() we add 
ozone-site.xml.
   > I see this has been already taken care of by the below code.
   > 
   > ```
   >public OzoneConfiguration() {
   >  OzoneConfiguration.activate();
   >  loadDefaults();
   >}
   > ```
   > 
   > ```
   >   private void loadDefaults() {
   > try {
   >   //there could be multiple ozone-default-generated.xml files on the
   >   // classpath, which are generated by the annotation processor.
   >   // Here we add all of them to the list of the available configuration.
   >   Enumeration<URL> generatedDefaults =
   >   OzoneConfiguration.class.getClassLoader().getResources(
   >   "ozone-default-generated.xml");
   >   while (generatedDefaults.hasMoreElements()) {
   > addResource(generatedDefaults.nextElement());
   >   }
   > } catch (IOException e) {
   >   e.printStackTrace();
   > }
   > addResource("ozone-site.xml");
   >   }
   > ```
   
   Hi Bharat,
   That's true, but they are different behaviors: the line you mentioned adds 
ozone-site.xml to the resources of OzoneConfiguration, but it does not add 
ozone-site.xml to the defaultResources of Configuration. The S3Gateway 
initialization uses UserGroupInformation.isSecurityEnabled() to check whether 
security has been configured.
   Here is the key piece of code:
   ```java
   private static void ensureInitialized() {
     if (!isInitialized()) {
       synchronized (UserGroupInformation.class) {
         if (!isInitialized()) { // someone might have beat us
           initialize(new Configuration(), false);
         }
       }
     }
   }
   ```
   
   Here it calls new Configuration(), which only loads the defaultResources: 
core-site.xml, hdfs-site.xml, core-default.xml, hdfs-default.xml and 
ozone-default.xml.
   When OM and SCM initialize they pass the OzoneConfiguration along, but the 
S3Gateway uses a plain Configuration by default.
   
   Or I can call UserGroupInformation.setConfiguration(conf) when starting up 
the S3Gateway service, to pass the OzoneConfiguration to UserGroupInformation.
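
   A minimal illustration of the difference described above (sketch only; the 
class name and printed values are assumptions, and it presumes ozone-site.xml 
on the classpath sets hadoop.security.authentication to "kerberos"):
   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.hdds.conf.OzoneConfiguration;

   /** Sketch: why UGI's internal "new Configuration()" misses ozone-site.xml. */
   public class DefaultResourceDemo {
     public static void main(String[] args) {
       // Plain Configuration loads only the registered default resources,
       // so a value set only in ozone-site.xml is invisible here.
       Configuration plain = new Configuration();
       System.out.println(plain.get("hadoop.security.authentication", "simple"));

       // OzoneConfiguration additionally calls addResource("ozone-site.xml"),
       // so the same value is visible on this instance only.
       OzoneConfiguration ozone = new OzoneConfiguration();
       System.out.println(ozone.get("hadoop.security.authentication", "simple"));
     }
   }
   ```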



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] Simon0806 edited a comment on pull request #897: HDDS-3513. add ozone-site.xml by default

2020-05-05 Thread GitBox


Simon0806 edited a comment on pull request #897:
URL: https://github.com/apache/hadoop-ozone/pull/897#issuecomment-624422878


   > Hi @Simon0806
   > Thanks for the contribution and welcome to Ozone.
   > 
   > OzoneConfiguration() calls loadDefaults() and in loadDefaults() we add 
ozone-site.xml.
   > I see this has been already taken care of by the below code.
   > 
   > ```
   >public OzoneConfiguration() {
   >  OzoneConfiguration.activate();
   >  loadDefaults();
   >}
   > ```
   > 
   > ```
   >   private void loadDefaults() {
   > try {
   >   //there could be multiple ozone-default-generated.xml files on the
   >   // classpath, which are generated by the annotation processor.
   >   // Here we add all of them to the list of the available configuration.
   >   Enumeration<URL> generatedDefaults =
   >   OzoneConfiguration.class.getClassLoader().getResources(
   >   "ozone-default-generated.xml");
   >   while (generatedDefaults.hasMoreElements()) {
   > addResource(generatedDefaults.nextElement());
   >   }
   > } catch (IOException e) {
   >   e.printStackTrace();
   > }
   > addResource("ozone-site.xml");
   >   }
   > ```
   
   Hi Bharat,
   That's true, but they are different behaviors: the line you mentioned adds 
ozone-site.xml to the resources of OzoneConfiguration, but it does not add 
ozone-site.xml to the defaultResources of Configuration. The S3Gateway 
initialization uses UserGroupInformation.isSecurityEnabled() to check whether 
security has been configured.
   Here is the key piece of code:
   ```java
   private static void ensureInitialized() {
     if (!isInitialized()) {
       synchronized (UserGroupInformation.class) {
         if (!isInitialized()) { // someone might have beat us
           initialize(new Configuration(), false);
         }
       }
     }
   }
   ```
   
   Here it calls new Configuration(), which only loads the defaultResources: 
core-site.xml, hdfs-site.xml, core-default.xml, hdfs-default.xml and 
ozone-default.xml.
   When OM and SCM initialize they pass the OzoneConfiguration along, but the 
S3Gateway uses a plain Configuration by default.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] Simon0806 edited a comment on pull request #897: HDDS-3513. add ozone-site.xml by default

2020-05-05 Thread GitBox


Simon0806 edited a comment on pull request #897:
URL: https://github.com/apache/hadoop-ozone/pull/897#issuecomment-624422878


   > Hi @Simon0806
   > Thanks for the contribution and welcome to Ozone.
   > 
   > OzoneConfiguration() calls loadDefaults() and in loadDefaults() we add 
ozone-site.xml.
   > I see this has been already taken care of by the below code.
   > 
   > ```
   >public OzoneConfiguration() {
   >  OzoneConfiguration.activate();
   >  loadDefaults();
   >}
   > ```
   > 
   > ```
   >   private void loadDefaults() {
   > try {
   >   //there could be multiple ozone-default-generated.xml files on the
   >   // classpath, which are generated by the annotation processor.
   >   // Here we add all of them to the list of the available configuration.
   >   Enumeration<URL> generatedDefaults =
   >   OzoneConfiguration.class.getClassLoader().getResources(
   >   "ozone-default-generated.xml");
   >   while (generatedDefaults.hasMoreElements()) {
   > addResource(generatedDefaults.nextElement());
   >   }
   > } catch (IOException e) {
   >   e.printStackTrace();
   > }
   > addResource("ozone-site.xml");
   >   }
   > ```
   
   Hi Bharat,
   That's true, but they are different behaviors: the line you mentioned adds 
ozone-site.xml to the resources of OzoneConfiguration, but it does not add 
ozone-site.xml to the defaultResources of Configuration. The S3Gateway 
initialization uses UserGroupInformation.isSecurityEnabled() to check whether 
security has been configured.
   Here is the key piece of code:
   ```java
   private static void ensureInitialized() {
     if (!isInitialized()) {
       synchronized (UserGroupInformation.class) {
         if (!isInitialized()) { // someone might have beat us
           initialize(new Configuration(), false); // plain Configuration!
         }
       }
     }
   }
   ```
   
   Here it calls new Configuration(), which only loads the defaultResources: 
core-site.xml, hdfs-site.xml, core-default.xml, hdfs-default.xml and 
ozone-default.xml.
   When OM and SCM initialize they pass the OzoneConfiguration along, but the 
S3Gateway uses a plain Configuration by default.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] Simon0806 commented on pull request #897: HDDS-3513. add ozone-site.xml by default

2020-05-05 Thread GitBox


Simon0806 commented on pull request #897:
URL: https://github.com/apache/hadoop-ozone/pull/897#issuecomment-624422878


   > Hi @Simon0806
   > Thanks for the contribution and welcome to Ozone.
   > 
   > OzoneConfiguration() calls loadDefaults() and in loadDefaults() we add 
ozone-site.xml.
   > I see this has been already taken care of by the below code.
   > 
   > ```
   >public OzoneConfiguration() {
   >  OzoneConfiguration.activate();
   >  loadDefaults();
   >}
   > ```
   > 
   > ```
   >   private void loadDefaults() {
   > try {
   >   //there could be multiple ozone-default-generated.xml files on the
   >   // classpath, which are generated by the annotation processor.
   >   // Here we add all of them to the list of the available configuration.
   >   Enumeration<URL> generatedDefaults =
   >   OzoneConfiguration.class.getClassLoader().getResources(
   >   "ozone-default-generated.xml");
   >   while (generatedDefaults.hasMoreElements()) {
   > addResource(generatedDefaults.nextElement());
   >   }
   > } catch (IOException e) {
   >   e.printStackTrace();
   > }
   > addResource("ozone-site.xml");
   >   }
   > ```
   
   Hi Bharat,
   That's true, but they are different behaviors: the line you mentioned adds 
ozone-site.xml to the resources of OzoneConfiguration, but it does not add 
ozone-site.xml to the defaultResources of Configuration. The S3Gateway 
initialization uses UserGroupInformation.isSecurityEnabled() to check whether 
security has been configured.
   Here is the key piece of code:
   ```java
   private static void ensureInitialized() {
     if (!isInitialized()) {
       synchronized (UserGroupInformation.class) {
         if (!isInitialized()) { // someone might have beat us
           initialize(new Configuration(), false); // plain Configuration!
         }
       }
     }
   }
   ```
   
   Here it calls new Configuration(), which only loads the defaultResources: 
core-site.xml, hdfs-site.xml, core-default.xml, hdfs-default.xml and 
ozone-default.xml.
   When OM and SCM initialize they pass the OzoneConfiguration along, but the 
S3Gateway uses a plain Configuration by default.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] Simon0806 commented on pull request #897: HDDS-3513. add ozone-site.xml by default

2020-05-05 Thread GitBox


Simon0806 commented on pull request #897:
URL: https://github.com/apache/hadoop-ozone/pull/897#issuecomment-624420003


   > Hi @Simon0806
   > Thanks for the contribution and welcome to Ozone.
   > 
   > OzoneConfiguration() calls loadDefaults() and in loadDefaults() we add 
ozone-site.xml.
   > I see this has been already taken care of by the below code.
   > 
   > ```
   >public OzoneConfiguration() {
   >  OzoneConfiguration.activate();
   >  loadDefaults();
   >}
   > ```
   > 
   > ```
   >   private void loadDefaults() {
   > try {
   >   //there could be multiple ozone-default-generated.xml files on the
   >   // classpath, which are generated by the annotation processor.
   >   // Here we add all of them to the list of the available configuration.
   >   Enumeration<URL> generatedDefaults =
   >   OzoneConfiguration.class.getClassLoader().getResources(
   >   "ozone-default-generated.xml");
   >   while (generatedDefaults.hasMoreElements()) {
   > addResource(generatedDefaults.nextElement());
   >   }
   > } catch (IOException e) {
   >   e.printStackTrace();
   > }
   > addResource("ozone-site.xml");
   >   }
   > ```
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #895: HDDS-3518: Add a freon generator to create directory tree and create …

2020-05-05 Thread GitBox


rakeshadr commented on a change in pull request #895:
URL: https://github.com/apache/hadoop-ozone/pull/895#discussion_r420522148



##
File path: hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/freon/TestHadoopDirTreeGenerator.java
##
@@ -0,0 +1,182 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.ratis.server.impl.RaftServerImpl;
+import org.apache.ratis.server.raftlog.RaftLog;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.event.Level;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.net.URI;
+
+/**
+ * Test for HadoopNestedDirTreeGenerator.
+ */
+public class TestHadoopDirTreeGenerator {
+
+  private String path;
+
+  /**
+   * Create a MiniDFSCluster for testing.
+   *
+   * @throws IOException
+   */
+  @Before
+  public void setup() {
+    path = GenericTestUtils
+        .getTempPath(TestOzoneClientKeyGenerator.class.getSimpleName());
+    GenericTestUtils.setLogLevel(RaftLog.LOG, Level.DEBUG);
+    GenericTestUtils.setLogLevel(RaftServerImpl.LOG, Level.DEBUG);
+    File baseDir = new File(path);
+    baseDir.mkdirs();
+  }
+
+  /**
+   * Shutdown MiniDFSCluster.
+   */
+  private void shutdown(MiniOzoneCluster cluster) throws IOException {
+    if (cluster != null) {
+      cluster.shutdown();
+      FileUtils.deleteDirectory(new File(path));
+    }
+  }
+
+  private MiniOzoneCluster startCluster(OzoneConfiguration conf)
+      throws Exception {
+    if (conf == null) {
+      conf = new OzoneConfiguration();
+    }
+    MiniOzoneCluster cluster = MiniOzoneCluster.newBuilder(conf)
+        .setNumDatanodes(5)
+        .build();
+
+    cluster.waitForClusterToBeReady();
+    cluster.waitTobeOutOfSafeMode();
+    return cluster;
+  }
+
+  @Test
+  public void testNestedDirTreeGeneration() throws Exception {
+    OzoneConfiguration conf = new OzoneConfiguration();
+    MiniOzoneCluster cluster = startCluster(conf);
+    ObjectStore store =

Review comment:
   Sure, I will update this in the next PR commit.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #895: HDDS-3518: Add a freon generator to create directory tree and create …

2020-05-05 Thread GitBox


rakeshadr commented on a change in pull request #895:
URL: https://github.com/apache/hadoop-ozone/pull/895#discussion_r420521952



##
File path: hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/HadoopDirTreeGenerator.java
##
@@ -0,0 +1,218 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+import java.io.IOException;
+import java.net.URI;
+import java.util.Random;
+import java.util.concurrent.Callable;
+
+/**
+ * Directory & File Generator tool to test OM performance.
+ */
+@Command(name = "dtsg",
+    aliases = "dfs-tree-generator",
+    description =
+        "Create nested directories and create given number of files in each " +
+        "dir in any dfs compatible file system.",
+    versionProvider = HddsVersionProvider.class,
+    mixinStandardHelpOptions = true,
+    showDefaultValues = true)
+public class HadoopDirTreeGenerator extends BaseFreonGenerator
+    implements Callable<Void> {
+
+  private static final Logger LOG =
+      LoggerFactory.getLogger(HadoopDirTreeGenerator.class);
+
+  @Option(names = {"-r", "--rpath"},
+      description = "Hadoop FS directory system path",

Review comment:
   Thanks @xiaoyuyao for the reviews.
   
   Yes, it's the "Hadoop FS root path". I will modify the description to make 
it clear.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #897: HDDS-3513. add ozone-site.xml by default

2020-05-05 Thread GitBox


bharatviswa504 commented on pull request #897:
URL: https://github.com/apache/hadoop-ozone/pull/897#issuecomment-624415196


   Hi @Simon0806 
   Thanks for the contribution and welcome to Ozone.
   
   OzoneConfiguration() calls loadDefaults() and in loadDefaults() we add 
ozone-site.xml. 
   I see this has already been taken care of by the code below.
   
   
   
   ```
  public OzoneConfiguration() {
    OzoneConfiguration.activate();
    loadDefaults();
  }
   ```
   
   
   ```
 private void loadDefaults() {
   try {
 //there could be multiple ozone-default-generated.xml files on the
 // classpath, which are generated by the annotation processor.
 // Here we add all of them to the list of the available configuration.
 Enumeration<URL> generatedDefaults =
 OzoneConfiguration.class.getClassLoader().getResources(
 "ozone-default-generated.xml");
 while (generatedDefaults.hasMoreElements()) {
   addResource(generatedDefaults.nextElement());
 }
   } catch (IOException e) {
 e.printStackTrace();
   }
   addResource("ozone-site.xml");
 }
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] fapifta commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


fapifta commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420517212



##
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -512,25 +523,60 @@ private void handleUnderReplicatedContainer(final ContainerInfo container,
     if (source.size() > 0) {
       final int replicationFactor = container
           .getReplicationFactor().getNumber();
-      final int delta = replicationFactor - getReplicaCount(id, replicas);
+      // Want to check if the container is mis-replicated after considering
+      // inflight add and delete.
+      // Create a new list from source (healthy replicas minus pending delete)
+      List<DatanodeDetails> targetReplicas = new ArrayList<>(source);
+      // Then add any pending additions
+      targetReplicas.addAll(replicationInFlight);
+
+      int delta = replicationFactor - getReplicaCount(id, replicas);
+      final ContainerPlacementStatus placementStatus =
+          containerPlacement.validateContainerPlacement(
+              targetReplicas, replicationFactor);
+      final int misRepDelta = placementStatus.additionalReplicaRequired();

Review comment:
   Where we have a difference is two cases with a replication factor of 3:
   - we have only 1 replica and need two more replicas in one rack (delta = 2, 
misRepDelta = 1)
   - we have 2 replicas in different racks and need one more replica but zero 
additional racks (delta = 1, misRepDelta = 0)
   
   I see this as something that still implicitly brings rack awareness into 
ReplicationManager via an arbitrary int value. That value comes from a method 
whose name - for me at least - does not suggest this subtle but fundamental 
difference at all; that is why I started to look into this and suggested 
getting rid of it. I don't see "it might be good to report this for Recon" as 
a good reason to increase the complexity: if we want this in Recon, we can add 
a new status implementation with this value and its accessor for rack-aware 
policies, and grab the value from that if it is ever needed.
   For the same reason (implicit rack awareness) I am unhappy about the check 
in the lines from L562, and suggested a way there if we can achieve it.
   
   I understand the statement "a container has a valid placement" as "we have 
the replication factor amount of replicas placed as we want them to be 
placed", and my suggestion reflects this understanding. Isn't this the right 
way to look at it? If not, why, and what do we gain by looking at it 
differently?
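
   A tiny numeric sketch of those two cases, assuming the patch's rule of 
replicasNeeded = max(delta, misRepDelta) (class and method names here are 
illustrative, not the real ReplicationManager code):
   ```java
   /** Sketch of the replica-count math discussed above. */
   final class ReplicasNeededSketch {
     private ReplicasNeededSketch() {
     }

     static int replicasNeeded(int replicationFactor, int liveReplicas,
         int misRepDelta) {
       int delta = replicationFactor - liveReplicas;
       return Math.max(delta, misRepDelta);
     }

     public static void main(String[] args) {
       // Case 1: factor 3, one live replica, one more rack required.
       System.out.println(replicasNeeded(3, 1, 1)); // delta = 2 -> prints 2
       // Case 2: factor 3, two live replicas already on different racks.
       System.out.println(replicasNeeded(3, 2, 0)); // delta = 1 -> prints 1
     }
   }
   ```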





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3513) OzoneConfiguration missing ozone-site.xml by default.

2020-05-05 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3513:
-
Labels: pull-request-available  (was: )

> OzoneConfiguration missing ozone-site.xml by default.
> -
>
> Key: HDDS-3513
> URL: https://issues.apache.org/jira/browse/HDDS-3513
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.5.0
>Reporter: Simon Su
>Assignee: Simon Su
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.6.0
>
> Attachments: add_ozone_conf.patch
>
>
> OzoneConfiguration does not load ozone-site.xml by default, which may cause 
> some issues when setting up a secure Ozone cluster.
> When we start an S3Gateway in secure mode (Kerberos), the S3Gateway HTTP 
> server uses UserGroupInformation to check whether it is in secure mode, 
> which loads a plain Configuration to check whether 
> "hadoop.security.authentication" is set to "KERBEROS". Unfortunately, the 
> default configuration only loads core-site.xml, core-default.xml, 
> hdfs-site.xml, hdfs-default.xml and ozone-default.xml, and misses 
> ozone-site.xml. That means we have to configure 
> "hadoop.security.authentication" in one of those five default config files 
> if we want to start a secure S3Gateway.
> It's better to add ozone-site.xml into OzoneConfiguration by default, so we 
> don't need to duplicate the same configuration in different places.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] Simon0806 opened a new pull request #897: HDDS-3513. add ozone-site.xml by default

2020-05-05 Thread GitBox


Simon0806 opened a new pull request #897:
URL: https://github.com/apache/hadoop-ozone/pull/897


   ## What changes were proposed in this pull request?
   
   Add ozone-site.xml to OzoneConfiguration by default. Otherwise, in secure 
mode, security items configured in ozone-site.xml will not take effect when 
starting up a secure S3 gateway, and we have to configure them in 
core-site.xml instead, which is a little confusing.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3513
   
   ## How was this patch tested?
   Set up a secure Ozone cluster and configure the security items in 
ozone-site.xml as described in the documentation, then start each service.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] fapifta commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


fapifta commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420512977



##
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -512,25 +523,60 @@ private void handleUnderReplicatedContainer(final ContainerInfo container,
     if (source.size() > 0) {
       final int replicationFactor = container
           .getReplicationFactor().getNumber();
-      final int delta = replicationFactor - getReplicaCount(id, replicas);
+      // Want to check if the container is mis-replicated after considering
+      // inflight add and delete.
+      // Create a new list from source (healthy replicas minus pending delete)
+      List<DatanodeDetails> targetReplicas = new ArrayList<>(source);
+      // Then add any pending additions
+      targetReplicas.addAll(replicationInFlight);
+
+      int delta = replicationFactor - getReplicaCount(id, replicas);
+      final ContainerPlacementStatus placementStatus =
+          containerPlacement.validateContainerPlacement(
+              targetReplicas, replicationFactor);
+      final int misRepDelta = placementStatus.additionalReplicaRequired();
+      final int replicasNeeded
+          = delta < misRepDelta ? misRepDelta : delta;
+
       final List<DatanodeDetails> excludeList = replicas.stream()
           .map(ContainerReplica::getDatanodeDetails)
           .collect(Collectors.toList());
+      excludeList.addAll(replicationInFlight);
       List<InflightAction> actionList = inflightReplication.get(id);
       if (actionList != null) {
         actionList.stream().map(r -> r.datanode)
             .forEach(excludeList::add);
       }
       final List<DatanodeDetails> selectedDatanodes = containerPlacement
-          .chooseDatanodes(excludeList, null, delta,
+          .chooseDatanodes(excludeList, null, replicasNeeded,
               container.getUsedBytes());
-
-      LOG.info("Container {} is under replicated. Expected replica count" +
-          " is {}, but found {}.", id, replicationFactor,
-          replicationFactor - delta);
-
-      for (DatanodeDetails datanode : selectedDatanodes) {
-        sendReplicateCommand(container, datanode, source);
+      if (delta > 0) {
+        LOG.info("Container {} is under replicated. Expected replica count" +
+            " is {}, but found {}.", id, replicationFactor,
+            replicationFactor - delta);
+      }
+      int newMisRepDelta = misRepDelta;
+      if (misRepDelta > 0) {
+        LOG.info("Container: {}. {}",
+            id, placementStatus.misReplicatedReason());
+        // Check if the new target nodes (original plus newly selected nodes)
+        // makes the placement policy valid.
+        targetReplicas.addAll(selectedDatanodes);
+        newMisRepDelta = containerPlacement.validateContainerPlacement(
+            targetReplicas, replicationFactor).additionalReplicaRequired();

Review comment:
   Right, I see your point... I am not sure and haven't checked, but I guess 
that based on NetworkTopology, and maybe NodeManager, we can tell how many 
healthy racks we have. If we are down to one live rack and fallback is 
enabled, we can report zero additional replicas required; if fallback is not 
enabled, we should report one additional replica required and log a warning, 
as the policy cannot be met.
   I think it would be clearer to handle that in the validation of the 
rack-aware policies if we can. What do you think?
   
   Note that we are hardcoding fallback = true at the moment at the only place 
where we call the policy factory in non-test code.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] fapifta commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


fapifta commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420509671



##
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -512,25 +523,60 @@ private void handleUnderReplicatedContainer(final ContainerInfo container,
     if (source.size() > 0) {
       final int replicationFactor = container
           .getReplicationFactor().getNumber();
-      final int delta = replicationFactor - getReplicaCount(id, replicas);
+      // Want to check if the container is mis-replicated after considering
+      // inflight add and delete.
+      // Create a new list from source (healthy replicas minus pending delete)
+      List<DatanodeDetails> targetReplicas = new ArrayList<>(source);
+      // Then add any pending additions
+      targetReplicas.addAll(replicationInFlight);
+
+      int delta = replicationFactor - getReplicaCount(id, replicas);
+      final ContainerPlacementStatus placementStatus =
+          containerPlacement.validateContainerPlacement(
+              targetReplicas, replicationFactor);
+      final int misRepDelta = placementStatus.additionalReplicaRequired();
+      final int replicasNeeded
+          = delta < misRepDelta ? misRepDelta : delta;
+
       final List<DatanodeDetails> excludeList = replicas.stream()
           .map(ContainerReplica::getDatanodeDetails)
           .collect(Collectors.toList());
+      excludeList.addAll(replicationInFlight);

Review comment:
   👍 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3513) OzoneConfiguration missing ozone-site.xml by default.

2020-05-05 Thread Simon Su (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Su updated HDDS-3513:
---
   Attachment: add_ozone_conf.patch
Affects Version/s: 0.5.0
   Status: Patch Available  (was: In Progress)

> OzoneConfiguration missing ozone-site.xml by default.
> -
>
> Key: HDDS-3513
> URL: https://issues.apache.org/jira/browse/HDDS-3513
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.5.0
>Reporter: Simon Su
>Assignee: Simon Su
>Priority: Minor
> Fix For: 0.6.0
>
> Attachments: add_ozone_conf.patch
>
>
> OzoneConfiguration does not load ozone-site.xml by default, which may cause 
> some issues when setting up a secure Ozone cluster.
> When we start an S3Gateway in secure mode (Kerberos), the S3Gateway HTTP 
> server uses UserGroupInformation to check whether it is in secure mode, 
> which loads a plain Configuration to check whether 
> "hadoop.security.authentication" is set to "KERBEROS". Unfortunately, the 
> default configuration only loads core-site.xml, core-default.xml, 
> hdfs-site.xml, hdfs-default.xml and ozone-default.xml, and misses 
> ozone-site.xml. That means we have to configure 
> "hadoop.security.authentication" in one of those five default config files 
> if we want to start a secure S3Gateway.
> It's better to add ozone-site.xml into OzoneConfiguration by default, so we 
> don't need to duplicate the same configuration in different places.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] fapifta commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


fapifta commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420504764



##
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -393,8 +396,11 @@ private boolean isContainerHealthy(final ContainerInfo container,
    */
   private boolean isContainerUnderReplicated(final ContainerInfo container,
       final Set<ContainerReplica> replicas) {
+    boolean misReplicated = !getPlacementStatus(

Review comment:
   Sounds reasonable to me, let's leave it this way. I wanted to be sure we 
considered this when it came to my mind as a possible starting state.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] fapifta commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


fapifta commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420504414



##
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/SCMCommonPlacementPolicy.java
##
@@ -198,4 +211,69 @@ public boolean hasEnoughSpace(DatanodeDetails datanodeDetails,
    */
   public abstract DatanodeDetails chooseNode(
       List<DatanodeDetails> healthyNodes);
+
+  /**
+   * Default implementation for basic placement policies that do not have a
+   * network topology. If the policy has no network topology this method
+   * should return null.
+   * @return The networkTopology for the policy or null if none is configured.
+   */
+  protected NetworkTopology getNetworkTopology() {
+    return null;
+  }
+
+  /**
+   * Default implementation to return the number of racks containers should
+   * span to meet the placement policy. For simple policies that are not rack
+   * aware we return 1 from this default implementation.
+   * @return The number of racks containers should span to meet the policy
+   */
+  protected int getRequiredRackCount() {
+    return 1;
+  }
+
+  /**
+   * This default implementation handles rack aware policies and non rack
+   * aware policies. If a future placement policy needs to check more than
+   * racks to validate the policy (eg node groups, HDFS like upgrade domain)
+   * this method should be overridden in the sub class.
+   * This method requires that subclasses which implement rack aware policies
+   * override the default methods getRequiredRackCount and getNetworkTopology.
+   * @param dns List of datanodes holding a replica of the container
+   * @param replicas The expected number of replicas
+   * @return ContainerPlacementStatus indicating if the placement policy is
+   *         met or not. Note this only considers the rack count and not the
+   *         number of replicas.
+   */
+  @Override
+  public ContainerPlacementStatus validateContainerPlacement(
+      List<DatanodeDetails> dns, int replicas) {
+    NetworkTopology topology = getNetworkTopology();
+    int requiredRacks = getRequiredRackCount();
+    if (topology == null || replicas == 1 || requiredRacks == 1) {
+      if (dns.size() > 0) {
+        // placement is always satisfied if there is at least one DN.
+        return validPlacement;
+      } else {
+        return invalidPlacement;
+      }
+    }

Review comment:
   I am afraid that anyone who gets to HDDS-3079 will not find this out, and 
maybe won't even check... It would be better to duplicate, as we feel the pain 
of this more and more, or at least to mention this method as another target 
for that JIRA ;)





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] fapifta commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


fapifta commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420503824



##
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/SCMCommonPlacementPolicy.java
##
@@ -198,4 +211,69 @@ public boolean hasEnoughSpace(DatanodeDetails datanodeDetails,
    */
   public abstract DatanodeDetails chooseNode(
       List<DatanodeDetails> healthyNodes);
+
+  /**
+   * Default implementation for basic placement policies that do not have a
+   * network topology. If the policy has no network topology this method
+   * should return null.
+   * @return The networkTopology for the policy or null if none is configured.
+   */
+  protected NetworkTopology getNetworkTopology() {

Review comment:
   Sounds right, let's do so. 👍 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] captainzmc commented on pull request #683: HDDS-3149. Make o3fs support set and get acl

2020-05-05 Thread GitBox


captainzmc commented on pull request #683:
URL: https://github.com/apache/hadoop-ozone/pull/683#issuecomment-624400438


   Hi @lokeshj1703, I see this part of the implementation in ofs (HDDS-2991) as 
well. The two ACL implementations should be consistent, and there should be 
common methods on both sides. To avoid any overlap, I will temporarily close 
this PR until HDDS-2991 is merged.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3382) OzoneManager fails to apply log index because of FNFE

2020-05-05 Thread Hanisha Koneru (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100279#comment-17100279
 ] 

Hanisha Koneru commented on HDDS-3382:
--

[~msingh], so we need to ensure that the SegmentedRaftLogCache does not evict 
the log until the SegmentedRaftLogWorker finishes the rename task?

Opened RATIS-923 to track this in Ratis.

> OzoneManager fails to apply log index because of FNFE
> -
>
> Key: HDDS-3382
> URL: https://issues.apache.org/jira/browse/HDDS-3382
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Mukul Kumar Singh
>Priority: Major
>  Labels: MiniOzoneChaosCluster
> Attachments: log.zip
>
>
> OzoneManager fails to apply log index because of FNFE
> It fails because of the following exception.
> {code}
> 2020-04-12 21:58:41,019 [omNode-1@group-D62218D261DE-SegmentedRaftLogWorker] 
> INFO  segmented.SegmentedRaftLogWorker 
> (SegmentedRaftLogWorker.java:execute(541)) - 
> omNode-1@group-D62218D261DE-SegmentedRaftLogWorker:
> Rolled log segment from 
> /tmp/chaos-2020-04-12-21-57-56-IST/MiniOzoneClusterImpl-e8cfabca-aa87-41d3-91fd-5530e06ac6ad/omNode-1/ratis/b870c9eb-edfb-36b5-b758-d62218d261de/current/log_inprogress_20626
>  to 
> /tmp/chaos-2020-04-12-21-57-56-IST/MiniOzoneClusterImpl-e8cfabca-aa87-41d3-91fd-5530e06ac6ad/omNode-1/ratis/b870c9eb-edfb-36b5-b758-d62218d261de/current/log_20626-20675
> 2020-04-12 21:58:41,019 [omNode-1@group-D62218D261DE-StateMachineUpdater] 
> ERROR segmented.SegmentedRaftLogInputStream 
> (SegmentedRaftLogInputStream.java:nextEntry(122)) - caught exception 
> initializing log_20626-206
> 75
> java.io.FileNotFoundException: 
> /tmp/chaos-2020-04-12-21-57-56-IST/MiniOzoneClusterImpl-e8cfabca-aa87-41d3-91fd-5530e06ac6ad/omNode-1/ratis/b870c9eb-edfb-36b5-b758-d62218d261de/current/log_20626-20675
>  (No such file
>  or directory)
> at java.io.FileInputStream.open0(Native Method)
> at java.io.FileInputStream.open(FileInputStream.java:195)
> at java.io.FileInputStream.<init>(FileInputStream.java:138)
> at 
> org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogReader.<init>(SegmentedRaftLogReader.java:140)
> at 
> org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogInputStream.init(SegmentedRaftLogInputStream.java:94)
> at 
> org.apache.ratis.server.raftlog.segmented.SegmentedRaftLogInputStream.nextEntry(SegmentedRaftLogInputStream.java:120)
> at 
> org.apache.ratis.server.raftlog.segmented.LogSegment.readSegmentFile(LogSegment.java:98)
> at 
> org.apache.ratis.server.raftlog.segmented.LogSegment$LogEntryLoader.load(LogSegment.java:202)
> at 
> org.apache.ratis.server.raftlog.segmented.LogSegment.loadCache(LogSegment.java:309)
> at 
> org.apache.ratis.server.raftlog.segmented.SegmentedRaftLog.get(SegmentedRaftLog.java:292)
> at 
> org.apache.ratis.server.impl.StateMachineUpdater.applyLog(StateMachineUpdater.java:219)
> at 
> org.apache.ratis.server.impl.StateMachineUpdater.run(StateMachineUpdater.java:168)
> at java.lang.Thread.run(Thread.java:748)
> 2020-04-12 21:58:41,020 [pool-105-thread-2] INFO  utils.LoadBucket 
> (LoadBucket.java:execute(135)) - Going to opType=Filesystem:DirectoryOp 
> keyName=DirectoryLoadGenerator_694945963 :mkdir
> 2020-04-12 21:58:41,018 [OMDoubleBufferFlushThread] INFO  
> file.OMDirectoryCreateResponse 
> (OMDirectoryCreateResponse.java:addToDBBatch(66)) - 
> resp:DirectoryLoadGenerator_2101090747/
> 2020-04-12 21:58:41,018 [grpc-default-executor-2] INFO  
> segmented.SegmentedRaftLogWorker 
> (SegmentedRaftLogWorker.java:rollLogSegment(396)) - 
> omNode-2@group-D62218D261DE-SegmentedRaftLogWorker: Rolling segment log-
> 20626_20675 to index:20675
> 2020-04-12 21:58:41,018 [OMDoubleBufferFlushThread] INFO  
> file.OMDirectoryCreateResponse 
> (OMDirectoryCreateResponse.java:addToDBBatch(66)) - 
> resp:DirectoryLoadGenerator_1172041571/
> 2020-04-12 21:58:41,020 [IPC Server handler 13 on 13988] INFO  
> file.OMDirectoryCreateRequest (OMDirectoryCreateRequest.java:<init>(104)) - 
> req:DirectoryLoadGenerator_694945963
> 2020-04-12 21:58:41,020 [OMDoubleBufferFlushThread] INFO  
> file.OMDirectoryCreateResponse 
> (OMDirectoryCreateResponse.java:addToDBBatch(66)) - 
> resp:DirectoryLoadGenerator_1868267720/
> 2020-04-12 21:58:41,020 [omNode-1@group-D62218D261DE-SegmentedRaftLogWorker] 
> INFO  segmented.SegmentedRaftLogWorker 
> (SegmentedRaftLogWorker.java:execute(583)) - 
> omNode-1@group-D62218D261DE-SegmentedRaftLogWorker:
> created new log segment 
> /tmp/chaos-2020-04-12-21-57-56-IST/MiniOzoneClusterImpl-e8cfabca-aa87-41d3-91fd-5530e06ac6ad/omNode-1/ratis/b870c9eb-edfb-36b5-b758-d62218d261de/current/log_inprogress_20676
> 2020-04-12 

[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #885: HDDS-3514. Fix memory leak of RaftServerImpl

2020-05-05 Thread GitBox


avijayanhwx commented on pull request #885:
URL: https://github.com/apache/hadoop-ozone/pull/885#issuecomment-624295814


   I am working with @runzhiwang on reviewing the ratis changes in RATIS-845. 
Once that is done, I will review this as well.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #880: HDDS-3327. Fix s3api create bucket BUCKET_NOT_FOUND when enable acl.

2020-05-05 Thread GitBox


xiaoyuyao commented on a change in pull request #880:
URL: https://github.com/apache/hadoop-ozone/pull/880#discussion_r420384216



##
File path: hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java
##
@@ -76,6 +78,8 @@ public void destory() throws IOException {
 
   private OzoneClient getClient(OzoneConfiguration config) throws IOException {
 try {
+  Boolean isAclEnable = config.getBoolean(

Review comment:
   Do we still need this?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #880: HDDS-3327. Fix s3api create bucket BUCKET_NOT_FOUND when enable acl.

2020-05-05 Thread GitBox


xiaoyuyao commented on a change in pull request #880:
URL: https://github.com/apache/hadoop-ozone/pull/880#discussion_r420384110



##
File path: hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/OzoneClientProducer.java
##
@@ -103,6 +107,13 @@ private OzoneClient getClient(OzoneConfiguration config) 
throws IOException {
   throw S3_AUTHINFO_CREATION_ERROR;
 }
 
+  } else {

Review comment:
   Should we move the createRemoteUser, setLoginUser outside the if 
condition and addToken when security is enabled?
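   
   A minimal sketch of that restructuring; "accessId" and "s3Token" are hypothetical stand-ins for the values the real OzoneClientProducer derives from the request signature:
   
   ```java
   import org.apache.hadoop.security.UserGroupInformation;
   import org.apache.hadoop.security.token.Token;
   import org.apache.hadoop.security.token.TokenIdentifier;
   
   final class S3LoginSketch {
     // createRemoteUser/setLoginUser run unconditionally; the token is only
     // attached when the cluster is secure ("accessId"/"s3Token" are assumed
     // inputs, not the producer's real field names).
     static void login(String accessId,
         Token<? extends TokenIdentifier> s3Token) {
       UserGroupInformation user =
           UserGroupInformation.createRemoteUser(accessId);
       if (UserGroupInformation.isSecurityEnabled()) {
         user.addToken(s3Token);
       }
       UserGroupInformation.setLoginUser(user);
     }
   }
   ```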





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3543) Remove unused joda-time

2020-05-05 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDDS-3543:
--
Description: Joda-time is defined in the pom.xml but it's not used 
anywhere. It should be easy to remove it without problems.  (was: Joda-time is 
defined in the hadoop-project/pom.xml but it's not used anywhere. It should be 
easy to remove it without problems.)

> Remove unused joda-time
> ---
>
> Key: HDDS-3543
> URL: https://issues.apache.org/jira/browse/HDDS-3543
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> Joda-time is defined in the pom.xml but it's not used anywhere. It should be 
> easy to remove it without problems.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3543) Remove unused joda-time

2020-05-05 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDDS-3543:
--
Summary: Remove unused joda-time  (was: CLONE - Remove unused joda-time)

> Remove unused joda-time
> ---
>
> Key: HDDS-3543
> URL: https://issues.apache.org/jira/browse/HDDS-3543
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> Joda-time is defined in the hadoop-project/pom.xml but it's not used 
> anywhere. It should be easy to remove it without problems.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Moved] (HDDS-3543) CLONE - Remove unused joda-time

2020-05-05 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang moved HADOOP-17031 to HDDS-3543:


 Key: HDDS-3543  (was: HADOOP-17031)
Workflow: patch-available, re-open possible  (was: no-reopen-closed, 
patch-avail)
 Project: Hadoop Distributed Data Store  (was: Hadoop Common)

> CLONE - Remove unused joda-time
> ---
>
> Key: HDDS-3543
> URL: https://issues.apache.org/jira/browse/HDDS-3543
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> Joda-time is defined in the hadoop-project/pom.xml but it's not used 
> anywhere. It should be easy to remove it without problems.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #780: HDDS-2800. tools/_index.md translation

2020-05-05 Thread GitBox


xiaoyuyao commented on a change in pull request #780:
URL: https://github.com/apache/hadoop-ozone/pull/780#discussion_r420379800



##
File path: hadoop-hdds/docs/content/tools/_index.zh.md
##
@@ -0,0 +1,63 @@
+---
+title: "工具"
+date: "2017-10-10"
+summary: Ozone 为开发者提供一系列方便的工具,本页给出了命令行工具的简要列表。
+menu:
+   main:
+  weight: 8
+---
+
+
+
+Ozone 有一系列管理 Ozone 的命令行工具。
+
+所有的命令都通过 ```ozone``` 脚本调用。
+
+守护进程命令:
+
+   * **scm** - 启动或停止 Storage Container Manager。
+   * **om** -  启动或停止 Ozone Manager。
+   * **datanode** - 启动或停止数据节点。
+   stopped.
+   * **s3g** -
+
+客户端命令:
+
+   * **sh** -  操作 Ozone 中的卷、桶和键的主要命令。
+   * **fs** - 运行一个文件系统命令(类似于 `hdfs dfs`)
+   * **version** - 打印 Ozone 和 HDDS 的版本。
+
+
+管理命令:
+
+   * **admin** - 供管理员和开发者使用的 Ozone 各组件管理命令。
+   * **classpath** - 打印包含 Hadoop jar 包和其它必要库的 CLASSPATH。
+   * **dtutil**- 进行和代理 token 有关的操作。
+   * **envvars** - 列出 Hadoop 的环境变量。
+   * **getconf** -  从 Ozone 配置中读取特定配置值。
+   * **jmxget**  - 从 NameNode 或 DataNode 中获取 JMX 导出的值。

Review comment:
从 NameNode 或 DataNode =》Ozone 服务





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #780: HDDS-2800. tools/_index.md translation

2020-05-05 Thread GitBox


xiaoyuyao commented on a change in pull request #780:
URL: https://github.com/apache/hadoop-ozone/pull/780#discussion_r420379286



##
File path: hadoop-hdds/docs/content/tools/_index.zh.md
##
@@ -0,0 +1,63 @@
+---
+title: "工具"
+date: "2017-10-10"
+summary: Ozone 为开发者提供一系列方便的工具,本页给出了命令行工具的简要列表。
+menu:
+   main:
+  weight: 8
+---
+
+
+
+Ozone 有一系列管理 Ozone 的命令行工具。
+
+所有的命令都通过 ```ozone``` 脚本调用。
+
+守护进程命令:
+
+   * **scm** - 启动或停止 Storage Container Manager。
+   * **om** -  启动或停止 Ozone Manager。
+   * **datanode** - 启动或停止数据节点。
+   stopped.
+   * **s3g** -
+
+客户端命令:
+
+   * **sh** -  操作 Ozone 中的卷、桶和键的主要命令。
+   * **fs** - 运行一个文件系统命令(类似于 `hdfs dfs`)
+   * **version** - 打印 Ozone 和 HDDS 的版本。
+
+
+管理命令:
+
+   * **admin** - 供管理员和开发者使用的 Ozone 各组件管理命令。
+   * **classpath** - 打印包含 Hadoop jar 包和其它必要库的 CLASSPATH。
+   * **dtutil**- 进行和代理 token 有关的操作。

Review comment:
   代理 token=》token





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #865: HDDS-2969. Implement ofs://: Add contract test

2020-05-05 Thread GitBox


smengcl commented on a change in pull request #865:
URL: https://github.com/apache/hadoop-ozone/pull/865#discussion_r420378927



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/rooted/ITestRootedOzoneContractRootDir.java
##
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone.contract.rooted;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+/**
+ * Ozone contract test for ROOT directory operations.
+ */
+public class ITestRootedOzoneContractRootDir extends
+AbstractContractRootDirectoryTest {
+
+  @BeforeClass
+  public static void createCluster() throws IOException {
+RootedOzoneContract.createCluster();
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+RootedOzoneContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+return new RootedOzoneContract(conf);
+  }
+
+  @Override
+  public void testRmEmptyRootDirNonRecursive() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testRmRootRecursive() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testListEmptyRootDirectory() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testMkDirDepth1() {

Review comment:
   Because OFS doesn't support deleting volumes (or buckets) at the moment, 
even if they are empty.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #780: HDDS-2800. tools/_index.md translation

2020-05-05 Thread GitBox


xiaoyuyao commented on a change in pull request #780:
URL: https://github.com/apache/hadoop-ozone/pull/780#discussion_r420378966



##
File path: hadoop-hdds/docs/content/tools/_index.zh.md
##
@@ -0,0 +1,63 @@
+---
+title: "工具"
+date: "2017-10-10"
+summary: Ozone 为开发者提供一系列方便的工具,本页给出了命令行工具的简要列表。
+menu:
+   main:
+  weight: 8
+---
+
+
+
+Ozone 有一系列管理 Ozone 的命令行工具。
+
+所有的命令都通过 ```ozone``` 脚本调用。
+
+守护进程命令:
+
+   * **scm** - 启动或停止 Storage Container Manager。
+   * **om** -  启动或停止 Ozone Manager。
+   * **datanode** - 启动或停止数据节点。
+   stopped.
+   * **s3g** -
+
+客户端命令:
+
+   * **sh** -  操作 Ozone 中的卷、桶和键的主要命令。
+   * **fs** - 运行一个文件系统命令(类似于 `hdfs dfs`)
+   * **version** - 打印 Ozone 和 HDDS 的版本。
+
+
+管理命令:
+
+   * **admin** - 供管理员和开发者使用的 Ozone 各组件管理命令。

Review comment:
各组件管理命令=>管理员命令





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #865: HDDS-2969. Implement ofs://: Add contract test

2020-05-05 Thread GitBox


smengcl commented on a change in pull request #865:
URL: https://github.com/apache/hadoop-ozone/pull/865#discussion_r420378374



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/rooted/ITestRootedOzoneContractRootDir.java
##
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone.contract.rooted;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+/**
+ * Ozone contract test for ROOT directory operations.
+ */
+public class ITestRootedOzoneContractRootDir extends
+AbstractContractRootDirectoryTest {
+
+  @BeforeClass
+  public static void createCluster() throws IOException {
+RootedOzoneContract.createCluster();
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+RootedOzoneContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+return new RootedOzoneContract(conf);
+  }
+
+  @Override
+  public void testRmEmptyRootDirNonRecursive() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testRmRootRecursive() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testListEmptyRootDirectory() {

Review comment:
   Yes. I see `testvol1`, which is specified in 
`RootedOzoneContract#getTestPath`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #523: HDDS-2796. beyond/_index.md translation

2020-05-05 Thread GitBox


xiaoyuyao commented on pull request #523:
URL: https://github.com/apache/hadoop-ozone/pull/523#issuecomment-624281022


   LGTM, 1+.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao edited a comment on pull request #523: HDDS-2796. beyond/_index.md translation

2020-05-05 Thread GitBox


xiaoyuyao edited a comment on pull request #523:
URL: https://github.com/apache/hadoop-ozone/pull/523#issuecomment-624281022


   LGTM, +1.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 edited a comment on pull request #742: HDDS-3217. Datanode startup is slow due to iterating container DB 2-3 times.

2020-05-05 Thread GitBox


bharatviswa504 edited a comment on pull request #742:
URL: https://github.com/apache/hadoop-ozone/pull/742#issuecomment-624250696


   Thank you @adoroszlai for the offline discussion. I have removed adding a 
new layout version and updated the code to consider both old and new containers.
   
   Added a test for ContainerReader to check the loading of containers when 
metadata is in the DB and when it is not.
   
   @hanishakoneru Updated the patch from last time; have a look into it.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #742: HDDS-3217. Datanode startup is slow due to iterating container DB 2-3 times.

2020-05-05 Thread GitBox


bharatviswa504 commented on pull request #742:
URL: https://github.com/apache/hadoop-ozone/pull/742#issuecomment-624250696


   Thank you @adoroszlai for the offline discussion. I have removed adding a 
new layout version and updated the code to consider both old and new containers.
   
   @hanishakoneru Updated the patch from last time; have a look into it.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #691: HDDS-3226. Tracing Ozone Manager DB write batch operations.

2020-05-05 Thread GitBox


xiaoyuyao commented on pull request #691:
URL: https://github.com/apache/hadoop-ozone/pull/691#issuecomment-624244034


   @bharatviswa504 all the tracing logic added to the double buffer now honors 
the hdds.tracing.enabled configuration; let me know if you have any additional 
comments.
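   
   A minimal sketch of that gating pattern, assuming the OpenTracing GlobalTracer that the tracing helpers build on (the span name and default value here are illustrative, not the exact ones in the patch):
   
   ```java
   import io.opentracing.Scope;
   import io.opentracing.util.GlobalTracer;
   
   import org.apache.hadoop.hdds.conf.OzoneConfiguration;
   
   final class TracingGateSketch {
     static void tracedFlush(OzoneConfiguration conf, Runnable flush) {
       if (!conf.getBoolean("hdds.tracing.enabled", true)) {
         flush.run(); // no span overhead when tracing is disabled
         return;
       }
       try (Scope ignored = GlobalTracer.get()
           .buildSpan("OMDoubleBuffer.flush").startActive(true)) {
         flush.run(); // the flush is recorded under the active span
       }
     }
   }
   ```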



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #865: HDDS-2969. Implement ofs://: Add contract test

2020-05-05 Thread GitBox


xiaoyuyao commented on a change in pull request #865:
URL: https://github.com/apache/hadoop-ozone/pull/865#discussion_r420335990



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/rooted/ITestRootedOzoneContractRootDir.java
##
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone.contract.rooted;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+/**
+ * Ozone contract test for ROOT directory operations.
+ */
+public class ITestRootedOzoneContractRootDir extends
+AbstractContractRootDirectoryTest {
+
+  @BeforeClass
+  public static void createCluster() throws IOException {
+RootedOzoneContract.createCluster();
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+RootedOzoneContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+return new RootedOzoneContract(conf);
+  }
+
+  @Override
+  public void testRmEmptyRootDirNonRecursive() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testRmRootRecursive() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testListEmptyRootDirectory() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testMkDirDepth1() {

Review comment:
   This should be like creating and deleting an empty volume? Why do we fail 
here?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #865: HDDS-2969. Implement ofs://: Add contract test

2020-05-05 Thread GitBox


xiaoyuyao commented on a change in pull request #865:
URL: https://github.com/apache/hadoop-ozone/pull/865#discussion_r420335487



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/rooted/ITestRootedOzoneContractRootDir.java
##
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone.contract.rooted;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+/**
+ * Ozone contract test for ROOT directory operations.
+ */
+public class ITestRootedOzoneContractRootDir extends
+AbstractContractRootDirectoryTest {
+
+  @BeforeClass
+  public static void createCluster() throws IOException {
+RootedOzoneContract.createCluster();
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+RootedOzoneContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+return new RootedOzoneContract(conf);
+  }
+
+  @Override
+  public void testRmEmptyRootDirNonRecursive() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testRmRootRecursive() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testListEmptyRootDirectory() {

Review comment:
   Are you seeing that the root is not empty, even though the test claims to 
list an empty root dir?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3449) Enable TestSCMPipelineMetrics test cases

2020-05-05 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan resolved HDDS-3449.
-
Resolution: Fixed

Thank you for fixing this issue [~cku328]. I have merged the GitHub PR.

> Enable TestSCMPipelineMetrics test cases
> 
>
> Key: HDDS-3449
> URL: https://issues.apache.org/jira/browse/HDDS-3449
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.5.0
>Reporter: Nanda kumar
>Assignee: Neo Yang
>Priority: Major
>  Labels: pull-request-available
>
> Fix and enable TestSCMPipelineMetrics test cases



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #892: HDDS-3449. Enable TestSCMPipelineMetrics test cases

2020-05-05 Thread GitBox


avijayanhwx commented on pull request #892:
URL: https://github.com/apache/hadoop-ozone/pull/892#issuecomment-624241065


   Thank you for the explanation @cku328.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #895: HDDS-3518: Add a freon generator to create directory tree and create …

2020-05-05 Thread GitBox


xiaoyuyao commented on a change in pull request #895:
URL: https://github.com/apache/hadoop-ozone/pull/895#discussion_r420316026



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/HadoopDirTreeGenerator.java
##
@@ -0,0 +1,218 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.cli.HddsVersionProvider;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+import java.io.IOException;
+import java.net.URI;
+import java.util.Random;
+import java.util.concurrent.Callable;
+
+/**
+ * Directory & File Generator tool to test OM performance.
+ */
+@Command(name = "dtsg",
+aliases = "dfs-tree-generator",
+description =
+"Create nested directories and create given number of files in each " +
+"dir in any dfs compatible file system.",
+versionProvider = HddsVersionProvider.class,
+mixinStandardHelpOptions = true,
+showDefaultValues = true)
+public class HadoopDirTreeGenerator extends BaseFreonGenerator
+implements Callable<Void> {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(HadoopDirTreeGenerator.class);
+
+  @Option(names = {"-r", "--rpath"},
+  description = "Hadoop FS directory system path",

Review comment:
   "Hadoop FS directory system path" is a bit confusing, do you mean 
"Hadoop FS root path"?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #895: HDDS-3518: Add a freon generator to create directory tree and create …

2020-05-05 Thread GitBox


xiaoyuyao commented on a change in pull request #895:
URL: https://github.com/apache/hadoop-ozone/pull/895#discussion_r420312123



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/freon/TestHadoopDirTreeGenerator.java
##
@@ -0,0 +1,182 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+package org.apache.hadoop.ozone.freon;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.apache.ratis.server.impl.RaftServerImpl;
+import org.apache.ratis.server.raftlog.RaftLog;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Test;
+import org.slf4j.event.Level;
+
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.net.URI;
+
+/**
+ * Test for HadoopNestedDirTreeGenerator.
+ */
+public class TestHadoopDirTreeGenerator {
+
+  private String path;
+
+  /**
+   * Create a MiniDFSCluster for testing.
+   *
+   * @throws IOException
+   */
+  @Before
+  public void setup() {
+path = GenericTestUtils
+.getTempPath(TestOzoneClientKeyGenerator.class.getSimpleName());
+GenericTestUtils.setLogLevel(RaftLog.LOG, Level.DEBUG);
+GenericTestUtils.setLogLevel(RaftServerImpl.LOG, Level.DEBUG);
+File baseDir = new File(path);
+baseDir.mkdirs();
+  }
+
+  /**
+   * Shutdown MiniDFSCluster.
+   */
+  private void shutdown(MiniOzoneCluster cluster) throws IOException {
+if (cluster != null) {
+  cluster.shutdown();
+  FileUtils.deleteDirectory(new File(path));
+}
+  }
+
+  private MiniOzoneCluster startCluster(OzoneConfiguration conf)
+  throws Exception {
+if (conf == null) {
+  conf = new OzoneConfiguration();
+}
+MiniOzoneCluster cluster = MiniOzoneCluster.newBuilder(conf)
+.setNumDatanodes(5)
+.build();
+
+cluster.waitForClusterToBeReady();
+cluster.waitTobeOutOfSafeMode();
+return cluster;
+  }
+
+  @Test
+  public void testNestedDirTreeGeneration() throws Exception {
+OzoneConfiguration conf = new OzoneConfiguration();
+MiniOzoneCluster cluster = startCluster(conf);
+ObjectStore store =

Review comment:
   Can we wrap this with try {} finally {shutdown (cluster)} so that the 
cluster is ensured to shutdown if any verification failed? 
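   
   A sketch of that shape, reusing the startCluster/shutdown helpers from the quoted test class:
   
   ```java
   @Test
   public void testNestedDirTreeGeneration() throws Exception {
     OzoneConfiguration conf = new OzoneConfiguration();
     MiniOzoneCluster cluster = startCluster(conf);
     try {
       // ... create the directory tree and run all verifications here ...
     } finally {
       shutdown(cluster); // runs even when an assertion above fails
     }
   }
   ```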





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel commented on pull request #891: HDDS-3504. Topology Test is Intermitent. Reduce container size to 256MB and blocksize to 64MB

2020-05-05 Thread GitBox


vivekratnavel commented on pull request #891:
URL: https://github.com/apache/hadoop-ozone/pull/891#issuecomment-624218495


   @sodonnel I am happy to merge this change! +1



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #894: HDDS-3501. OzoneFileStatus should not extend FileStatus

2020-05-05 Thread GitBox


xiaoyuyao commented on a change in pull request #894:
URL: https://github.com/apache/hadoop-ozone/pull/894#discussion_r420304642



##
File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java
##
@@ -454,19 +442,23 @@ public BasicKeyInfo next() {
 }
   }
 
-  private FileStatusAdapter toFileStatusAdapter(OzoneFileStatus status) {
+  private FileStatusAdapter toFileStatusAdapter(OzoneFileStatus status,
+  String owner, URI defaultUri, Path workingDir) {
+OmKeyInfo keyInfo = status.getKeyInfo();
+short replication = (short) keyInfo.getFactor().getNumber();
 return new FileStatusAdapter(
-status.getLen(),
-status.getPath(),
+keyInfo.getDataSize(),
+new Path(OZONE_URI_DELIMITER + keyInfo.getKeyName())
+.makeQualified(defaultUri, workingDir),
 status.isDirectory(),
-status.getReplication(),
+replication,
 status.getBlockSize(),
-status.getModificationTime(),
-status.getAccessTime(),
-status.getPermission().toShort(),
-status.getOwner(),
-status.getGroup(),
-status.getPath(),
+keyInfo.getModificationTime(),
+keyInfo.getModificationTime(),
+status.isDirectory() ? (short) 00777 : (short) 00666,

Review comment:
   How do we plan to map the dir/file permissions into Ozone ACLs? They are 
currently hard-coded to 777 for dirs and 666 for files.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] hanishakoneru commented on pull request #770: HDDS-3178. Add unit tests for OMGetDelegationToken Request and Response

2020-05-05 Thread GitBox


hanishakoneru commented on pull request #770:
URL: https://github.com/apache/hadoop-ozone/pull/770#issuecomment-624193955


   Thanks @cxorm for working on this. I will merge this PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3178) Add unit tests for OMGetDelegationToken Request and Response

2020-05-05 Thread Hanisha Koneru (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru resolved HDDS-3178.
--
Resolution: Fixed

> Add unit tests for OMGetDelegationToken Request and Response
> 
>
> Key: HDDS-3178
> URL: https://issues.apache.org/jira/browse/HDDS-3178
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: OMHATest, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The OM Client Request OMGetDelegationTokenRequest should have unit tests 
> similar to other client requests/ responses to test its functionality.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cku328 commented on pull request #812: HDDS-3161. Block illegal characters when creating keys.

2020-05-05 Thread GitBox


cku328 commented on pull request #812:
URL: https://github.com/apache/hadoop-ozone/pull/812#issuecomment-624182313


   Updated the exception message and resolved the conflict (latest master branch, #399).
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3542) ozone chunkinfo CLI cannot connect to OM when run from NON OM leader client node

2020-05-05 Thread Hanisha Koneru (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17100083#comment-17100083
 ] 

Hanisha Koneru commented on HDDS-3542:
--

[~sadanand_shenoy] This looks like a configuration issue. If the client node's 
config file (ozone-site.xml) doesn't have the OM addresses, the client will 
fall back to the default OM address of 0.0.0.0:9862.
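
A minimal sketch of the lookup involved, assuming the standard ozone.om.address key (the real client resolves the host and port through its own utilities, so the literal default below is only illustrative):

{code:java}
import org.apache.hadoop.hdds.conf.OzoneConfiguration;
import org.apache.hadoop.ozone.om.OMConfigKeys;

/** Sketch: with no ozone.om.address on the client classpath, the lookup
 *  degenerates to the 0.0.0.0:9862 target seen in the retry log below. */
public final class OmAddressCheck {
  public static void main(String[] args) {
    // OzoneConfiguration picks up ozone-site.xml from the classpath, if present.
    OzoneConfiguration conf = new OzoneConfiguration();
    String omAddress =
        conf.get(OMConfigKeys.OZONE_OM_ADDRESS_KEY, "0.0.0.0:9862");
    System.out.println("OM address the client will dial: " + omAddress);
  }
}
{code}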

> ozone chunkinfo CLI cannot connect to OM when run from NON OM leader client 
> node
> 
>
> Key: HDDS-3542
> URL: https://issues.apache.org/jira/browse/HDDS-3542
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.6.0
>Reporter: Sadanand Shenoy
>Assignee: Sadanand Shenoy
>Priority: Major
>
> When ozone debug chunkinfo command is run from a client node where OM leader 
> is not present, the operation fails.
>  When run from the client node where OM leader is present, it works fine
> {quote}/opt/cloudera/parcels/CDH/bin/ozone debug chunkinfo 
> o3://ozone1/vol1/buck1/file1 20/04/21 11:04:02 INFO ipc.Client: Retrying 
> connect to server: 0.0.0.0/0.0.0.0:9862. Already tried 0 time(s); retry 
> policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 
> MILLISECONDS) 20/04/21 11:04:03 INFO ipc.Client: Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 
> MILLISECONDS) 20/04/21 11:04:04 INFO ipc.Client: Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 
> MILLISECONDS) 20/04/21 11:04:05 INFO ipc.Client: Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 
> MILLISECONDS) 20/04/21 11:04:06 INFO ipc.Client: Retrying connect to server: 
> 0.0.0.0/0.0.0.0:9862. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 
> MILLISECONDS)
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 edited a comment on pull request #855: HDDS-3474. Create transactionInfo Table in OmMetadataManager.

2020-05-05 Thread GitBox


bharatviswa504 edited a comment on pull request #855:
URL: https://github.com/apache/hadoop-ozone/pull/855#issuecomment-624165031


   Hi @elek 
   Any more comments? If there are no more comments by EOD I will commit this; 
any further concerns I can address in later Jiras.
   
   I want to get this in, as it is blocking progress on further Jiras.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #855: HDDS-3474. Create transactionInfo Table in OmMetadataManager.

2020-05-05 Thread GitBox


bharatviswa504 commented on pull request #855:
URL: https://github.com/apache/hadoop-ozone/pull/855#issuecomment-624165031


   Hi @elek 
   Any more comments?
   
   I want to get this in, as it is blocking progress on further Jiras.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


sodonnel commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420203008



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -512,25 +523,60 @@ private void handleUnderReplicatedContainer(final 
ContainerInfo container,
   if (source.size() > 0) {
 final int replicationFactor = container
 .getReplicationFactor().getNumber();
-final int delta = replicationFactor - getReplicaCount(id, replicas);
+// Want to check if the container is mis-replicated after considering
+// inflight add and delete.
+// Create a new list from source (healthy replicas minus pending delete)
+List<DatanodeDetails> targetReplicas = new ArrayList<>(source);
+// Then add any pending additions
+targetReplicas.addAll(replicationInFlight);
+
+int delta = replicationFactor - getReplicaCount(id, replicas);
+final ContainerPlacementStatus placementStatus =
+containerPlacement.validateContainerPlacement(
+targetReplicas, replicationFactor);
+final int misRepDelta = placementStatus.additionalReplicaRequired();
+final int replicasNeeded
+= delta < misRepDelta ? misRepDelta : delta;
+
 final List<DatanodeDetails> excludeList = replicas.stream()
 .map(ContainerReplica::getDatanodeDetails)
 .collect(Collectors.toList());
+excludeList.addAll(replicationInFlight);
 List<InflightAction> actionList = inflightReplication.get(id);
 if (actionList != null) {
   actionList.stream().map(r -> r.datanode)
   .forEach(excludeList::add);
 }
 final List<DatanodeDetails> selectedDatanodes = containerPlacement
-.chooseDatanodes(excludeList, null, delta,
+.chooseDatanodes(excludeList, null, replicasNeeded,
 container.getUsedBytes());
-
-LOG.info("Container {} is under replicated. Expected replica count" +
-" is {}, but found {}.", id, replicationFactor,
-replicationFactor - delta);
-
-for (DatanodeDetails datanode : selectedDatanodes) {
-  sendReplicateCommand(container, datanode, source);
+if (delta > 0) {
+  LOG.info("Container {} is under replicated. Expected replica count" +
+  " is {}, but found {}.", id, replicationFactor,
+  replicationFactor - delta);
+}
+int newMisRepDelta = misRepDelta;
+if (misRepDelta > 0) {
+  LOG.info("Container: {}. {}",
+  id, placementStatus.misReplicatedReason());
+  // Check if the new target nodes (original plus newly selected nodes)
+  // makes the placement policy valid.
+  targetReplicas.addAll(selectedDatanodes);
+  newMisRepDelta = containerPlacement.validateContainerPlacement(
+  targetReplicas, replicationFactor).additionalReplicaRequired();

Review comment:
   > The rack-aware policy's chooseDatanodes() method we use ensures that 
based on the excluded nodes list we get back a proper list
   
   This is where it gets tricky. There is a fallback (certainly in the pipeline 
placement policy, but I have not checked ContainerPlacementRackAware in 
complete detail) where, if the placement policy cannot select enough racks, it 
will give back nodes which do not meet the placement policy.
   
   This is to ensure (in the case of PipelinePlacementPolicy) that the cluster 
can keep on operating if only 1 rack is available for some reason. This should 
be the case for closed container placement too: if you are down to 1 replica, 
it is better that it gets replicated onto more nodes on the same rack than to 
risk losing the data.
   
   We probably need a future change to scrub the bad pipelines if this 
fallback happens, while the replication manager should deal with closed 
containers.
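   
   A schematic of that fallback, with simplified stand-in types rather than the real SCM placement classes:
   
   ```java
   import java.util.ArrayList;
   import java.util.List;
   
   final class PlacementFallbackSketch {
     // Prefer rack-spread targets; degrade to same-rack nodes instead of
     // failing when racks run out. Purely illustrative of the behaviour
     // described above.
     static List<String> choose(List<String> nodesOnOtherRacks,
         List<String> nodesOnSameRack, int needed) {
       List<String> picked = new ArrayList<>();
       for (String node : nodesOnOtherRacks) { // policy-conforming picks first
         if (picked.size() == needed) {
           return picked;
         }
         picked.add(node);
       }
       for (String node : nodesOnSameRack) { // fallback: violates placement,
         if (picked.size() == needed) {      // but keeps the data replicated
           return picked;
         }
         picked.add(node);
       }
       return picked; // may still be short of 'needed'
     }
   }
   ```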





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] lokeshj1703 commented on pull request #890: HDDS-3510. Upgrade Ratis to 0.6.0-2816ea6-SNAPSHOT

2020-05-05 Thread GitBox


lokeshj1703 commented on pull request #890:
URL: https://github.com/apache/hadoop-ozone/pull/890#issuecomment-624115299


   @elek It might be related to HDDS-3223, which added a copyLarge function in 
KeyInputStream and a corresponding test in TestKeyInputStream.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek commented on pull request #890: HDDS-3510. Upgrade Ratis to 0.6.0-2816ea6-SNAPSHOT

2020-05-05 Thread GitBox


elek commented on pull request #890:
URL: https://github.com/apache/hadoop-ozone/pull/890#issuecomment-624097739


   @lokeshj1703 Do you have any idea about the `TestKeyInputStream.testSeek` 
failure? We haven't seen it on master. Is it related to the version bump?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2665) Implement new Ozone Filesystem scheme ofs://

2020-05-05 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099950#comment-17099950
 ] 

Siyao Meng commented on HDDS-2665:
--

Hi [~linyiqun], yes, we do plan to use OFS as the default. Once OFS is *fully* 
implemented we will mark o3fs as deprecated, but probably still keep it around 
for a while.

By the way, the o3fs and ofs schemes can currently coexist on the client (via 
fs.ofs.impl and fs.o3fs.impl), so we don't need to worry about constantly 
changing the config when switching from one to the other and back.
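
A sketch of such a client configuration; the implementation class names below are the ones used on the ofs branch and should be treated as an assumption until release:

{code:java}
import org.apache.hadoop.conf.Configuration;

/** Registers both schemes side by side on one client. */
public final class OzoneSchemesSketch {
  public static Configuration withBothSchemes() {
    Configuration conf = new Configuration();
    conf.set("fs.o3fs.impl", "org.apache.hadoop.fs.ozone.OzoneFileSystem");
    conf.set("fs.ofs.impl", "org.apache.hadoop.fs.ozone.RootedOzoneFileSystem");
    // o3fs://bucket.volume/key and ofs://om-host/volume/bucket/key paths can
    // then be used side by side without touching the config again.
    return conf;
  }
}
{code}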

> Implement new Ozone Filesystem scheme ofs://
> 
>
> Key: HDDS-2665
> URL: https://issues.apache.org/jira/browse/HDDS-2665
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: Design ofs v1.pdf
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Implement a new scheme for Ozone Filesystem where all volumes (and buckets) 
> can be access from a single root.
> Alias: Rooted Ozone Filesystem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


sodonnel commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420152320



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -512,25 +523,60 @@ private void handleUnderReplicatedContainer(final 
ContainerInfo container,
   if (source.size() > 0) {
 final int replicationFactor = container
 .getReplicationFactor().getNumber();
-final int delta = replicationFactor - getReplicaCount(id, replicas);
+// Want to check if the container is mis-replicated after considering
+// inflight add and delete.
+// Create a new list from source (healthy replicas minus pending delete)
+List<DatanodeDetails> targetReplicas = new ArrayList<>(source);
+// Then add any pending additions
+targetReplicas.addAll(replicationInFlight);
+
+int delta = replicationFactor - getReplicaCount(id, replicas);
+final ContainerPlacementStatus placementStatus =
+containerPlacement.validateContainerPlacement(
+targetReplicas, replicationFactor);
+final int misRepDelta = placementStatus.additionalReplicaRequired();
+final int replicasNeeded
+= delta < misRepDelta ? misRepDelta : delta;
+
 final List<DatanodeDetails> excludeList = replicas.stream()
 .map(ContainerReplica::getDatanodeDetails)
 .collect(Collectors.toList());
+excludeList.addAll(replicationInFlight);

Review comment:
   Well spotted. I think I can remove the logic after this line, as I 
have already created that list earlier in the method:
   
   ```
   if (actionList != null) {
 actionList.stream().map(r -> r.datanode)
 .forEach(excludeList::add);
   }
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


sodonnel commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420147161



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -512,25 +523,60 @@ private void handleUnderReplicatedContainer(final 
ContainerInfo container,
   if (source.size() > 0) {
 final int replicationFactor = container
 .getReplicationFactor().getNumber();
-final int delta = replicationFactor - getReplicaCount(id, replicas);
+// Want to check if the container is mis-replicated after considering
+// inflight add and delete.
+// Create a new list from source (healthy replicas minus pending delete)
+List<DatanodeDetails> targetReplicas = new ArrayList<>(source);
+// Then add any pending additions
+targetReplicas.addAll(replicationInFlight);
+
+int delta = replicationFactor - getReplicaCount(id, replicas);
+final ContainerPlacementStatus placementStatus =
+containerPlacement.validateContainerPlacement(
+targetReplicas, replicationFactor);
+final int misRepDelta = placementStatus.additionalReplicaRequired();

Review comment:
   The PlacementStatus.additionalReplicaRequired() method is intended to 
state how many replicas are needed to meet the placement policy, not the 
replication factor. In other places (e.g. Recon) it might be useful to be able 
to get the "additional racks required" separately from under-replication.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


sodonnel commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420147161



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -512,25 +523,60 @@ private void handleUnderReplicatedContainer(final 
ContainerInfo container,
   if (source.size() > 0) {
 final int replicationFactor = container
 .getReplicationFactor().getNumber();
-final int delta = replicationFactor - getReplicaCount(id, replicas);
+// Want to check if the container is mis-replicated after considering
+// inflight add and delete.
+// Create a new list from source (healthy replicas minus pending delete)
+List<DatanodeDetails> targetReplicas = new ArrayList<>(source);
+// Then add any pending additions
+targetReplicas.addAll(replicationInFlight);
+
+int delta = replicationFactor - getReplicaCount(id, replicas);
+final ContainerPlacementStatus placementStatus =
+containerPlacement.validateContainerPlacement(
+targetReplicas, replicationFactor);
+final int misRepDelta = placementStatus.additionalReplicaRequired();

Review comment:
   The PlacementStatus.additionalReplicaRequired() method is intended to 
state how many replicas are needed to meet the placement policy, not the 
replication factor. In other places (e.g. Recon) it might be useful to be able 
to get the "additional racks required" separately from under-replication.
   I've added extra details in the PlacementStatus interface JavaDoc for 
additionalReplicaRequired to make that clearer.
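   
   A usage sketch of those semantics, with ContainerPlacementStatus as introduced by this PR; the max-combination mirrors the hunk above:
   
   ```java
   /** Combine the placement delta with the plain replication delta. */
   static int replicasNeeded(ContainerPlacementStatus placementStatus,
       int replicationFactor, int healthyReplicaCount) {
     int misRepDelta = placementStatus.additionalReplicaRequired(); // racks only
     int delta = replicationFactor - healthyReplicaCount;           // factor only
     return Math.max(delta, misRepDelta); // what ReplicationManager schedules
   }
   ```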





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #865: HDDS-2969. Implement ofs://: Add contract test

2020-05-05 Thread GitBox


smengcl commented on a change in pull request #865:
URL: https://github.com/apache/hadoop-ozone/pull/865#discussion_r420143047



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/rooted/RootedOzoneContract.java
##
@@ -0,0 +1,124 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone.contract.rooted;
+
+import java.io.IOException;
+import java.util.concurrent.TimeUnit;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+import org.apache.hadoop.hdds.conf.DatanodeRatisServerConfig;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.ratis.RatisHelper;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.om.OMConfigKeys;
+
+import org.junit.Assert;
+
+/**
+ * The contract of Rooted Ozone FileSystem (OFS).
+ */
+class RootedOzoneContract extends AbstractFSContract {
+
+  private static MiniOzoneCluster cluster;
+  private static final String CONTRACT_XML = "contract/ozone.xml";
+
+  RootedOzoneContract(Configuration conf) {
+super(conf);
+//insert the base features
+addConfResource(CONTRACT_XML);
+  }
+
+  @Override
+  public String getScheme() {
+return OzoneConsts.OZONE_OFS_URI_SCHEME;
+  }
+
+  @Override
+  public Path getTestPath() {
+return new Path("/testvol1/testbucket1/test");
+  }
+
+  public static void createCluster() throws IOException {
+OzoneConfiguration conf = new OzoneConfiguration();
+conf.setTimeDuration(
+RatisHelper.HDDS_DATANODE_RATIS_SERVER_PREFIX_KEY + "." +
+DatanodeRatisServerConfig.RATIS_SERVER_REQUEST_TIMEOUT_KEY,
+3, TimeUnit.SECONDS);
+conf.setTimeDuration(
+RatisHelper.HDDS_DATANODE_RATIS_SERVER_PREFIX_KEY + "." +
+DatanodeRatisServerConfig.
+RATIS_SERVER_WATCH_REQUEST_TIMEOUT_KEY,
+10, TimeUnit.SECONDS);
+conf.setTimeDuration(
+RatisHelper.HDDS_DATANODE_RATIS_CLIENT_PREFIX_KEY + "." +
+"rpc.request.timeout",
+3, TimeUnit.SECONDS);
+conf.setTimeDuration(
+RatisHelper.HDDS_DATANODE_RATIS_CLIENT_PREFIX_KEY + "." +
+"watch.request.timeout",
+10, TimeUnit.SECONDS);
+conf.addResource(CONTRACT_XML);
+
+cluster = MiniOzoneCluster.newBuilder(conf).setNumDatanodes(5).build();

Review comment:
   I think we need another set of classes to do this, initialized with `MiniOzoneHAClusterImpl` instead.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #865: HDDS-2969. Implement ofs://: Add contract test

2020-05-05 Thread GitBox


smengcl commented on a change in pull request #865:
URL: https://github.com/apache/hadoop-ozone/pull/865#discussion_r420141875



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/rooted/ITestRootedOzoneContractRootDir.java
##
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone.contract.rooted;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+/**
+ * Ozone contract test for ROOT directory operations.
+ */
+public class ITestRootedOzoneContractRootDir extends
+AbstractContractRootDirectoryTest {
+
+  @BeforeClass
+  public static void createCluster() throws IOException {
+RootedOzoneContract.createCluster();
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+RootedOzoneContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+return new RootedOzoneContract(conf);
+  }
+
+  @Override
+  public void testRmEmptyRootDirNonRecursive() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testRmRootRecursive() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testListEmptyRootDirectory() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testMkDirDepth1() {

Review comment:
   It failed during clean up (assertDeleted):
   ```java
  @Test
  public void testMkDirDepth1() throws Throwable {
    FileSystem fs = getFileSystem();
    Path dir = new Path("/testmkdirdepth1");
    assertPathDoesNotExist("directory already exists", dir);
    fs.mkdirs(dir);
    assertIsDirectory(dir);
    assertPathExists("directory already exists", dir);
    assertDeleted(dir, true);
  }
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


sodonnel commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420141056



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -393,8 +396,11 @@ private boolean isContainerHealthy(final ContainerInfo container,
    */
   private boolean isContainerUnderReplicated(final ContainerInfo container,
       final Set<ContainerReplica> replicas) {
+    boolean misReplicated = !getPlacementStatus(

Review comment:
   My thinking on this is that the policy states the replicas "must be on AT LEAST two racks". I don't believe we should consider it over-replicated if it turns out to be on 3 somehow. That logic is consistent with HDFS, and probably provides better redundancy and locality (more racks is likely better in almost all cases).
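   
   As a minimal sketch of those semantics (names are hypothetical; `countDistinctRacks` is an assumed helper, not code from this PR):
   
   ```java
   // "AT LEAST two racks": under-spread only when below the required count,
   // so landing on 3 racks is never treated as over-replicated.
   int requiredRacks = 2;
   int actualRacks = countDistinctRacks(replicaNodes); // hypothetical helper
   boolean misReplicated = actualRacks < requiredRacks;
   ```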





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #865: HDDS-2969. Implement ofs://: Add contract test

2020-05-05 Thread GitBox


smengcl commented on a change in pull request #865:
URL: https://github.com/apache/hadoop-ozone/pull/865#discussion_r420140845



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/rooted/ITestRootedOzoneContractRootDir.java
##
@@ -0,0 +1,76 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone.contract.rooted;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractRootDirectoryTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+
+/**
+ * Ozone contract test for ROOT directory operations.
+ */
+public class ITestRootedOzoneContractRootDir extends
+AbstractContractRootDirectoryTest {
+
+  @BeforeClass
+  public static void createCluster() throws IOException {
+RootedOzoneContract.createCluster();
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+RootedOzoneContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+return new RootedOzoneContract(conf);
+  }
+
+  @Override
+  public void testRmEmptyRootDirNonRecursive() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testRmRootRecursive() {
+// OFS doesn't support deleting volumes via rm
+  }
+
+  @Override
+  public void testListEmptyRootDirectory() {

Review comment:
   It fails because the test tries to delete dirs(volumes) at root:
   ```java
  @Test
  public void testListEmptyRootDirectory() throws IOException {
    skipIfUnsupported(TEST_ROOT_TESTS_ENABLED);
    FileSystem fs = getFileSystem();
    Path root = new Path("/");
    FileStatus[] statuses = fs.listStatus(root);
    for (FileStatus status : statuses) {
      ContractTestUtils.assertDeleted(fs, status.getPath(), true);
    }
   ```
   then:
   ```java
  public static void assertDeleted(FileSystem fs,
      Path file,
      boolean recursive,
      boolean allowRootOperations) throws IOException {
    rejectRootOperation(file, allowRootOperations);
    assertPathExists(fs, "about to be deleted file", file);
    boolean deleted = fs.delete(file, recursive);
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


sodonnel commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420139242



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/SCMCommonPlacementPolicy.java
##
@@ -198,4 +211,69 @@ public boolean hasEnoughSpace(DatanodeDetails datanodeDetails,
    */
   public abstract DatanodeDetails chooseNode(
       List<DatanodeDetails> healthyNodes);
+
+  /**
+   * Default implementation for basic placement policies that are not
+   * topology aware. If the policy has no network topology this method
+   * should return null.
+   * @return The networkTopology for the policy or null if none is configured.
+   */
+  protected NetworkTopology getNetworkTopology() {
+    return null;
+  }
+
+  /**
+   * Default implementation to return the number of racks containers should
+   * span to meet the placement policy. For simple policies that are not rack
+   * aware we return 1 from this default implementation.
+   * @return The number of racks containers should span to meet the policy
+   */
+  protected int getRequiredRackCount() {
+    return 1;
+  }
+
+  /**
+   * This default implementation handles rack aware policies and non rack
+   * aware policies. If a future placement policy needs to check more than
+   * racks to validate the policy (eg node groups, HDFS like upgrade domain)
+   * this method should be overridden in the sub class.
+   * This method requires that subclasses which implement rack aware policies
+   * override the default methods getRequiredRackCount and getNetworkTopology.
+   * @param dns List of datanodes holding a replica of the container
+   * @param replicas The expected number of replicas
+   * @return ContainerPlacementStatus indicating if the placement policy is
+   *         met or not. Note this only considers the rack count and not the
+   *         number of replicas.
+   */
+  @Override
+  public ContainerPlacementStatus validateContainerPlacement(
+      List<DatanodeDetails> dns, int replicas) {
+    NetworkTopology topology = getNetworkTopology();
+    int requiredRacks = getRequiredRackCount();
+    if (topology == null || replicas == 1 || requiredRacks == 1) {
+      if (dns.size() > 0) {
+        // placement is always satisfied if there is at least one DN.
+        return validPlacement;
+      } else {
+        return invalidPlacement;
+      }
+    }

Review comment:
   I kept it here to ensure the logic is not duplicated. Ideally, we would move all the rack aware logic into a common parent and have PipelinePlacementPolicy and ContainerPlacementRackAware inherit from that. However, there was some push back on that earlier, and it's probably a non-trivial task.
   
   Rather than duplicating the logic, I think it is better where it is, and then we have HDDS-3079 (which probably needs renaming) to extract the rack aware stuff into one common parent.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #865: HDDS-2969. Implement ofs://: Add contract test

2020-05-05 Thread GitBox


smengcl commented on a change in pull request #865:
URL: https://github.com/apache/hadoop-ozone/pull/865#discussion_r420137590



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/contract/rooted/ITestRootedOzoneContractGetFileStatus.java
##
@@ -0,0 +1,66 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone.contract.rooted;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.contract.AbstractContractGetFileStatusTest;
+import org.apache.hadoop.fs.contract.AbstractFSContract;
+
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Ozone contract tests covering getFileStatus.
+ */
+public class ITestRootedOzoneContractGetFileStatus
+extends AbstractContractGetFileStatusTest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ITestRootedOzoneContractGetFileStatus.class);
+
+  @BeforeClass
+  public static void createCluster() throws IOException {
+RootedOzoneContract.createCluster();
+  }
+
+  @AfterClass
+  public static void teardownCluster() throws IOException {
+RootedOzoneContract.destroyCluster();
+  }
+
+  @Override
+  protected AbstractFSContract createContract(Configuration conf) {
+return new RootedOzoneContract(conf);
+  }
+
+  @Override
+  public void teardown() throws Exception {

Review comment:
   Copied from o3fs contract test. I believe it is just for the log.
   
   Will remove this.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


sodonnel commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420136338



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/SCMCommonPlacementPolicy.java
##
@@ -198,4 +211,69 @@ public boolean hasEnoughSpace(DatanodeDetails datanodeDetails,
    */
   public abstract DatanodeDetails chooseNode(
       List<DatanodeDetails> healthyNodes);
+
+  /**
+   * Default implementation for basic placement policies that are not
+   * topology aware. If the policy has no network topology this method
+   * should return null.
+   * @return The networkTopology for the policy or null if none is configured.
+   */
+  protected NetworkTopology getNetworkTopology() {

Review comment:
   The constructor for PipelinePlacementPolicy only receives a nodeManager:
   
   ```java
   public PipelinePlacementPolicy(final NodeManager nodeManager,
       final PipelineStateManager stateManager, final ConfigurationSource conf) {
   ```
   
   But the container one receives a networkTopology as part of its constructor:
   
   ```java
   public SCMContainerPlacementRackAware(final NodeManager nodeManager,
       final ConfigurationSource conf, final NetworkTopology networkTopology,
       final boolean fallback, final SCMContainerPlacementMetrics metrics) {
   ```
   
   The pipeline policy always asks the nodeManager for the topology, but the container one uses the one passed in. Ideally they should be the same, but I have a feeling some tests may make use of this "feature".
   
   The other old placement policies (SCMContainerPlacementRandom, SCMContainerPlacementCapacity) have the same constructors, but they don't even use the topology at all.
   
   So, I think we can get rid of this new method and then we should create another Jira to do the small refactor on all those older constructors to remove network topology from the parameter list and ensure they pull the topology from the node manager - what do you think?
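   
   A rough sketch of that direction (hedged: `getClusterNetworkTopologyMap()` is assumed to be the NodeManager accessor, and the field assignments are illustrative):
   
   ```java
   // Proposed shape: derive the topology from the node manager instead of
   // taking it as a constructor parameter.
   public SCMContainerPlacementRackAware(final NodeManager nodeManager,
       final ConfigurationSource conf, final boolean fallback,
       final SCMContainerPlacementMetrics metrics) {
     super(nodeManager, conf);
     this.networkTopology = nodeManager.getClusterNetworkTopologyMap();
     this.fallback = fallback;
     this.metrics = metrics;
   }
   ```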





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] fapifta commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


fapifta commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420070985



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/SCMCommonPlacementPolicy.java
##
@@ -198,4 +211,69 @@ public boolean hasEnoughSpace(DatanodeDetails datanodeDetails,
    */
   public abstract DatanodeDetails chooseNode(
       List<DatanodeDetails> healthyNodes);
+
+  /**
+   * Default implementation for basic placement policies that are not
+   * topology aware. If the policy has no network topology this method
+   * should return null.
+   * @return The networkTopology for the policy or null if none is configured.
+   */
+  protected NetworkTopology getNetworkTopology() {
+    return null;
+  }
+
+  /**
+   * Default implementation to return the number of racks containers should
+   * span to meet the placement policy. For simple policies that are not rack
+   * aware we return 1 from this default implementation.
+   * @return The number of racks containers should span to meet the policy
+   */
+  protected int getRequiredRackCount() {
+    return 1;
+  }
+
+  /**
+   * This default implementation handles rack aware policies and non rack
+   * aware policies. If a future placement policy needs to check more than
+   * racks to validate the policy (eg node groups, HDFS like upgrade domain)
+   * this method should be overridden in the sub class.
+   * This method requires that subclasses which implement rack aware policies
+   * override the default methods getRequiredRackCount and getNetworkTopology.
+   * @param dns List of datanodes holding a replica of the container
+   * @param replicas The expected number of replicas
+   * @return ContainerPlacementStatus indicating if the placement policy is
+   *         met or not. Note this only considers the rack count and not the
+   *         number of replicas.
+   */
+  @Override
+  public ContainerPlacementStatus validateContainerPlacement(
+      List<DatanodeDetails> dns, int replicas) {
+    NetworkTopology topology = getNetworkTopology();
+    int requiredRacks = getRequiredRackCount();
+    if (topology == null || replicas == 1 || requiredRacks == 1) {
+      if (dns.size() > 0) {
+        // placement is always satisfied if there is at least one DN.
+        return validPlacement;
+      } else {
+        return invalidPlacement;
+      }
+    }

Review comment:
   This part of the method is clearly dealing with non-topology-aware stuff, and returns, while the rest is dealing with topology-aware stuff. Why don't we use polymorphism here to separate the two? I understand that this requires copying the topology-aware stuff to two places, but that is another problem we have to deal with at some point.
   So I believe this if should be the default implementation in this class, and the rest should live inside the rack aware and pipeline placement policies.
   
   This would also render getRequiredRackCount unnecessary, as here it would be possible to just remove the expression, which always yields true for non-topology-aware policies, while the topology aware ones can have and use the constant in their implementation.
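   
   A minimal sketch of that split, reusing names from the diff above (illustrative only, not the final code):
   
   ```java
   // In SCMCommonPlacementPolicy: default for non-topology-aware policies.
   @Override
   public ContainerPlacementStatus validateContainerPlacement(
       List<DatanodeDetails> dns, int replicas) {
     // Placement is satisfied by any single healthy replica.
     return dns.size() > 0 ? validPlacement : invalidPlacement;
   }
   // The rack aware and pipeline policies would then override this method
   // with the topology logic and their own required-rack constant.
   ```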

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -393,8 +396,11 @@ private boolean isContainerHealthy(final ContainerInfo container,
    */
   private boolean isContainerUnderReplicated(final ContainerInfo container,
       final Set<ContainerReplica> replicas) {
+    boolean misReplicated = !getPlacementStatus(

Review comment:
   I understand that this is here to check if we have less than 2 racks with replicas, and if so, consider the container to be under replicated.
   I am unsure though whether we want to ensure that a container is replicated to exactly 2 racks; it does not seem to be the case in the rack aware placement policy. But if so, then mis-replication should also be checked somewhere else, to handle the case when a container is replicated to 3 racks, for example after removing a rack from the config and placing its nodes in other racks. (Probably this can go into isContainerOverReplicated(), if we want to check for this case.)

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -512,25 +523,60 @@ private void handleUnderReplicatedContainer(final ContainerInfo container,
   if (source.size() > 0) {
     final int replicationFactor = container
         .getReplicationFactor().getNumber();
-    final int delta = replicationFactor - getReplicaCount(id, replicas);
+    // Want to check if the container is mis-replicated after considering
+    // inflight add and delete.
+    // Create a new list from source (healthy replicas minus pending delete)
+    List<DatanodeDetails> targetReplicas = new ArrayList<>(source);
+    // Then add any pending additions
+   

[GitHub] [hadoop-ozone] fapifta commented on pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


fapifta commented on pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#issuecomment-624026257


   I am still not convinced either, but the argument about inflight actions is a valid point I am unsure about, so let's just drop this.
   
   As I see it, it will be hard to reach common ground as we have fairly different points of view. I can accept yours, though I still don't believe in it :) Let's leave it as it is for now, and we will see how it goes in the future.
   
   On the other hand, if we go this way, I still have a few concerns about the implementation details; I am adding them inline.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] fapifta commented on a change in pull request #881: HDDS-3081. Replication manager should detect and correct containers which don't meet the replication policy

2020-05-05 Thread GitBox


fapifta commented on a change in pull request #881:
URL: https://github.com/apache/hadoop-ozone/pull/881#discussion_r420070905



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/SCMCommonPlacementPolicy.java
##
@@ -198,4 +211,69 @@ public boolean hasEnoughSpace(DatanodeDetails datanodeDetails,
    */
   public abstract DatanodeDetails chooseNode(
       List<DatanodeDetails> healthyNodes);
+
+  /**
+   * Default implementation for basic placement policies that are not
+   * topology aware. If the policy has no network topology this method
+   * should return null.
+   * @return The networkTopology for the policy or null if none is configured.
+   */
+  protected NetworkTopology getNetworkTopology() {

Review comment:
   Considering that we have the NetworkTopology inside NodeManager, do we need this method?
   It is used only in the validateContainerPlacement method, where we could use the NodeManager as well.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2424) Add the recover-trash command server side handling.

2020-05-05 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDDS-2424.
---
Fix Version/s: 0.6.0
   Resolution: Fixed

> Add the recover-trash command server side handling.
> ---
>
> Key: HDDS-2424
> URL: https://issues.apache.org/jira/browse/HDDS-2424
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: YiSheng Lien
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add the standard server side code for command handling.
> The components to be updated include
>  *OzoneManager*
>  -> *KeyManager* / *KeyManagerImpl*
>  -> *OMMetadataManager* / *OmMetadataManagerImpl*
>  The PR on this issue also added *OMTrashRecoverRequest* for handling requests
> with OM HA.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bshashikant commented on pull request #399: HDDS-2424. Add the recover-trash command server side handling.

2020-05-05 Thread GitBox


bshashikant commented on pull request #399:
URL: https://github.com/apache/hadoop-ozone/pull/399#issuecomment-624019502


   Thanks @cxorm for working on this. I have committed this.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sadanand48 commented on a change in pull request #886: HDDS-3473. Ozone chunkinfo CLI should display block file path info

2020-05-05 Thread GitBox


sadanand48 commented on a change in pull request #886:
URL: https://github.com/apache/hadoop-ozone/pull/886#discussion_r420015257



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/ChunkFileUtility.java
##
@@ -0,0 +1,39 @@
+package org.apache.hadoop.ozone.debug;
+
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.container.common.impl.ChunkLayOutVersion;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+
+import java.io.File;
+
+
+public final class ChunkFileUtility {
+
+static String getChunkLocationPath(String containerLocation) {
+    return containerLocation + File.separator + OzoneConsts.STORAGE_DIR_CHUNKS;
+}
+
+public static String getChunkFilePath(ContainerProtos.ChunkInfo chunkInfo,
+        OmKeyLocationInfo keyLocation,
+        ContainerProtos.ContainerDataProto data,
+        ChunkLayOutVersion layOutVersion)
+        throws StorageContainerException {
+    switch (layOutVersion) {
+    case FILE_PER_CHUNK:
+        return getChunkLocationPath(data.getContainerPath())
+            + File.separator
+            + chunkInfo.getChunkName();
+    case FILE_PER_BLOCK:
+        return getChunkLocationPath(data.getContainerPath())
+            + File.separator
+            + keyLocation.getLocalID() + ".block";
+    default:
+        throw new StorageContainerException("chunk strategy does not exist",
+            ContainerProtos.Result.UNABLE_TO_FIND_CHUNK);
+    }

Review comment:
   Thank you @adoroszlai for the review. Addressed review comments.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel commented on pull request #891: HDDS-3504. Topology Test is Intermitent. Reduce container size to 256MB and blocksize to 64MB

2020-05-05 Thread GitBox


sodonnel commented on pull request #891:
URL: https://github.com/apache/hadoop-ozone/pull/891#issuecomment-623950283


   This has been run 12 times now. We had several failures on IT tests (unrelated to this change) and one failure on Acceptance, but it was not due to the topology change:
   
   ```
   Output:  
/tmp/smoketest/ozonesecure/result/robot-ozonesecure-ozonesecure-recon-scm.xml
   ls: /var/log/hadoop: No such file or directory
   ls: cannot access /var/log/hadoop: No such file or directory
   ```
   
   I think this is as stable as it can get before we commit and see how it goes 
over time.
   
   @vivekratnavel  @adoroszlai Are you happy for me to commit this one at this 
stage?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cxorm commented on pull request #164: HDDS-426. Add field modificationTime for Volume and Bucket

2020-05-05 Thread GitBox


cxorm commented on pull request #164:
URL: https://github.com/apache/hadoop-ozone/pull/164#issuecomment-623922958


   Rebased onto the latest master branch (#848) to resolve the conflict.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] cxorm commented on pull request #399: HDDS-2424. Add the recover-trash command server side handling.

2020-05-05 Thread GitBox


cxorm commented on pull request #399:
URL: https://github.com/apache/hadoop-ozone/pull/399#issuecomment-623910954


   Rebased onto the latest master branch (#848) and triggered GitHub Actions.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDDS-2665) Implement new Ozone Filesystem scheme ofs://

2020-05-05 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099614#comment-17099614
 ] 

Yiqun Lin edited comment on HDDS-2665 at 5/5/20, 7:49 AM:
--

Hi [~smeng], I went through the implementation details of the new ofs scheme. It's very similar to the o3fs we currently implement, but will have better performance than the single-bucket approach in o3fs. So I wonder: will we replace o3fs with ofs once the ofs scheme is fully implemented? Or will both schemes be kept, letting users choose which one they prefer to use?


was (Author: linyiqun):
Hi [~smeng], I go though the implementation details of new ofs scheme. It's 
very similar to o3fs we currently implement but will have a better performance 
than single bucket way in o3fs. So I wonder if we will replace o3fs to use ofs 
once ofs schema is fully implemented? Or both keeping these two schemas and 
letting users to choose which one they prefer to use?

> Implement new Ozone Filesystem scheme ofs://
> 
>
> Key: HDDS-2665
> URL: https://issues.apache.org/jira/browse/HDDS-2665
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: Design ofs v1.pdf
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Implement a new scheme for Ozone Filesystem where all volumes (and buckets) 
> can be access from a single root.
> Alias: Rooted Ozone Filesystem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2665) Implement new Ozone Filesystem scheme ofs://

2020-05-05 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17099614#comment-17099614
 ] 

Yiqun Lin commented on HDDS-2665:
-

Hi [~smeng], I went through the implementation details of the new ofs scheme. It's very similar to the o3fs we currently implement, but will have better performance than the single-bucket approach in o3fs. So I wonder: will we replace o3fs with ofs once the ofs scheme is fully implemented? Or will both schemes be kept, letting users choose which one they prefer to use?
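
(For context, my understanding of the addressing difference: o3fs is rooted at a single bucket, e.g. `o3fs://bucket.volume/key`, while ofs is rooted at the whole cluster, e.g. `ofs://om-host/volume/bucket/key`; the host and names here are illustrative.)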

> Implement new Ozone Filesystem scheme ofs://
> 
>
> Key: HDDS-2665
> URL: https://issues.apache.org/jira/browse/HDDS-2665
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: Design ofs v1.pdf
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Implement a new scheme for Ozone Filesystem where all volumes (and buckets) 
> can be access from a single root.
> Alias: Rooted Ozone Filesystem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #886: HDDS-3473. Ozone chunkinfo CLI should display block file path info

2020-05-05 Thread GitBox


adoroszlai commented on a change in pull request #886:
URL: https://github.com/apache/hadoop-ozone/pull/886#discussion_r419915270



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/debug/ChunkFileUtility.java
##
@@ -0,0 +1,39 @@
+package org.apache.hadoop.ozone.debug;
+
+import org.apache.hadoop.hdds.protocol.datanode.proto.ContainerProtos;
+import org.apache.hadoop.hdds.scm.container.common.helpers.StorageContainerException;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.container.common.impl.ChunkLayOutVersion;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+
+import java.io.File;
+
+
+public final class ChunkFileUtility {
+
+static String getChunkLocationPath(String containerLocation) {
+    return containerLocation + File.separator + OzoneConsts.STORAGE_DIR_CHUNKS;
+}
+
+public static String getChunkFilePath(ContainerProtos.ChunkInfo chunkInfo,
+        OmKeyLocationInfo keyLocation,
+        ContainerProtos.ContainerDataProto data,
+        ChunkLayOutVersion layOutVersion)
+        throws StorageContainerException {
+    switch (layOutVersion) {
+    case FILE_PER_CHUNK:
+        return getChunkLocationPath(data.getContainerPath())
+            + File.separator
+            + chunkInfo.getChunkName();
+    case FILE_PER_BLOCK:
+        return getChunkLocationPath(data.getContainerPath())
+            + File.separator
+            + keyLocation.getLocalID() + ".block";
+    default:
+        throw new StorageContainerException("chunk strategy does not exist",
+            ContainerProtos.Result.UNABLE_TO_FIND_CHUNK);
+    }

Review comment:
   I'd prefer taking the chunk file path directly from `ChunkLayOutVersion.getChunkFile()` instead of the `switch`.  This way, if a new enum constant is introduced, this code will work without any changes.
   
   This needs a minor refactoring in `ChunkLayOutVersion`, which currently accepts a `ContainerData` object, so that it can work directly with the chunk dir:
   
   ```diff
   --- hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ChunkLayOutVersion.java
   +++ hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ChunkLayOutVersion.java
   @@ -40,17 +40,15 @@
   
      FILE_PER_CHUNK(1, "One file per chunk") {
        @Override
   -    public File getChunkFile(ContainerData containerData, BlockID blockID,
   -        ChunkInfo info) throws StorageContainerException {
   -      File chunksLoc = verifyChunkDirExists(containerData);
   -      return chunksLoc.toPath().resolve(info.getChunkName()).toFile();
   +    public File getChunkFile(File chunkDir, BlockID blockID,
   +        ChunkInfo info) {
   +      return chunkDir.toPath().resolve(info.getChunkName()).toFile();
        }
      },
      FILE_PER_BLOCK(2, "One file per block") {
        @Override
   -    public File getChunkFile(ContainerData containerData, BlockID blockID,
   -        ChunkInfo info) throws StorageContainerException {
   -      File chunkDir = verifyChunkDirExists(containerData);
   +    public File getChunkFile(File chunkDir, BlockID blockID,
   +        ChunkInfo info) {
          return new File(chunkDir, blockID.getLocalID() + ".block");
        }
      };
   @@ -118,8 +116,14 @@ public String getDescription() {
        return description;
      }
   
   -  public abstract File getChunkFile(ContainerData containerData,
   -      BlockID blockID, ChunkInfo info) throws StorageContainerException;
   +  public abstract File getChunkFile(File chunksDir,
   +      BlockID blockID, ChunkInfo info);
   +
   +  public File getChunkFile(ContainerData containerData,
   +      BlockID blockID, ChunkInfo info) throws StorageContainerException {
   +    File chunksLoc = verifyChunkDirExists(containerData);
   +    return getChunkFile(chunksLoc, blockID, info);
   +  }
   
      @Override
      public String toString() {
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org