[jira] [Updated] (HDDS-2931) Recon integration test should use ephemeral port for HTTP Server

2020-01-29 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2931:
---
Labels:   (was: pull-request-available)

> Recon integration test should use ephemeral port for HTTP Server
> 
>
> Key: HDDS-2931
> URL: https://issues.apache.org/jira/browse/HDDS-2931
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, it is hard-coded to 9888. If we add more tests for Recon, this 
> will lead to port collisions during parallel runs. 
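
A minimal sketch of the intended fix, assuming the Recon HTTP address is
driven by the "ozone.recon.http-address" key (binding to port 0 makes the OS
pick a free ephemeral port):

{code}
// Sketch only: bind Recon's HTTP server to an ephemeral port in tests.
// The exact constant in ReconServerConfigKeys may differ.
OzoneConfiguration conf = new OzoneConfiguration();
conf.set("ozone.recon.http-address", "0.0.0.0:0"); // port 0 = OS-assigned
{code}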






[jira] [Resolved] (HDDS-2931) Recon integration test should use ephemeral port for HTTP Server

2020-01-29 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2931?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-2931.

Resolution: Fixed

> Recon integration test should use ephemeral port for HTTP Server
> 
>
> Key: HDDS-2931
> URL: https://issues.apache.org/jira/browse/HDDS-2931
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Siddharth Wagle
>Priority: Major
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, it is hard-coded to 9888. If we add more tests for Recon, this 
> will lead to port collisions during parallel runs. 






[GitHub] [hadoop-ozone] adoroszlai merged pull request #500: HDDS-2931. Recon integration test should use ephemeral port for HTTP server.

2020-01-29 Thread GitBox
adoroszlai merged pull request #500: HDDS-2931. Recon integration test should 
use ephemeral port for HTTP server.
URL: https://github.com/apache/hadoop-ozone/pull/500
 
 
   





[GitHub] [hadoop-ozone] adoroszlai commented on issue #500: HDDS-2931. Recon integration test should use ephemeral port for HTTP server.

2020-01-29 Thread GitBox
adoroszlai commented on issue #500: HDDS-2931. Recon integration test should 
use ephemeral port for HTTP server.
URL: https://github.com/apache/hadoop-ozone/pull/500#issuecomment-580123937
 
 
   Thanks @swagle for the fix and @avijayanhwx for the review.





[jira] [Comment Edited] (HDDS-2939) Ozone FS namespace

2020-01-29 Thread Supratim Deka (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026479#comment-17026479
 ] 

Supratim Deka edited comment on HDDS-2939 at 1/30/20 7:32 AM:
--

Thanks for taking a look [~linyiqun]. 
1. ls operation performance: this is outside the scope of the work planned 
under HDDS-2939. The current focus is to improve the performance of the base 
namespace operations, create file and create directory. We can take up 
improving directory listing ('ls') after this work achieves the desired 
results.
2. Storing the child id in the directory table parent entry: this has not 
been considered. We might consider such an optimisation only if we are unable 
to meet specific latency requirements in name lookup or directory listing. 
3. Prefix locking overhead: the detailed design of the prefix lock mechanism 
has not yet been done. Currently we always operate under a bucket lock in the 
OM, even for Object/Key access and not just for FS operations, so this is not 
a priority now. The prefix locking work will be taken up later. 
At that time, I assume the lock design needs to be such that the overhead is 
proportional to the number of directories with "active" access, not to the 
total number of directories that exist. 
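
A hypothetical sketch of that last point, with a lock table whose entries
exist only while a prefix is under active access (names are illustrative,
not from the Ozone codebase):

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

class PrefixLockTable {
  // Entries are materialized on demand, so memory overhead tracks the number
  // of actively accessed prefixes, not the total namespace size.
  private final ConcurrentHashMap<String, ReentrantLock> active =
      new ConcurrentHashMap<>();

  ReentrantLock acquire(String prefix) {
    ReentrantLock lock = active.computeIfAbsent(prefix, p -> new ReentrantLock());
    lock.lock();
    return lock;
  }

  void release(String prefix, ReentrantLock lock) {
    lock.unlock();
    // Best-effort cleanup; a real design needs reference counting so a lock
    // is not removed while another thread is about to take it.
    active.remove(prefix, lock);
  }
}
{code}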


was (Author: sdeka):
Thanks for taking a look [~linyiqun]. 
1. ls operation performance: this is outside the scope of the work planned 
under HDDS-2939. The current focus is to improve the performance of the base 
namespace operations, create file and create directory. We can take up 
improving directory listing ('ls') after this work achieves the desired 
results.
2. Storing the child id in the directory table parent entry: this has not 
been considered. We might consider such an optimisation only if we are unable 
to meet specific latency requirements in name lookup or directory listing. 
3. Prefix locking overhead: the detailed design of the prefix lock mechanism 
has not yet been done. Currently we already operate under a bucket lock in 
the OM, so this is not a priority now. The prefix locking work will be taken 
up later. 
At that time, I assume the lock design should be such that the overhead is 
proportional to the number of directories with "active" access, not to the 
number of directories in the namespace. 

> Ozone FS namespace
> --
>
> Key: HDDS-2939
> URL: https://issues.apache.org/jira/browse/HDDS-2939
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Attachments: Ozone FS Namespace Proposal v1.0.docx
>
>
> Create the structures and metadata layout required to support efficient FS 
> namespace operations in Ozone - operations involving folders/directories 
> required to support the Hadoop compatible Filesystem interface.
> The details are described in the attached document. The work is divided up 
> into sub-tasks as per the task list in the document.






[jira] [Updated] (HDDS-2936) Hive queries fail at readFully

2020-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2936:
-
Labels: pull-request-available  (was: )

> Hive queries fail at readFully
> --
>
> Key: HDDS-2936
> URL: https://issues.apache.org/jira/browse/HDDS-2936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Istvan Fajth
>Assignee: Shashikant Banerjee
>Priority: Critical
>  Labels: pull-request-available
>
> When running Hive queries on a 1TB dataset for TPC-DS tests, we started to 
> see an exception coming out from FSInputStream.readFully.
> This does not happen with a smaller 100GB dataset, so files spanning 
> multiple blocks are possibly the cause of the trouble. The issue was not 
> seen with a build from early December, so a change made since then is most 
> likely to blame. The build I am running is from the hash 
> 929f2f85d0379aab5aabeded8a4d3a505606 of the master branch, but with 
> HDDS-2188 reverted from the code.
> The exception I see:
> {code}
> Error while running task ( failure ) : 
> attempt_1579615091731_0060_9_05_29_3:java.lang.RuntimeException: 
> java.lang.RuntimeException: java.io.IOException: java.io.EOFException: End of 
> file reached before reading fully.
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: java.io.IOException: 
> java.io.EOFException: End of file reached before reading fully.
> at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
> at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.(TezGroupedSplitsInputFormat.java:145)
> at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111)
> at 
> org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:157)
> at 
> org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)
> at 
> org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:703)
> at 
> org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:662)
> at 
> org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:150)
> at 
> org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:114)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:532)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:178)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:266)
> ... 16 more
> Caused by: java.io.IOException: java.io.EOFException: End of file reached 
> before reading fully.
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationExc

[GitHub] [hadoop-ozone] bshashikant opened a new pull request #507: HDDS-2936. Hive queries fail at readFully

2020-01-29 Thread GitBox
bshashikant opened a new pull request #507: HDDS-2936. Hive queries fail at 
readFully
URL: https://github.com/apache/hadoop-ozone/pull/507
 
 
   ## What changes were proposed in this pull request?
   
   It fixes a bug in the retry path of the Ozone client where the length of 
data written was getting updated incorrectly during writes in KeyOutputStream.
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2936
   
   
   ## How was this patch tested?
   The existing test "TestCloseContainerHandlingByClient" was failing because 
of the issue; all of its cases execute successfully with the fix. The patch 
was also tested in a real deployment where a Hive workload was run, and all 
Hive queries succeed now.
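   
   As a rough illustration of the failure mode those tests exercise (the
   helper names below are hypothetical, not the actual test code):
   
   ```java
   // Sketch: write a key that spans a chunk boundary, inject a failure so the
   // client retries, then verify readFully sees every byte.
   int chunkSize = 4 * 1024 * 1024;             // assumed chunk size
   byte[] data = new byte[2 * chunkSize + 100];
   new java.util.Random().nextBytes(data);
   try (OzoneOutputStream out = bucket.createKey("key1", data.length)) {
     out.write(data, 0, chunkSize + 50);
     failLeaderDatanode();                      // hypothetical fault injection
     out.write(data, chunkSize + 50, data.length - (chunkSize + 50));
   }
   byte[] readBack = new byte[data.length];
   try (InputStream in = bucket.readKey("key1")) {
     IOUtils.readFully(in, readBack);           // threw EOFException before the fix
   }
   Assert.assertArrayEquals(data, readBack);
   ```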
   





[jira] [Commented] (HDDS-2939) Ozone FS namespace

2020-01-29 Thread Supratim Deka (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026479#comment-17026479
 ] 

Supratim Deka commented on HDDS-2939:
-

Thanks for taking a look [~linyiqun]. 
1. ls operation performance: this is outside the scope of the work planned 
under HDDS-2939. The current focus is to improve the performance of the base 
namespace operations, create file and create directory. We can take up 
improving directory listing ('ls') after this work achieves the desired 
results.
2. Storing the child id in the directory table parent entry: this has not 
been considered. We might consider such an optimisation only if we are unable 
to meet specific latency requirements in name lookup or directory listing. 
3. Prefix locking overhead: the detailed design of the prefix lock mechanism 
has not yet been done. Currently we already operate under a bucket lock in 
the OM, so this is not a priority now. The prefix locking work will be taken 
up later. 
At that time, I assume the lock design should be such that the overhead is 
proportional to the number of directories with "active" access, not to the 
number of directories in the namespace. 

> Ozone FS namespace
> --
>
> Key: HDDS-2939
> URL: https://issues.apache.org/jira/browse/HDDS-2939
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Attachments: Ozone FS Namespace Proposal v1.0.docx
>
>
> Create the structures and metadata layout required to support efficient FS 
> namespace operations in Ozone - operations involving folders/directories 
> required to support the Hadoop compatible Filesystem interface.
> The details are described in the attached document. The work is divided up 
> into sub-tasks as per the task list in the document.






[GitHub] [hadoop-ozone] supratimdeka commented on a change in pull request #498: HDDS-2940. mkdir : create key table entries for intermediate directories in the path

2020-01-29 Thread GitBox
supratimdeka commented on a change in pull request #498: HDDS-2940. mkdir : 
create key table entries for intermediate directories in the path
URL: https://github.com/apache/hadoop-ozone/pull/498#discussion_r372788756
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
 ##
 @@ -84,6 +92,70 @@ public static OMDirectoryResult verifyFilesInPath(
     return OMDirectoryResult.NONE;
   }
 
+  /**
+   * generate the object id from the transaction id.
+   * @param id
+   * @return object id
+   */
+  public static long getObjIdFromTxId(long id) {
+    return id << TRANSACTION_ID_SHIFT;
+  }
+
+  /**
+   * Return list of missing parent directories in the given path.
+   * @param omMetadataManager
+   * @param volumeName
+   * @param bucketName
+   * @param keyPath
+   * @return List of keys representing non-existent parent dirs
+   * @throws IOException
+   */
+  public static List<String> getMissingParents(
+      @Nonnull OMMetadataManager omMetadataManager,
+      @Nonnull String volumeName,
+      @Nonnull String bucketName,
+      @Nonnull Path keyPath) throws IOException {
+
+    List<String> missing = new ArrayList<>();
+
+    while (keyPath != null) {
+      String pathName = keyPath.toString();
+
+      String dbKeyName = omMetadataManager.getOzoneKey(volumeName,
+          bucketName, pathName);
+      String dbDirKeyName = omMetadataManager.getOzoneDirKey(volumeName,
+          bucketName, pathName);
+
+      if (omMetadataManager.getKeyTable().isExist(dbKeyName)) {
+        // Found a file in the given path.
+        String errorMsg = "File " + dbKeyName + " exists with same name as " +
+            " directory in path : " + pathName;
+        throw new IOException(errorMsg);
+      } else if (omMetadataManager.getKeyTable().isExist(dbDirKeyName)) {
+        // Found a directory in the given path. Higher parents must exist.
+        break;
+      } else {
+        missing.add(pathName);
+      }
+      keyPath = keyPath.getParent();
+    }
+
+    return missing;
+  }
+
+  private static OmKeyInfo getKeyInfo(
 
 Review comment:
   Thanks for pointing it out, will remove. It is a relic from an earlier 
version of my patch.
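   
   For context, a hedged sketch of how a mkdir request might consume the
   getMissingParents helper from this diff (the loop body and
   createDirectoryEntry are illustrative, not the actual handler):
   
   ```java
   // Walk the missing intermediate directories and create an entry for each.
   List<String> missing = OMFileRequest.getMissingParents(
       omMetadataManager, volumeName, bucketName, Paths.get("a/b/c"));
   for (String dir : missing) {
     createDirectoryEntry(omMetadataManager, volumeName, bucketName, dir); // hypothetical
   }
   ```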





[GitHub] [hadoop-ozone] supratimdeka commented on a change in pull request #498: HDDS-2940. mkdir : create key table entries for intermediate directories in the path

2020-01-29 Thread GitBox
supratimdeka commented on a change in pull request #498: HDDS-2940. mkdir : 
create key table entries for intermediate directories in the path
URL: https://github.com/apache/hadoop-ozone/pull/498#discussion_r372788152
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
 ##
 @@ -84,6 +92,70 @@ public static OMDirectoryResult verifyFilesInPath(
     return OMDirectoryResult.NONE;
   }
 
+  /**
+   * generate the object id from the transaction id.
+   * @param id
+   * @return object id
+   */
+  public static long getObjIdFromTxId(long id) {
+    return id << TRANSACTION_ID_SHIFT;
+  }
+
+  /**
+   * Return list of missing parent directories in the given path.
+   * @param omMetadataManager
+   * @param volumeName
+   * @param bucketName
+   * @param keyPath
+   * @return List of keys representing non-existent parent dirs
+   * @throws IOException
+   */
+  public static List<String> getMissingParents(
+      @Nonnull OMMetadataManager omMetadataManager,
+      @Nonnull String volumeName,
+      @Nonnull String bucketName,
+      @Nonnull Path keyPath) throws IOException {
+
+    List<String> missing = new ArrayList<>();
+
+    while (keyPath != null) {
+      String pathName = keyPath.toString();
+
+      String dbKeyName = omMetadataManager.getOzoneKey(volumeName,
+          bucketName, pathName);
+      String dbDirKeyName = omMetadataManager.getOzoneDirKey(volumeName,
+          bucketName, pathName);
+
+      if (omMetadataManager.getKeyTable().isExist(dbKeyName)) {
+        // Found a file in the given path.
+        String errorMsg = "File " + dbKeyName + " exists with same name as " +
+            " directory in path : " + pathName;
+        throw new IOException(errorMsg);
+      } else if (omMetadataManager.getKeyTable().isExist(dbDirKeyName)) {
+        // Found a directory in the given path. Higher parents must exist.
 
 Review comment:
   Because clusters are not already deployed, if we introduce this change now 
we do not have that problem. Of course, we are referring to customer 
deployments here and not to any internal test setups, right?





[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #503: HDDS-2850. Handle Create container use case in Recon.

2020-01-29 Thread GitBox
avijayanhwx commented on a change in pull request #503: HDDS-2850. Handle 
Create container use case in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/503#discussion_r372785181
 
 

 ##
 File path: 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ReconIncrementalContainerReportHandler.java
 ##
 @@ -0,0 +1,108 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.scm;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerManager;
+import org.apache.hadoop.hdds.scm.container.ContainerNotFoundException;
+import org.apache.hadoop.hdds.scm.container.IncrementalContainerReportHandler;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.IncrementalContainerReportFromDatanode;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.recon.spi.StorageContainerServiceProvider;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Recon ICR handler.
+ */
+public class ReconIncrementalContainerReportHandler
+    extends IncrementalContainerReportHandler {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      ReconIncrementalContainerReportHandler.class);
+
+  private StorageContainerServiceProvider scmClient;
+
+  public ReconIncrementalContainerReportHandler(NodeManager nodeManager,
+      ContainerManager containerManager,
+      StorageContainerServiceProvider scmClient) {
+    super(nodeManager, containerManager);
+    this.scmClient = scmClient;
+  }
+
+  @Override
+  public void onMessage(final IncrementalContainerReportFromDatanode report,
+      final EventPublisher publisher) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Processing incremental container report from data node {}",
+          report.getDatanodeDetails().getUuid());
 
 Review comment:
   Thanks for the suggestion, fixed!





[jira] [Commented] (HDDS-2936) Hive queries fail at readFully

2020-01-29 Thread Shashikant Banerjee (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17026471#comment-17026471
 ] 

Shashikant Banerjee commented on HDDS-2936:
---

The issue happened because of a wrong write length/offset calculation in the 
retry path of the Ozone client, as a result of which the part of the data 
exceeding the chunk boundary was not written to the datanodes when the client 
ran into an exception while writing a data chunk.

The integration tests related to failure testing of the Ozone client happen 
to catch this bug, but as they are disabled by default, it was not discovered 
earlier.
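
A simplified sketch of the accounting that went wrong (variable names are
illustrative; the actual fix lives in the KeyOutputStream retry path):

{code}
// Before the fix, the written-length counter was advanced past the chunk
// boundary even though the tail of the buffer was never acknowledged by the
// datanodes, so the retry skipped those bytes and the key came up short.
long ackedLength = 10L * 1024 * 1024;     // bytes confirmed by the datanodes
long attemptedLength = 12L * 1024 * 1024; // bytes handed to the failed stream
// Buggy: long writtenLength = attemptedLength;   (loses 2 MB on retry)
long writtenLength = ackedLength;         // correct: count only acked bytes
long bytesToRetry = attemptedLength - writtenLength;
{code}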

> Hive queries fail at readFully
> --
>
> Key: HDDS-2936
> URL: https://issues.apache.org/jira/browse/HDDS-2936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Istvan Fajth
>Assignee: Shashikant Banerjee
>Priority: Critical
>
> When running Hive queries on a 1TB dataset for TPC-DS tests, we started to 
> see an exception coming out from FSInputStream.readFully.
> This does not happen with a smaller 100GB dataset, so files spanning 
> multiple blocks are possibly the cause of the trouble. The issue was not 
> seen with a build from early December, so a change made since then is most 
> likely to blame. The build I am running is from the hash 
> 929f2f85d0379aab5aabeded8a4d3a505606 of the master branch, but with 
> HDDS-2188 reverted from the code.
> The exception I see:
> {code}
> Error while running task ( failure ) : 
> attempt_1579615091731_0060_9_05_29_3:java.lang.RuntimeException: 
> java.lang.RuntimeException: java.io.IOException: java.io.EOFException: End of 
> file reached before reading fully.
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: java.io.IOException: 
> java.io.EOFException: End of file reached before reading fully.
> at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
> at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.(TezGroupedSplitsInputFormat.java:145)
> at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111)
> at 
> org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:157)
> at 
> org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)
> at 
> org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:703)
> at 
> org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:662)
> at 
> org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:150)
> at 
> org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:114)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:532)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:178)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.init

[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #503: HDDS-2850. Handle Create container use case in Recon.

2020-01-29 Thread GitBox
avijayanhwx commented on a change in pull request #503: HDDS-2850. Handle 
Create container use case in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/503#discussion_r372784812
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
 ##
 @@ -194,6 +194,18 @@ public ContainerInfo getContainer(final ContainerID 
containerID)
 return containerStateManager.getContainer(containerID);
   }
 
+  @Override
+  public boolean exists(ContainerID containerID) {
+lock.lock();
 
 Review comment:
   Not sure. SCM ContainerManager has had a reentrant lock from the start. I 
will investigate why it is not a RW lock, and create a new JIRA for this.
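   
   For reference, a hedged sketch of what the read path could look like if
   that follow-up Jira switches to a ReadWriteLock (sketch only, not the
   patch):
   
   ```java
   private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
   
   @Override
   public boolean exists(ContainerID containerID) {
     // exists() is a pure read, so it only needs the shared read lock.
     rwLock.readLock().lock();
     try {
       containerStateManager.getContainer(containerID);
       return true;
     } catch (ContainerNotFoundException e) {
       return false;
     } finally {
       rwLock.readLock().unlock();
     }
   }
   ```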





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #415: HDDS-2840. Implement ofs://: mkdir

2020-01-29 Thread GitBox
xiaoyuyao commented on a change in pull request #415: HDDS-2840. Implement 
ofs://: mkdir
URL: https://github.com/apache/hadoop-ozone/pull/415#discussion_r372752982
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
 ##
 @@ -0,0 +1,694 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneKey;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.OzoneFileStatus;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.security.token.TokenRenewer;
+
+import org.apache.commons.lang3.StringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+.BUCKET_ALREADY_EXISTS;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+.VOLUME_ALREADY_EXISTS;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+.VOLUME_NOT_FOUND;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+.BUCKET_NOT_FOUND;
+
+/**
+ * Basic Implementation of the OzoneFileSystem calls.
+ * 
+ * This is the minimal version which doesn't include any statistics.
+ * 
+ * For full featured version use OzoneClientAdapterImpl.
+ */
+public class BasicRootedOzoneClientAdapterImpl
+implements RootedOzoneClientAdapter {
+
+  static final Logger LOG =
+  LoggerFactory.getLogger(BasicRootedOzoneClientAdapterImpl.class);
+
+  private OzoneClient ozoneClient;
+  private ClientProtocol proxy;
+  private ObjectStore objectStore;
+  private ReplicationType replicationType;
+  private ReplicationFactor replicationFactor;
+  private boolean securityEnabled;
+  private int configuredDnPort;
+
+  /**
+   * Create new OzoneClientAdapter implementation.
+   *
+   * @throws IOException In case of a problem.
+   */
+  public BasicRootedOzoneClientAdapterImpl() throws IOException {
+this(createConf());
+  }
+
+  private static OzoneConfiguration createConf() {
+ClassLoader contextClassLoader =
+Thread.currentThread().getContextClassLoader();
+Thread.currentThread().setContextClassLoader(null);
+try {
+  return new OzoneConfiguration();
+} finally {
+  Thread.currentThread().setContextClassLoader(contextClassLoader);
+}
+  }
+
+  public BasicRootedOzoneClientAdapterImpl(OzoneConfiguration conf)
+  throws IOException {
+this(null, -1, conf);
+  }

[GitHub] [hadoop-ozone] swagle commented on a change in pull request #503: HDDS-2850. Handle Create container use case in Recon.

2020-01-29 Thread GitBox
swagle commented on a change in pull request #503: HDDS-2850. Handle Create 
container use case in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/503#discussion_r372725292
 
 

 ##
 File path: 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/scm/ReconIncrementalContainerReportHandler.java
 ##
 @@ -0,0 +1,108 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.scm;
+
+import java.io.IOException;
+
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.ContainerReplicaProto;
+import org.apache.hadoop.hdds.scm.container.ContainerID;
+import org.apache.hadoop.hdds.scm.container.ContainerManager;
+import org.apache.hadoop.hdds.scm.container.ContainerNotFoundException;
+import org.apache.hadoop.hdds.scm.container.IncrementalContainerReportHandler;
+import org.apache.hadoop.hdds.scm.container.common.helpers.ContainerWithPipeline;
+import org.apache.hadoop.hdds.scm.node.NodeManager;
+import org.apache.hadoop.hdds.scm.node.states.NodeNotFoundException;
+import org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher.IncrementalContainerReportFromDatanode;
+import org.apache.hadoop.hdds.server.events.EventPublisher;
+import org.apache.hadoop.ozone.recon.spi.StorageContainerServiceProvider;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Recon ICR handler.
+ */
+public class ReconIncrementalContainerReportHandler
+    extends IncrementalContainerReportHandler {
+
+  private static final Logger LOG = LoggerFactory.getLogger(
+      ReconIncrementalContainerReportHandler.class);
+
+  private StorageContainerServiceProvider scmClient;
+
+  public ReconIncrementalContainerReportHandler(NodeManager nodeManager,
+      ContainerManager containerManager,
+      StorageContainerServiceProvider scmClient) {
+    super(nodeManager, containerManager);
+    this.scmClient = scmClient;
+  }
+
+  @Override
+  public void onMessage(final IncrementalContainerReportFromDatanode report,
+      final EventPublisher publisher) {
+    if (LOG.isDebugEnabled()) {
+      LOG.debug("Processing incremental container report from data node {}",
+          report.getDatanodeDetails().getUuid());
 
 Review comment:
   Minor: instead of getUuid(), let the toString method of DatanodeDetails 
determine what is printed.





[GitHub] [hadoop-ozone] swagle commented on a change in pull request #503: HDDS-2850. Handle Create container use case in Recon.

2020-01-29 Thread GitBox
swagle commented on a change in pull request #503: HDDS-2850. Handle Create 
container use case in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/503#discussion_r372723418
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/SCMContainerManager.java
 ##
 @@ -194,6 +194,18 @@ public ContainerInfo getContainer(final ContainerID 
containerID)
 return containerStateManager.getContainer(containerID);
   }
 
+  @Override
+  public boolean exists(ContainerID containerID) {
+lock.lock();
 
 Review comment:
   This should likely be filed as a separate Jira, but why is this not a 
ReadWriteLock?





[GitHub] [hadoop-ozone] xiaoyuyao opened a new pull request #506: HDDS-2952. Ensure ozone manager service user is part of ozone.adminis…

2020-01-29 Thread GitBox
xiaoyuyao opened a new pull request #506: HDDS-2952. Ensure ozone manager 
service user is part of ozone.adminis…
URL: https://github.com/apache/hadoop-ozone/pull/506
 
 
   …trators.
   
   ## What changes were proposed in this pull request?
   
   Currently we only add the SCM service principal to scmAdmins at runtime. 
The Ozone Manager service principal is not honored as an OM admin. As a 
result, if the user does not specify any user in ozone.administrators, they 
will not be able to create a volume. 
   
   This PR adds the Ozone Manager SPN as an OM admin.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2952
   
   ## How was this patch tested?
   
   Manual test with the ozonesecure docker-compose.
   An additional acceptance test was added.
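   
   A rough sketch of the idea (the exact method and key names in the patch
   may differ):
   
   ```java
   // At OM startup, treat the OM's own service principal (short name) as an
   // administrator even when ozone.administrators is left unset.
   Collection<String> omAdmins = new HashSet<>(
       conf.getTrimmedStringCollection("ozone.administrators"));
   omAdmins.add(UserGroupInformation.getCurrentUser().getShortUserName());
   ```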





[jira] [Updated] (HDDS-2952) Ensure ozone manager service user is part of ozone.administrators

2020-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2952?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2952:
-
Labels: pull-request-available  (was: )

> Ensure ozone manager service user is part of ozone.administrators
> -
>
> Key: HDDS-2952
> URL: https://issues.apache.org/jira/browse/HDDS-2952
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Affects Versions: 0.5.0
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>
> Currently we only add the SCM service principal to scmAdmins at runtime. The 
> Ozone Manager service principal is not honored as an OM admin. As a result, 
> if the user does not specify any user in ozone.administrators, they will not 
> be able to create a volume. 






[jira] [Created] (HDDS-2959) Handle replay of OM Key ACL requests

2020-01-29 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDDS-2959:


 Summary: Handle replay of OM Key ACL requests
 Key: HDDS-2959
 URL: https://issues.apache.org/jira/browse/HDDS-2959
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


To ensure that Key acl operations are idempotent, compare the transactionID 
with the objectID and updateID to make sure that the transaction is not a 
replay. If the transactionID <= updateID, then it implies that the transaction 
is a replay and hence it should be skipped.

OMKeyAclRequests (Add, Remove and Set ACL requests) are made idempotent in this 
Jira.
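
The guard itself is small; a hedged sketch (names are illustrative, and the
replay-response helper is hypothetical):

{code}
// Skip the transaction if it is not newer than the object's last update.
if (transactionLogIndex <= omKeyInfo.getUpdateID()) {
  // transactionID <= updateID: already applied once, so this is a replay.
  return createReplayResponse(omResponse);   // hypothetical helper
}
// Otherwise apply the ACL change and record the new updateID.
omKeyInfo.setUpdateID(transactionLogIndex);
{code}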






[jira] [Created] (HDDS-2958) Handle replay of OM Volume ACL requests

2020-01-29 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDDS-2958:


 Summary: Handle replay of OM Volume ACL requests
 Key: HDDS-2958
 URL: https://issues.apache.org/jira/browse/HDDS-2958
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


To ensure that volume acl operations are idempotent, compare the transactionID 
with the objectID and updateID to make sure that the transaction is not a 
replay. If the transactionID <= updateID, then it implies that the transaction 
is a replay and hence it should be skipped.

OMVolumeAclRequests (Add, Remove and Set ACL requests) are made idempotent in 
this Jira.






[jira] [Updated] (HDDS-2956) Handle Replay of AllocateBlock request

2020-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2956:
-
Labels: pull-request-available  (was: )

> Handle Replay of AllocateBlock request
> --
>
> Key: HDDS-2956
> URL: https://issues.apache.org/jira/browse/HDDS-2956
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>
> To ensure that allocate block operations are idempotent, compare the 
> transactionID with the objectID and updateID to make sure that the 
> transaction is not a replay. If the transactionID <= updateID, then it 
> implies that the transaction is a replay and hence it should be skipped.
> OMAllocateBlockRequest is made idempotent in this Jira.






[GitHub] [hadoop-ozone] hanishakoneru opened a new pull request #505: HDDS-2956. Handle Replay of AllocateBlock request

2020-01-29 Thread GitBox
hanishakoneru opened a new pull request #505: HDDS-2956. Handle Replay of 
AllocateBlock request
URL: https://github.com/apache/hadoop-ozone/pull/505
 
 
   ## What changes were proposed in this pull request?
   
   To ensure that allocate block operations are idempotent, compare the 
transactionID with the objectID and updateID to make sure that the transaction 
is not a replay. If the transactionID <= updateID, then it implies that the 
transaction is a replay and hence it should be skipped.
   
   OMAllocateBlockRequest is made idempotent in this Jira.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2956
   
   ## How was this patch tested?
   
   Unit test added.
   





[jira] [Updated] (HDDS-2957) listBuckets result excludes the exact match of bucketPrefix

2020-01-29 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2957:
-
Summary: listBuckets result excludes the exact match of bucketPrefix  (was: 
listBuckets result excludes the exact match of the bucket prefix)

> listBuckets result excludes the exact match of bucketPrefix
> ---
>
> Key: HDDS-2957
> URL: https://issues.apache.org/jira/browse/HDDS-2957
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDDS-2957.test.patch
>
>
> {{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
> {{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
> short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
> *exact* match, while {{listVolumes}} doesn't.
> Please see my attached test case for this. I know {{TestOzoneFileSystem}} 
> isn't the best place for this unit test; it is just to prove the point.
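
A condensed sketch of the asymmetry (the attached HDDS-2957.test.patch is the
authoritative reproduction; assume a volume "vol1" containing exactly one
bucket named "bucket1"):

{code}
Iterator<? extends OzoneBucket> buckets = volume.listBuckets("bucket1");
// Bug: the iterator is empty, because the exact match "bucket1" is excluded.

Iterator<? extends OzoneVolume> volumes = store.listVolumes("vol1");
// listVolumes includes the exact match "vol1" itself.
{code}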






[jira] [Updated] (HDDS-2957) listBuckets result excludes the exact match of the bucket prefix

2020-01-29 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2957:
-
Summary: listBuckets result excludes the exact match of the bucket prefix  
(was: listBuckets result excludes exact match of given bucket prefix)

> listBuckets result excludes the exact match of the bucket prefix
> 
>
> Key: HDDS-2957
> URL: https://issues.apache.org/jira/browse/HDDS-2957
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDDS-2957.test.patch
>
>
> {{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
> {{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
> short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
> *exact* match, while {{listVolumes}} doesn't.
> Please see my attached test case for this. I know {{TestOzoneFileSystem}} 
> isn't the best place for this unit test; it is just to prove the point.






[jira] [Updated] (HDDS-2957) listBuckets result excludes exact match of given bucket prefix

2020-01-29 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2957:
-
Summary: listBuckets result excludes exact match of given bucket prefix  
(was: listBuckets excludes exact match of the bucket prefix from result)

> listBuckets result excludes exact match of given bucket prefix
> --
>
> Key: HDDS-2957
> URL: https://issues.apache.org/jira/browse/HDDS-2957
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDDS-2957.test.patch
>
>
> {{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
> {{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
> short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
> *exact* match, while {{listVolumes}} doesn't.
> Please see my attached test case for this. I know {{TestOzoneFileSystem}} 
> isn't the best place for this unit test; it is just to prove the point.






[jira] [Updated] (HDDS-2957) listBuckets bucketPrefix excludes the parameter itself

2020-01-29 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2957:
-
Attachment: HDDS-2957.test.patch

> listBuckets bucketPrefix excludes the parameter itself
> --
>
> Key: HDDS-2957
> URL: https://issues.apache.org/jira/browse/HDDS-2957
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDDS-2957.test.patch
>
>
> {{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
> {{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
> short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
> *exact* match, while {{listVolumes}} doesn't.
> Please see my attached test case for this. I know {{TestOzoneFileSystem}} 
> isn't the best place for this unit test; it is just to prove the point.






[jira] [Updated] (HDDS-2957) listBuckets excludes exact match of the bucket prefix from result

2020-01-29 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2957:
-
Summary: listBuckets excludes exact match of the bucket prefix from result  
(was: listBuckets bucketPrefix excludes the parameter itself)

> listBuckets excludes exact match of the bucket prefix from result
> -
>
> Key: HDDS-2957
> URL: https://issues.apache.org/jira/browse/HDDS-2957
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: HDDS-2957.test.patch
>
>
> {{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
> {{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
> short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
> *exact* match, while {{listVolumes}} doesn't.
> Please see my attached test case for this. I know {{TestOzoneFileSystem}} 
> isn't the best place for this unit test; it is just to prove the point.






[jira] [Created] (HDDS-2957) listBuckets bucketPrefix excludes the parameter itself

2020-01-29 Thread Siyao Meng (Jira)
Siyao Meng created HDDS-2957:


 Summary: listBuckets bucketPrefix excludes the parameter itself
 Key: HDDS-2957
 URL: https://issues.apache.org/jira/browse/HDDS-2957
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Siyao Meng
Assignee: Siyao Meng


{{OzoneVolume.listBuckets(String bucketPrefix)}} behaves differently than 
{{ObjectStore.listVolumes(String volumePrefix)}} in terms of given prefix. In 
short, {{listBuckets}} ignores the {{bucketPrefix}} in the result if it is an 
*exact* match, while {{listVolumes}} doesn't.

Please see my attached test case for this. I know {{TestOzoneFileSystem}} 
isn't the best place for this unit test; it is just to prove the point.






[jira] [Updated] (HDDS-2781) Add ObjectID and updateID to BucketInfo to avoid replaying transactions

2020-01-29 Thread Hanisha Koneru (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-2781:
-
Parent: HDDS-505
Issue Type: Sub-task  (was: Bug)

> Add ObjectID and updateID to BucketInfo to avoid replaying transactions
> ---
>
> Key: HDDS-2781
> URL: https://issues.apache.org/jira/browse/HDDS-2781
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This Jira has 2 objectives:
> 1. Add objectID and updateID to BucketInfo proto persisted to DB.
> 2. To ensure that bucket operations are idempotent, compare the transactionID 
> with the objectID and updateID to make sure that the transaction is not a 
> replay. If the transactionID <= updateID, then it implies that the 
> transaction is a replay and hence it should be skipped.






[jira] [Created] (HDDS-2956) Handle Replay of AllocateBlock request

2020-01-29 Thread Hanisha Koneru (Jira)
Hanisha Koneru created HDDS-2956:


 Summary: Handle Replay of AllocateBlock request
 Key: HDDS-2956
 URL: https://issues.apache.org/jira/browse/HDDS-2956
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru


To ensure that allocate block operations are idempotent, compare the 
transactionID with the objectID and updateID to make sure that the transaction 
is not a replay. If the transactionID <= updateID, then it implies that the 
transaction is a replay and hence it should be skipped.

OMAllocateBlockRequest is made idempotent in this Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2850) Handle Create container use case in Recon.

2020-01-29 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-2850:

Status: Patch Available  (was: In Progress)

> Handle Create container use case in Recon.
> --
>
> Key: HDDS-2850
> URL: https://issues.apache.org/jira/browse/HDDS-2850
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-2850-001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CREATE container needs to be handled differently in Recon, since container 
> creation is initiated by SCM and Recon does not learn about it immediately. 
> Recon should not throw ContainerNotFoundException when it suddenly sees a new 
> container.
> The idea is to let Recon ask SCM about a new container whenever it sees one.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-2894) Handle replay of KeyDelete and KeyRename Requests

2020-01-29 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2894.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Handle replay of KeyDelete and KeyRename Requests
> -
>
> Key: HDDS-2894
> URL: https://issues.apache.org/jira/browse/HDDS-2894
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> To ensure that key deletion and rename operations are idempotent, compare the 
> transactionID with the objectID and updateID to make sure that the 
> transaction is not a replay. If the transactionID <= updateID, then it 
> implies that the transaction is a replay and hence it should be skipped.
> OMKeyDeleteRequest and OMKeyRenameRequest are made idempotent in this Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #452: HDDS-2894. Handle replay of KeyDelete and KeyRename Requests

2020-01-29 Thread GitBox
bharatviswa504 commented on issue #452: HDDS-2894. Handle replay of KeyDelete 
and KeyRename Requests
URL: https://github.com/apache/hadoop-ozone/pull/452#issuecomment-580020526
 
 
   Thank You @hanishakoneru for the contribution.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #452: HDDS-2894. Handle replay of KeyDelete and KeyRename Requests

2020-01-29 Thread GitBox
bharatviswa504 merged pull request #452: HDDS-2894. Handle replay of KeyDelete 
and KeyRename Requests
URL: https://github.com/apache/hadoop-ozone/pull/452
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2894) Handle replay of KeyDelete and KeyRename Requests

2020-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2894:
-
Labels: pull-request-available  (was: )

> Handle replay of KeyDelete and KeyRename Requests
> -
>
> Key: HDDS-2894
> URL: https://issues.apache.org/jira/browse/HDDS-2894
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>
> To ensure that key deletion and rename operations are idempotent, compare the 
> transactionID with the objectID and updateID to make sure that the 
> transaction is not a replay. If the transactionID <= updateID, then it 
> implies that the transaction is a replay and hence it should be skipped.
> OMKeyDeleteRequest and OMKeyRenameRequest are made idempotent in this Jira.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on issue #500: HDDS-2931. Recon integration test should use ephemeral port for HTTP server.

2020-01-29 Thread GitBox
avijayanhwx commented on issue #500: HDDS-2931. Recon integration test should 
use ephemeral port for HTTP server.
URL: https://github.com/apache/hadoop-ozone/pull/500#issuecomment-580012717
 
 
   Thanks for the fix @swagle. LGTM +1


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on issue #502: HDDS-2955. Unnecessary log messages in DBStoreBuilder

2020-01-29 Thread GitBox
avijayanhwx commented on issue #502: HDDS-2955. Unnecessary log messages in 
DBStoreBuilder
URL: https://github.com/apache/hadoop-ozone/pull/502#issuecomment-579997161
 
 
   LGTM +1.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] hanishakoneru opened a new pull request #504: HDDS-2953. Handle replay of S3 requests

2020-01-29 Thread GitBox
hanishakoneru opened a new pull request #504: HDDS-2953. Handle replay of S3 
requests
URL: https://github.com/apache/hadoop-ozone/pull/504
 
 
   ## What changes were proposed in this pull request?
   
   To ensure that S3 operations are idempotent, compare the transactionID with 
the objectID and updateID to make sure that the transaction is not a replay. If 
the transactionID <= updateID, then it implies that the transaction is a replay 
and hence it should be skipped.
   
   In this Jira, the following requests are made idempotent:
   
   S3InitiateMultipartUploadRequest
   S3MultipartUploadCommitPartRequest
   S3MultipartUploadCompleteRequest
   S3MultipartUploadAbortRequest
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2953
   
   ## How was this patch tested?
   
   Will add unit tests in next commit


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2953) Handle replay of S3 requests

2020-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2953:
-
Labels: pull-request-available  (was: )

> Handle replay of S3 requests
> 
>
> Key: HDDS-2953
> URL: https://issues.apache.org/jira/browse/HDDS-2953
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>
> To ensure that S3 operations are idempotent, compare the transactionID with 
> the objectID and updateID to make sure that the transaction is not a replay. 
> If the transactionID <= updateID, then it implies that the transaction is a 
> replay and hence it should be skipped.
> In this Jira, the following requests are made idempotent:
> * S3InitiateMultipartUploadRequest
> * S3MultipartUploadCommitPartRequest
> * S3MultipartUploadCompleteRequest
> * S3MultipartUploadAbortRequest



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on issue #466: HDDS-2869. Handle pipeline bootstrap from SCM and create pipeline use case in Recon.

2020-01-29 Thread GitBox
avijayanhwx commented on issue #466: HDDS-2869. Handle pipeline bootstrap from 
SCM and create pipeline use case in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/466#issuecomment-579919000
 
 
   > Thanks @adoroszlai for actually trying it out :-) 👍
   
   +1. Thanks @adoroszlai 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx opened a new pull request #503: HDDS-2850. Handle Create container use case in Recon.

2020-01-29 Thread GitBox
avijayanhwx opened a new pull request #503: HDDS-2850. Handle Create container 
use case in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/503
 
 
   ## What changes were proposed in this pull request?
   CREATE container needs to be handled differently in Recon, since container 
creation is initiated by SCM and Recon does not learn about it immediately. 
Recon should not throw ContainerNotFoundException when it suddenly sees a new 
container. The idea is to let Recon ask SCM about a new container whenever it 
sees one.
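   
   A rough sketch of the fallback (`ReconContainerCache` and `scmLookup` are 
illustrative names, not the actual Recon classes):
   
   ```java
   import java.util.Map;
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.function.LongFunction;
   
   // Hypothetical sketch: treat an unknown container ID as one created by SCM
   // after Recon's last sync, and fetch it from SCM instead of throwing
   // ContainerNotFoundException.
   class ReconContainerCache<C> {
     private final Map<Long, C> known = new ConcurrentHashMap<>();
     private final LongFunction<C> scmLookup; // asks SCM about an unknown ID
   
     ReconContainerCache(LongFunction<C> scmLookup) {
       this.scmLookup = scmLookup;
     }
   
     C get(long containerID) {
       // Fetch once from SCM and remember the container for later lookups.
       return known.computeIfAbsent(containerID, scmLookup::apply);
     }
   }
   ```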
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2850
   
   ## How was this patch tested?
   Manually tested.
   Unit tested.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2850) Handle Create container use case in Recon.

2020-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2850:
-
Labels: pull-request-available  (was: )

> Handle Create container use case in Recon.
> --
>
> Key: HDDS-2850
> URL: https://issues.apache.org/jira/browse/HDDS-2850
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-2850-001.patch
>
>
> CREATE container needs to be handled differently in Recon, since container 
> creation is initiated by SCM and Recon does not learn about it immediately. 
> Recon should not throw ContainerNotFoundException when it suddenly sees a new 
> container.
> The idea is to let Recon ask SCM about a new container whenever it sees one.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] swagle commented on issue #466: HDDS-2869. Handle pipeline bootstrap from SCM and create pipeline use case in Recon.

2020-01-29 Thread GitBox
swagle commented on issue #466: HDDS-2869. Handle pipeline bootstrap from SCM 
and create pipeline use case in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/466#issuecomment-579908070
 
 
   Thanks @adoroszlai for actually trying it out :-) 👍 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2926) Intermittent failure in TestRecon due to thread timing

2020-01-29 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2926:
---
Labels:   (was: pull-request-available)

> Intermittent failure in TestRecon due to thread timing
> --
>
> Key: HDDS-2926
> URL: https://issues.apache.org/jira/browse/HDDS-2926
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Attila Doroszlai
>Assignee: Siddharth Wagle
>Priority: Minor
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> {{TestRecon}} uses {{Thread#sleep}} to wait for completion of an async task 
> (the OM snapshot fetch in Recon).  This can result in failure in case of bad 
> timing:
> {code}
> 2020-01-22T07:28:21.5231608Z [ERROR] Tests run: 1, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 45.857 s <<< FAILURE! - in 
> org.apache.hadoop.ozone.recon.TestRecon
> 2020-01-22T07:28:21.5236225Z [ERROR] 
> testReconServer(org.apache.hadoop.ozone.recon.TestRecon)  Time elapsed: 
> 10.269 s  <<< FAILURE!
> 2020-01-22T07:28:21.5237314Z java.lang.AssertionError: expected:<1> but 
> was:<0>
> ...
> 2020-01-22T07:28:21.5241907Z  at 
> org.apache.hadoop.ozone.recon.TestRecon.testReconServer(TestRecon.java:205)
> {code}
> {{GenericTestUtils#waitFor}} or a similar polling utility should be preferred.
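> A hedged sketch of such a fix (the polled condition is illustrative; the real 
> test asserts on a table count):
> {code}
> // Poll every 100 ms, for at most 20 s, instead of a fixed Thread#sleep.
> GenericTestUtils.waitFor(
>     () -> getTableCountFromRecon() == 1,  // hypothetical helper
>     100, 20000);
> {code}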



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2955) Unnecessary log messages in DBStoreBuilder

2020-01-29 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2955:
---
Status: Patch Available  (was: In Progress)

> Unnecessary log messages in DBStoreBuilder
> --
>
> Key: HDDS-2955
> URL: https://issues.apache.org/jira/browse/HDDS-2955
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> DBStoreBuilder logs some table-related messages at INFO level.  This is fine 
> for DBs that are created once per run, e.g. OM or SCM, but Recon builds a new 
> DB for each OM snapshot:
> {code}
> recon_1 | 2020-01-29 15:20:32,466 [pool-7-thread-1] INFO 
> impl.OzoneManagerServiceProviderImpl: Got new checkpoint from OM : 
> /data/metadata/recon/om.snapshot.db_1580311232241
> recon_1 | 2020-01-29 15:20:32,475 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: userTable
> recon_1 | 2020-01-29 15:20:32,475 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:userTable
> recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: volumeTable
> recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:volumeTable
> recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: bucketTable
> recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:bucketTable
> recon_1 | 2020-01-29 15:20:32,477 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: keyTable
> recon_1 | 2020-01-29 15:20:32,477 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:keyTable
> recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: deletedTable
> recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:deletedTable
> recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: openKeyTable
> recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:openKeyTable
> recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: s3Table
> recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:s3Table
> recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: multipartInfoTable
> recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:multipartInfoTable
> recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: dTokenTable
> recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:dTokenTable
> recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: s3SecretTable
> recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:s3SecretTable
> recon_1 | 2020-01-29 15:20:32,481 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: prefixTable
> recon_1 | 2020-01-29 15:20:32,481 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:prefixTable
> recon_1 | 2020-01-29 15:20:32,482 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: default
> recon_1 | 2020-01-29 15:20:32,482 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:default
> recon_1 | 2020-01-29 15:20:32,482 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default options. DBProfile.DISK
> recon_1 | 2020-01-29 15:20:32,514 [pool-7-thread-1] INFO 
> recovery.ReconOmMetadataManagerImpl: Created OM DB snapshot at 
> /data/metadata/recon/om.snapshot.db_1580311232241.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org

[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #502: HDDS-2955. Unnecessary log messages in DBStoreBuilder

2020-01-29 Thread GitBox
adoroszlai opened a new pull request #502: HDDS-2955. Unnecessary log messages 
in DBStoreBuilder
URL: https://github.com/apache/hadoop-ozone/pull/502
 
 
   ## What changes were proposed in this pull request?
   
   1. Reduce log level of "using default/custom profile ..." messages to 
`debug`.
   2. Avoid logging both "default" and "custom" for the same table by 
refactoring the code.
   3. Add constant for `String` version of `DEFAULT_COLUMN_FAMILY`.
   
   https://issues.apache.org/jira/browse/HDDS-2955
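   
   A minimal sketch of change 1 with slf4j (illustrative only; the actual 
refactoring is in the diff):
   
   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   public final class TableProfileLogging {
     private static final Logger LOG =
         LoggerFactory.getLogger(TableProfileLogging.class);
   
     static void logTableProfile(String table, boolean hasCustomProfile) {
       // Demoted from info to debug: Recon rebuilds this DB for every OM
       // snapshot, so per-table messages would flood the log at info level.
       if (hasCustomProfile) {
         LOG.debug("using custom profile for table:{}", table);
       } else {
         LOG.debug("using default profile for table:{}", table);
       }
     }
   }
   ```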
   
   ## How was this patch tested?
   
   Verified log in compose cluster.
   
   with default info level:
   
   ```
   recon_1 | 2020-01-29 17:14:39,085 [pool-8-thread-1] INFO 
impl.OzoneManagerServiceProviderImpl: Got new checkpoint from OM : 
/data/metadata/recon/om.snapshot.db_1580318078998
   recon_1 | 2020-01-29 17:14:39,085 [pool-8-thread-1] INFO 
recovery.ReconOmMetadataManagerImpl: Cleaning up old OM snapshot db at 
/data/metadata/recon/om.snapshot.db_1580317479076.
   recon_1 | 2020-01-29 17:14:39,231 [pool-8-thread-1] INFO 
recovery.ReconOmMetadataManagerImpl: Created OM DB handle from snapshot at 
/data/metadata/recon/om.snapshot.db_1580318078998.
   ```
   
   and with debug level:
   
   ```
   recon_1 | 2020-01-29 17:00:12,434 [pool-8-thread-1] INFO 
impl.OzoneManagerServiceProviderImpl: Got new checkpoint from OM : 
/data/metadata/recon/om.snapshot.db_1580317212331
   recon_1 | 2020-01-29 17:00:12,434 [pool-8-thread-1] INFO 
recovery.ReconOmMetadataManagerImpl: Cleaning up old OM snapshot db at 
/data/metadata/recon/om.snapshot.db_1580316612245.
   recon_1 | 2020-01-29 17:00:12,438 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: default profile:DBProfile.DISK
   recon_1 | 2020-01-29 17:00:12,438 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: using default profile for table:userTable
   recon_1 | 2020-01-29 17:00:12,442 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: using default profile for table:volumeTable
   recon_1 | 2020-01-29 17:00:12,442 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: using default profile for table:bucketTable
   recon_1 | 2020-01-29 17:00:12,443 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: using default profile for table:keyTable
   recon_1 | 2020-01-29 17:00:12,443 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: using default profile for table:deletedTable
   recon_1 | 2020-01-29 17:00:12,443 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: using default profile for table:openKeyTable
   recon_1 | 2020-01-29 17:00:12,444 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: using default profile for table:s3Table
   recon_1 | 2020-01-29 17:00:12,444 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: using default profile for table:multipartInfoTable
   recon_1 | 2020-01-29 17:00:12,444 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: using default profile for table:dTokenTable
   recon_1 | 2020-01-29 17:00:12,445 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: using default profile for table:s3SecretTable
   recon_1 | 2020-01-29 17:00:12,445 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: using default profile for table:prefixTable
   recon_1 | 2020-01-29 17:00:12,447 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: using default profile for table:default
   recon_1 | 2020-01-29 17:00:12,451 [pool-8-thread-1] DEBUG 
db.DBStoreBuilder: Using default options: DBProfile.DISK
   recon_1 | 2020-01-29 17:00:12,561 [pool-8-thread-1] INFO 
recovery.ReconOmMetadataManagerImpl: Created OM DB handle from snapshot at 
/data/metadata/recon/om.snapshot.db_1580317212331.
   ```
   
   https://github.com/adoroszlai/hadoop-ozone/runs/415576820


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2955) Unnecessary log messages in DBStoreBuilder

2020-01-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2955:
-
Labels: pull-request-available  (was: )

> Unnecessary log messages in DBStoreBuilder
> --
>
> Key: HDDS-2955
> URL: https://issues.apache.org/jira/browse/HDDS-2955
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Minor
>  Labels: pull-request-available
>
> DBStoreBuilder logs some table-related messages at INFO level.  This is fine 
> for DBs that are created once per run, e.g. OM or SCM, but Recon builds a new 
> DB for each OM snapshot:
> {code}
> recon_1 | 2020-01-29 15:20:32,466 [pool-7-thread-1] INFO 
> impl.OzoneManagerServiceProviderImpl: Got new checkpoint from OM : 
> /data/metadata/recon/om.snapshot.db_1580311232241
> recon_1 | 2020-01-29 15:20:32,475 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: userTable
> recon_1 | 2020-01-29 15:20:32,475 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:userTable
> recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: volumeTable
> recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:volumeTable
> recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: bucketTable
> recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:bucketTable
> recon_1 | 2020-01-29 15:20:32,477 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: keyTable
> recon_1 | 2020-01-29 15:20:32,477 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:keyTable
> recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: deletedTable
> recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:deletedTable
> recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: openKeyTable
> recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:openKeyTable
> recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: s3Table
> recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:s3Table
> recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: multipartInfoTable
> recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:multipartInfoTable
> recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: dTokenTable
> recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:dTokenTable
> recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: s3SecretTable
> recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:s3SecretTable
> recon_1 | 2020-01-29 15:20:32,481 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: prefixTable
> recon_1 | 2020-01-29 15:20:32,481 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:prefixTable
> recon_1 | 2020-01-29 15:20:32,482 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: using custom profile for table: default
> recon_1 | 2020-01-29 15:20:32,482 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default column profile:DBProfile.DISK for 
> Table:default
> recon_1 | 2020-01-29 15:20:32,482 [pool-7-thread-1] INFO 
> db.DBStoreBuilder: Using default options. DBProfile.DISK
> recon_1 | 2020-01-29 15:20:32,514 [pool-7-thread-1] INFO 
> recovery.ReconOmMetadataManagerImpl: Created OM DB snapshot at 
> /data/metadata/recon/om.snapshot.db_1580311232241.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org

[jira] [Updated] (HDDS-2665) Implement new Ozone Filesystem scheme ofs://

2020-01-29 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2665:
-
Labels:   (was: HDDS-2665)

> Implement new Ozone Filesystem scheme ofs://
> 
>
> Key: HDDS-2665
> URL: https://issues.apache.org/jira/browse/HDDS-2665
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
> Attachments: Design ofs v1.pdf
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement a new scheme for Ozone Filesystem where all volumes (and buckets) 
> can be accessed from a single root.
> Alias: Rooted Ozone Filesystem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2665) Implement new Ozone Filesystem scheme ofs://

2020-01-29 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2665:
-
Labels: HDDS-2665  (was: )

> Implement new Ozone Filesystem scheme ofs://
> 
>
> Key: HDDS-2665
> URL: https://issues.apache.org/jira/browse/HDDS-2665
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: HDDS-2665
> Attachments: Design ofs v1.pdf
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Implement a new scheme for Ozone Filesystem where all volumes (and buckets) 
> can be accessed from a single root.
> Alias: Rooted Ozone Filesystem.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on issue #415: HDDS-2840. Implement ofs://: mkdir

2020-01-29 Thread GitBox
smengcl commented on issue #415: HDDS-2840. Implement ofs://: mkdir
URL: https://github.com/apache/hadoop-ozone/pull/415#issuecomment-579899200
 
 
   Should be almost good to go. We can do one more round of checking using diff 
https://github.com/smengcl/hadoop-ozone/compare/0bee28acadfb2c358c6f008173e7eca6ed7fa23f...smengcl:HDDS-2840


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #415: HDDS-2840. Implement ofs://: mkdir

2020-01-29 Thread GitBox
smengcl commented on a change in pull request #415: HDDS-2840. Implement 
ofs://: mkdir
URL: https://github.com/apache/hadoop-ozone/pull/415#discussion_r372559181
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
 ##
 @@ -0,0 +1,694 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneKey;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.OzoneFileStatus;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.security.token.TokenRenewer;
+
+import org.apache.commons.lang3.StringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .BUCKET_ALREADY_EXISTS;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .VOLUME_ALREADY_EXISTS;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .VOLUME_NOT_FOUND;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .BUCKET_NOT_FOUND;
+
+/**
+ * Basic Implementation of the OzoneFileSystem calls.
+ *
+ * This is the minimal version which doesn't include any statistics.
+ *
+ * For full featured version use OzoneClientAdapterImpl.
+ */
+public class BasicRootedOzoneClientAdapterImpl
+    implements RootedOzoneClientAdapter {
+
+  static final Logger LOG =
+      LoggerFactory.getLogger(BasicRootedOzoneClientAdapterImpl.class);
+
+  private OzoneClient ozoneClient;
+  private ClientProtocol proxy;
+  private ObjectStore objectStore;
+  private ReplicationType replicationType;
+  private ReplicationFactor replicationFactor;
+  private boolean securityEnabled;
+  private int configuredDnPort;
+
+  /**
+   * Create new OzoneClientAdapter implementation.
+   *
+   * @throws IOException In case of a problem.
+   */
+  public BasicRootedOzoneClientAdapterImpl() throws IOException {
+    this(createConf());
+  }
+
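+  // Note on createConf() below: it builds the OzoneConfiguration with the
+  // thread context classloader temporarily cleared, so Configuration falls
+  // back to its own defining classloader when resolving config resources
+  // (presumably to keep config loading independent of the caller's
+  // classloader); the original context classloader is restored afterwards.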
+  private static OzoneConfiguration createConf() {
+    ClassLoader contextClassLoader =
+        Thread.currentThread().getContextClassLoader();
+    Thread.currentThread().setContextClassLoader(null);
+    try {
+      return new OzoneConfiguration();
+    } finally {
+      Thread.currentThread().setContextClassLoader(contextClassLoader);
+    }
+  }
+
+  public BasicRootedOzoneClientAdapterImpl(OzoneConfiguration conf)
+      throws IOException {
+    this(null, -1, conf);
+  }
+

[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #415: HDDS-2840. Implement ofs://: mkdir

2020-01-29 Thread GitBox
smengcl commented on a change in pull request #415: HDDS-2840. Implement 
ofs://: mkdir
URL: https://github.com/apache/hadoop-ozone/pull/415#discussion_r372556302
 
 


[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #415: HDDS-2840. Implement ofs://: mkdir

2020-01-29 Thread GitBox
smengcl commented on a change in pull request #415: HDDS-2840. Implement 
ofs://: mkdir
URL: https://github.com/apache/hadoop-ozone/pull/415#discussion_r372556019
 
 


[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #415: HDDS-2840. Implement ofs://: mkdir

2020-01-29 Thread GitBox
smengcl commented on a change in pull request #415: HDDS-2840. Implement 
ofs://: mkdir
URL: https://github.com/apache/hadoop-ozone/pull/415#discussion_r372555375
 
 


[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #415: HDDS-2840. Implement ofs://: mkdir

2020-01-29 Thread GitBox
smengcl commented on a change in pull request #415: HDDS-2840. Implement 
ofs://: mkdir
URL: https://github.com/apache/hadoop-ozone/pull/415#discussion_r372552921
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
 ##
 @@ -0,0 +1,694 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneKey;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.OzoneFileStatus;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.security.token.TokenRenewer;
+
+import org.apache.commons.lang3.StringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .BUCKET_ALREADY_EXISTS;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .VOLUME_ALREADY_EXISTS;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .VOLUME_NOT_FOUND;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .BUCKET_NOT_FOUND;
+
+/**
+ * Basic Implementation of the OzoneFileSystem calls.
+ *
+ * This is the minimal version which doesn't include any statistics.
+ *
+ * For full featured version use OzoneClientAdapterImpl.
+ */
+public class BasicRootedOzoneClientAdapterImpl
+    implements RootedOzoneClientAdapter {
+
+  static final Logger LOG =
+      LoggerFactory.getLogger(BasicRootedOzoneClientAdapterImpl.class);
+
+  private OzoneClient ozoneClient;
+  private ClientProtocol proxy;
+  private ObjectStore objectStore;
+  private ReplicationType replicationType;
+  private ReplicationFactor replicationFactor;
+  private boolean securityEnabled;
+  private int configuredDnPort;
+
+  /**
+   * Create new OzoneClientAdapter implementation.
+   *
+   * @throws IOException In case of a problem.
+   */
+  public BasicRootedOzoneClientAdapterImpl() throws IOException {
+    this(createConf());
+  }
+
+  private static OzoneConfiguration createConf() {
+    ClassLoader contextClassLoader =
+        Thread.currentThread().getContextClassLoader();
+    Thread.currentThread().setContextClassLoader(null);
+    try {
+      return new OzoneConfiguration();
+    } finally {
+      Thread.currentThread().setContextClassLoader(contextClassLoader);
+    }
+  }
+
+  public BasicRootedOzoneClientAdapterImpl(OzoneConfiguration conf)
+      throws IOException {
+    this(null, -1, conf);
+  }
+
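
The createConf() method quoted above clears the thread's context classloader
before constructing OzoneConfiguration and restores it afterwards, so that
configuration resources are not loaded through an isolated (e.g. shaded
ozonefs) classloader. A minimal sketch of the same pattern in isolation,
assuming only JDK APIs; the class and method names here are illustrative and
not part of the patch:

{code}
import java.util.function.Supplier;

public final class ContextClassLoaderScope {
  private ContextClassLoaderScope() { }

  // Run an action with the context classloader cleared, always restoring
  // the previous one, mirroring the try/finally in createConf() above.
  public static <T> T withNullContextClassLoader(Supplier<T> action) {
    Thread current = Thread.currentThread();
    ClassLoader previous = current.getContextClassLoader();
    current.setContextClassLoader(null);
    try {
      return action.get();  // e.g. () -> new OzoneConfiguration()
    } finally {
      current.setContextClassLoader(previous);
    }
  }
}
{code}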

[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #415: HDDS-2840. Implement ofs://: mkdir

2020-01-29 Thread GitBox
smengcl commented on a change in pull request #415: HDDS-2840. Implement 
ofs://: mkdir
URL: https://github.com/apache/hadoop-ozone/pull/415#discussion_r372552176
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
 ##
 @@ -0,0 +1,694 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneKey;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.OzoneFileStatus;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.security.token.TokenRenewer;
+
+import org.apache.commons.lang3.StringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .BUCKET_ALREADY_EXISTS;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .VOLUME_ALREADY_EXISTS;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .VOLUME_NOT_FOUND;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .BUCKET_NOT_FOUND;
+
+/**
+ * Basic Implementation of the OzoneFileSystem calls.
+ *
+ * This is the minimal version which doesn't include any statistics.
+ *
+ * For full featured version use OzoneClientAdapterImpl.
+ */
+public class BasicRootedOzoneClientAdapterImpl
+    implements RootedOzoneClientAdapter {
+
+  static final Logger LOG =
+      LoggerFactory.getLogger(BasicRootedOzoneClientAdapterImpl.class);
+
+  private OzoneClient ozoneClient;
+  private ClientProtocol proxy;
+  private ObjectStore objectStore;
+  private ReplicationType replicationType;
+  private ReplicationFactor replicationFactor;
+  private boolean securityEnabled;
+  private int configuredDnPort;
+
+  /**
+   * Create new OzoneClientAdapter implementation.
+   *
+   * @throws IOException In case of a problem.
+   */
+  public BasicRootedOzoneClientAdapterImpl() throws IOException {
+    this(createConf());
+  }
+
+  private static OzoneConfiguration createConf() {
+    ClassLoader contextClassLoader =
+        Thread.currentThread().getContextClassLoader();
+    Thread.currentThread().setContextClassLoader(null);
+    try {
+      return new OzoneConfiguration();
+    } finally {
+      Thread.currentThread().setContextClassLoader(contextClassLoader);
+    }
+  }
+
+  public BasicRootedOzoneClientAdapterImpl(OzoneConfiguration conf)
+      throws IOException {
+    this(null, -1, conf);
+  }
+

[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #415: HDDS-2840. Implement ofs://: mkdir

2020-01-29 Thread GitBox
smengcl commented on a change in pull request #415: HDDS-2840. Implement 
ofs://: mkdir
URL: https://github.com/apache/hadoop-ozone/pull/415#discussion_r372549616
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
 ##
 @@ -0,0 +1,694 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneKey;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.OzoneFileStatus;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.security.token.TokenRenewer;
+
+import org.apache.commons.lang3.StringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .BUCKET_ALREADY_EXISTS;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .VOLUME_ALREADY_EXISTS;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .VOLUME_NOT_FOUND;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .BUCKET_NOT_FOUND;
+
+/**
+ * Basic Implementation of the OzoneFileSystem calls.
+ *
+ * This is the minimal version which doesn't include any statistics.
+ *
+ * For full featured version use OzoneClientAdapterImpl.
+ */
+public class BasicRootedOzoneClientAdapterImpl
+    implements RootedOzoneClientAdapter {
+
+  static final Logger LOG =
+      LoggerFactory.getLogger(BasicRootedOzoneClientAdapterImpl.class);
+
+  private OzoneClient ozoneClient;
+  private ClientProtocol proxy;
+  private ObjectStore objectStore;
+  private ReplicationType replicationType;
+  private ReplicationFactor replicationFactor;
+  private boolean securityEnabled;
+  private int configuredDnPort;
+
+  /**
+   * Create new OzoneClientAdapter implementation.
+   *
+   * @throws IOException In case of a problem.
+   */
+  public BasicRootedOzoneClientAdapterImpl() throws IOException {
+    this(createConf());
+  }
+
+  private static OzoneConfiguration createConf() {
+    ClassLoader contextClassLoader =
+        Thread.currentThread().getContextClassLoader();
+    Thread.currentThread().setContextClassLoader(null);
+    try {
+      return new OzoneConfiguration();
+    } finally {
+      Thread.currentThread().setContextClassLoader(contextClassLoader);
+    }
+  }
+
+  public BasicRootedOzoneClientAdapterImpl(OzoneConfiguration conf)
+      throws IOException {
+    this(null, -1, conf);
+  }
+

[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #415: HDDS-2840. Implement ofs://: mkdir

2020-01-29 Thread GitBox
smengcl commented on a change in pull request #415: HDDS-2840. Implement 
ofs://: mkdir
URL: https://github.com/apache/hadoop-ozone/pull/415#discussion_r372547250
 
 

 ##
 File path: 
hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
 ##
 @@ -0,0 +1,694 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.ozone;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.net.URI;
+import java.util.ArrayList;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.commons.collections.CollectionUtils;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.crypto.key.KeyProvider;
+import org.apache.hadoop.fs.BlockLocation;
+import org.apache.hadoop.fs.FileAlreadyExistsException;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdds.client.ReplicationFactor;
+import org.apache.hadoop.hdds.client.ReplicationType;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.DatanodeDetails;
+import org.apache.hadoop.hdds.security.x509.SecurityConfig;
+import org.apache.hadoop.io.Text;
+import org.apache.hadoop.ozone.OmUtils;
+import org.apache.hadoop.ozone.OzoneConfigKeys;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneClientFactory;
+import org.apache.hadoop.ozone.client.OzoneKey;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.io.OzoneOutputStream;
+import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyLocationInfoGroup;
+import org.apache.hadoop.ozone.om.helpers.OzoneFileStatus;
+import org.apache.hadoop.ozone.security.OzoneTokenIdentifier;
+import org.apache.hadoop.security.token.Token;
+import org.apache.hadoop.security.token.TokenRenewer;
+
+import org.apache.commons.lang3.StringUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .BUCKET_ALREADY_EXISTS;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .VOLUME_ALREADY_EXISTS;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .VOLUME_NOT_FOUND;
+import static org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes
+    .BUCKET_NOT_FOUND;
+
+/**
+ * Basic Implementation of the OzoneFileSystem calls.
+ *
+ * This is the minimal version which doesn't include any statistics.
+ *
+ * For full featured version use OzoneClientAdapterImpl.
+ */
+public class BasicRootedOzoneClientAdapterImpl
+    implements RootedOzoneClientAdapter {
+
+  static final Logger LOG =
+      LoggerFactory.getLogger(BasicRootedOzoneClientAdapterImpl.class);
+
+  private OzoneClient ozoneClient;
+  private ClientProtocol proxy;
+  private ObjectStore objectStore;
+  private ReplicationType replicationType;
+  private ReplicationFactor replicationFactor;
+  private boolean securityEnabled;
+  private int configuredDnPort;
+
+  /**
+   * Create new OzoneClientAdapter implementation.
+   *
+   * @throws IOException In case of a problem.
+   */
+  public BasicRootedOzoneClientAdapterImpl() throws IOException {
+    this(createConf());
+  }
+
+  private static OzoneConfiguration createConf() {
+    ClassLoader contextClassLoader =
+        Thread.currentThread().getContextClassLoader();
+    Thread.currentThread().setContextClassLoader(null);
+    try {
+      return new OzoneConfiguration();
+    } finally {
+      Thread.currentThread().setContextClassLoader(contextClassLoader);
+    }
+  }
+
+  public BasicRootedOzoneClientAdapterImpl(OzoneConfiguration conf)
+      throws IOException {
+    this(null, -1, conf);
+  }
+

[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #415: HDDS-2840. Implement ofs://: mkdir

2020-01-29 Thread GitBox
smengcl commented on a change in pull request #415: HDDS-2840. Implement 
ofs://: mkdir
URL: https://github.com/apache/hadoop-ozone/pull/415#discussion_r372543587
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java
 ##
 @@ -0,0 +1,477 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.ozone;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.hadoop.fs.CommonConfigurationKeysPublic;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.TestDataUtil;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClientException;
+import org.apache.hadoop.ozone.client.OzoneKeyDetails;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.Set;
+import java.util.TreeSet;
+
+import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+/**
+ * Ozone file system tests that are not covered by contract tests.
+ */
+public class TestRootedOzoneFileSystem {
 
 Review comment:
   Added TODO.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2936) Hive queries fail at readFully

2020-01-29 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee reassigned HDDS-2936:
-

Assignee: Shashikant Banerjee  (was: Istvan Fajth)

> Hive queries fail at readFully
> --
>
> Key: HDDS-2936
> URL: https://issues.apache.org/jira/browse/HDDS-2936
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Istvan Fajth
>Assignee: Shashikant Banerjee
>Priority: Critical
>
> When running Hive queries on a 1TB dataset for TPC-DS tests, we started to 
> see an exception coming out of FSInputStream.readFully.
> This does not happen with a smaller 100GB dataset, so files spanning 
> multiple blocks are possibly the cause of the trouble. The issue was not 
> seen with a build from early December, so a recent change since then is most 
> likely to blame. The build I am running is from hash 
> 929f2f85d0379aab5aabeded8a4d3a505606 of the master branch, but with 
> HDDS-2188 reverted from the code.
> The exception I see:
> {code}
> Error while running task ( failure ) : 
> attempt_1579615091731_0060_9_05_29_3:java.lang.RuntimeException: 
> java.lang.RuntimeException: java.io.IOException: java.io.EOFException: End of 
> file reached before reading fully.
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250)
> at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:422)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1876)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
> at 
> org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
> at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108)
> at 
> com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41)
> at 
> com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: java.io.IOException: 
> java.io.EOFException: End of file reached before reading fully.
> at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206)
> at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.(TezGroupedSplitsInputFormat.java:145)
> at 
> org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111)
> at 
> org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:157)
> at 
> org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83)
> at 
> org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:703)
> at 
> org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:662)
> at 
> org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:150)
> at 
> org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:114)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:532)
> at 
> org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:178)
> at 
> org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:266)
> ... 16 more
> Caused by: java.io.IOException: java.io.EOFException: End of file reached 
> before reading fully.
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97)
> at 
> org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExcep
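
The "End of file reached before reading fully." message above comes from the
readFully contract: the call must fill the requested length or fail. A
simplified sketch of that contract follows; it is not the actual Hadoop
source, and it assumes a plain InputStream rather than a positioned stream:

{code}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

final class ReadFully {
  private ReadFully() { }

  // Loop over read() until `len` bytes are copied; a premature end of
  // stream is an error rather than a short read.
  static void readFully(InputStream in, byte[] buf, int off, int len)
      throws IOException {
    int done = 0;
    while (done < len) {
      int n = in.read(buf, off + done, len - done);
      if (n < 0) {
        throw new EOFException("End of file reached before reading fully.");
      }
      done += n;
    }
  }
}
{code}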

[GitHub] [hadoop-ozone] adoroszlai commented on issue #466: HDDS-2869. Handle pipeline bootstrap from SCM and create pipeline use case in Recon.

2020-01-29 Thread GitBox
adoroszlai commented on issue #466: HDDS-2869. Handle pipeline bootstrap from 
SCM and create pipeline use case in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/466#issuecomment-579865727
 
 
   Thanks @avijayanhwx for the contribution, and @arp7 and @swagle for the 
review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2869) Handle pipeline bootstrap from SCM and create pipeline use case in Recon

2020-01-29 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2869:
---
Resolution: Implemented
Status: Resolved  (was: Patch Available)

> Handle pipeline bootstrap from SCM and create pipeline use case in Recon
> 
>
> Key: HDDS-2869
> URL: https://issues.apache.org/jira/browse/HDDS-2869
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Whenever Recon starts up, it asks SCM for the set of pipelines it has. CREATE 
> pipeline needs to be handled explicitly in Recon, since it is initiated in 
> the SCM and Recon does not know about it. Whenever Recon sees a new pipeline, 
> it makes an RPC call to SCM to verify the pipeline, because it is not 
> possible to get a heartbeat from a Datanode containing a pipeline ID that SCM 
> does not yet know about.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
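
A hedged sketch of the bootstrap-and-verify flow described in HDDS-2869:
Recon seeds its pipeline set from SCM at startup, and any pipeline it later
encounters that is not in that set triggers a verification RPC back to SCM.
All interface and method names below are illustrative assumptions, not the
real Recon/SCM APIs:

{code}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

class ReconPipelineSync {
  interface ScmClient {
    List<String> listPipelineIds();            // startup snapshot
    boolean pipelineExists(String pipelineId); // verification RPC
  }

  private final Set<String> knownPipelines = new HashSet<>();

  void bootstrap(ScmClient scm) {
    knownPipelines.addAll(scm.listPipelineIds());
  }

  // Called when a pipeline id shows up that Recon has not seen before.
  boolean onPipelineSeen(ScmClient scm, String pipelineId) {
    if (knownPipelines.contains(pipelineId)) {
      return true;
    }
    boolean known = scm.pipelineExists(pipelineId);
    if (known) {
      knownPipelines.add(pipelineId);
    }
    return known;
  }
}
{code}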



[jira] [Updated] (HDDS-2869) Handle pipeline bootstrap from SCM and create pipeline use case in Recon

2020-01-29 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2869?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2869:
---
Labels:   (was: pull-request-available)

> Handle pipeline bootstrap from SCM and create pipeline use case in Recon
> 
>
> Key: HDDS-2869
> URL: https://issues.apache.org/jira/browse/HDDS-2869
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Recon
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Whenever Recon starts up, it asks SCM for the set of pipelines it has. CREATE 
> pipeline needs to be handled explicitly in Recon, since it is initiated in 
> the SCM and Recon does not know about it. Whenever Recon sees a new pipeline, 
> it makes an RPC call to SCM to verify the pipeline, because it is not 
> possible to get a heartbeat from a Datanode containing a pipeline ID that SCM 
> does not yet know about.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai commented on issue #466: HDDS-2869. Handle pipeline bootstrap from SCM and create pipeline use case in Recon.

2020-01-29 Thread GitBox
adoroszlai commented on issue #466: HDDS-2869. Handle pipeline bootstrap from 
SCM and create pipeline use case in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/466#issuecomment-579865095
 
 
   > I have not wired up the DN-> Recon path by default
   
   Thanks, that explains it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai merged pull request #466: HDDS-2869. Handle pipeline bootstrap from SCM and create pipeline use case in Recon.

2020-01-29 Thread GitBox
adoroszlai merged pull request #466: HDDS-2869. Handle pipeline bootstrap from 
SCM and create pipeline use case in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/466
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] avijayanhwx commented on issue #466: HDDS-2869. Handle pipeline bootstrap from SCM and create pipeline use case in Recon.

2020-01-29 Thread GitBox
avijayanhwx commented on issue #466: HDDS-2869. Handle pipeline bootstrap from 
SCM and create pipeline use case in Recon.
URL: https://github.com/apache/hadoop-ozone/pull/466#issuecomment-579863624
 
 
   > Thanks @avijayanhwx for implementing this. I tried it locally on a 
docker-compose cluster and found that Recon does not get new pipeline 
information.
   > 
   > 1. Since the recon container no longer waits at startup, it received 0 
pipelines initially:
   >```
   >recon_1 | 2020-01-29 08:41:05,263 [main] INFO 
scm.ReconStorageContainerManagerFacade: Obtained 0 pipelines from SCM.
   >recon_1 | 2020-01-29 08:41:05,264 [main] INFO 
scm.ReconPipelineManager: Recon has 0 pipelines in house.
   >```
   >
   >
   >That would be OK, but when the pipelines were created a bit later, they 
did not show up in Recon.  (I also checked with `WAITFOR` restored, 3+1 initial 
pipelines were received in that case, but I feel it's a bit timing-dependent.)
   > 2. Closed the initial 3-node pipeline via `scmcli`, which eventually 
triggered creation of new one in SCM, but Recon never noticed.
   > 
   > Can you please check?
   
   @adoroszlai I have not wired up the DN -> Recon path by default, since that 
may cause some unintended test failures before the whole "Recon as a Passive 
SCM" feature is finished. In my local setup, I test with the following configs 
added:
   
   > OZONE-SITE.XML_ozone.recon.datanode.address=recon:9891
   > OZONE-SITE.XML_ozone.recon.address=recon:9891
   > OZONE-SITE.XML_ozone.recon.datanode.bind.host=recon 
   
   That will lead to pipelines being picked up by Recon.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2955) Unnecessary log messages in DBStoreBuilder

2020-01-29 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-2955:
---
Description: 
DBStoreBuilder logs some table-related messages at INFO level. This is fine 
for DBs that are created once per run, e.g. OM or SCM, but Recon builds a new 
DB for each OM snapshot:

{code}
recon_1 | 2020-01-29 15:20:32,466 [pool-7-thread-1] INFO 
impl.OzoneManagerServiceProviderImpl: Got new checkpoint from OM : 
/data/metadata/recon/om.snapshot.db_1580311232241
recon_1 | 2020-01-29 15:20:32,475 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: userTable
recon_1 | 2020-01-29 15:20:32,475 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:userTable
recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: volumeTable
recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:volumeTable
recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: bucketTable
recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:bucketTable
recon_1 | 2020-01-29 15:20:32,477 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: keyTable
recon_1 | 2020-01-29 15:20:32,477 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:keyTable
recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: deletedTable
recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:deletedTable
recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: openKeyTable
recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:openKeyTable
recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: s3Table
recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:s3Table
recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: multipartInfoTable
recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:multipartInfoTable
recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: dTokenTable
recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:dTokenTable
recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: s3SecretTable
recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:s3SecretTable
recon_1 | 2020-01-29 15:20:32,481 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: prefixTable
recon_1 | 2020-01-29 15:20:32,481 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:prefixTable
recon_1 | 2020-01-29 15:20:32,482 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: default
recon_1 | 2020-01-29 15:20:32,482 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:default
recon_1 | 2020-01-29 15:20:32,482 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default options. DBProfile.DISK
recon_1 | 2020-01-29 15:20:32,514 [pool-7-thread-1] INFO 
recovery.ReconOmMetadataManagerImpl: Created OM DB snapshot at 
/data/metadata/recon/om.snapshot.db_1580311232241.
{code}

  was:
DBStoreBuilder logs some table-related at INFO level.  This is fine for DBs 
that are created once per run, eg. OM or SCM, but Recon builds a new DB it for 
each OM snapshot:

{code}
recon_1 | 2020-01-29 15:20:32,466 [pool-7-thread-1] INFO 
impl.OzoneManagerServiceProviderImpl: Got new checkpoint from OM : 
/data/metadata/recon/om.snapshot.db_1580311232241
recon_1 | 2020-01-29 15:20:32,475 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: userTable
recon_1 | 2020-01-29 15:20:32,475 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:userTable
recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: volumeTable
recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default

[jira] [Created] (HDDS-2955) Unnecessary log messages in DBStoreBuilder

2020-01-29 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-2955:
--

 Summary: Unnecessary log messages in DBStoreBuilder
 Key: HDDS-2955
 URL: https://issues.apache.org/jira/browse/HDDS-2955
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


DBStoreBuilder logs some table-related messages at INFO level. This is fine 
for DBs that are created once per run, e.g. OM or SCM, but Recon builds a new 
DB for each OM snapshot:

{code}
recon_1 | 2020-01-29 15:20:32,466 [pool-7-thread-1] INFO 
impl.OzoneManagerServiceProviderImpl: Got new checkpoint from OM : 
/data/metadata/recon/om.snapshot.db_1580311232241
recon_1 | 2020-01-29 15:20:32,475 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: userTable
recon_1 | 2020-01-29 15:20:32,475 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:userTable
recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: volumeTable
recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:volumeTable
recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: bucketTable
recon_1 | 2020-01-29 15:20:32,476 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:bucketTable
recon_1 | 2020-01-29 15:20:32,477 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: keyTable
recon_1 | 2020-01-29 15:20:32,477 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:keyTable
recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: deletedTable
recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:deletedTable
recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: openKeyTable
recon_1 | 2020-01-29 15:20:32,478 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:openKeyTable
recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: s3Table
recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:s3Table
recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: multipartInfoTable
recon_1 | 2020-01-29 15:20:32,479 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:multipartInfoTable
recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: dTokenTable
recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:dTokenTable
recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: s3SecretTable
recon_1 | 2020-01-29 15:20:32,480 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:s3SecretTable
recon_1 | 2020-01-29 15:20:32,481 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: prefixTable
recon_1 | 2020-01-29 15:20:32,481 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:prefixTable
recon_1 | 2020-01-29 15:20:32,482 [pool-7-thread-1] INFO db.DBStoreBuilder: 
using custom profile for table: default
recon_1 | 2020-01-29 15:20:32,482 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default column profile:DBProfile.DISK for Table:default
recon_1 | 2020-01-29 15:20:32,482 [pool-7-thread-1] INFO db.DBStoreBuilder: 
Using default options. DBProfile.DISK
recon_1 | 2020-01-29 15:20:32,514 [pool-7-thread-1] INFO 
recovery.ReconOmMetadataManagerImpl: Created OM DB snapshot at 
/data/metadata/recon/om.snapshot.db_1580311232241.
{code}
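
A minimal sketch of a possible fix, assuming it is enough to demote the
per-table messages from INFO to DEBUG; the statements below mirror the log
output quoted above, not necessarily the actual DBStoreBuilder code:

{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class TableProfileLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(TableProfileLogging.class);

  private TableProfileLogging() { }

  // Per-table messages demoted to DEBUG so that per-snapshot DB builds
  // (as in Recon) do not flood the INFO log.
  static void logTableProfile(String tableName, Object dbProfile) {
    LOG.debug("Using custom profile for table: {}", tableName);
    LOG.debug("Using default column profile: {} for table: {}",
        dbProfile, tableName);
  }
}
{code}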



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2946) Rename audit log should contain both srcKey and dstKey not just key

2020-01-29 Thread Istvan Fajth (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Fajth reassigned HDDS-2946:
--

Assignee: Istvan Fajth

> Rename audit log should contain both srcKey and dstKey not just key
> ---
>
> Key: HDDS-2946
> URL: https://issues.apache.org/jira/browse/HDDS-2946
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
>  Labels: newbie
>
> Currently a rename key operation logs just the key to be renamed. The audit 
> log should contain both the source and the destination for a rename 
> operation if we want proper traceability over a file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
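
A hedged sketch of what the improved audit entry could carry, assuming the
rename audit parameters are a simple string map; the "srcKey"/"dstKey" names
and the helper below are illustrative, not the actual OM audit code:

{code}
import java.util.LinkedHashMap;
import java.util.Map;

final class RenameAuditParams {
  private RenameAuditParams() { }

  // Record both ends of the rename so the audit log gives full
  // traceability for the file.
  static Map<String, String> build(String volume, String bucket,
      String srcKey, String dstKey) {
    Map<String, String> params = new LinkedHashMap<>();
    params.put("volume", volume);
    params.put("bucket", bucket);
    params.put("srcKey", srcKey);  // key being renamed
    params.put("dstKey", dstKey);  // rename target
    return params;
  }
}
{code}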



[jira] [Comment Edited] (HDDS-2939) Ozone FS namespace

2020-01-29 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025842#comment-17025842
 ] 

Yiqun Lin edited comment on HDDS-2939 at 1/29/20 12:46 PM:
---

Hi [~sdeka], I am reading this design doc; some comments from me:

For the Filesystem Namespace Operations, the ls (list files/folders) operation 
will also be a common operation. But under the current implementation, to list 
a directory, for example, we have to traverse the whole directory/file table to 
look up the child files/sub-folders. This is an inefficient way. I know this 
lookup scheme can greatly reduce the memory used, but it is not friendly to the 
ls operation.

Do we have any other improvement for this? Can we additionally store the child 
IDs for each record in the directory table? That would help us quickly find the 
child files or child folders.

{quote}Associating a lock with each parent prefix being accessed by an 
operation in the OM, is sufficient to control concurrent operations on the same 
prefix. When the OM starts to process create “/a/b/c/1.txt”, a prefix lock is 
taken for “/a/b/c”...
{quote}
For concurrency control, we create a lock for each parent prefix level. There 
will be a large number of lock instances to maintain in OM memory once there 
are millions of directories. The current way is a very fine-grained locking 
scheme; have we considered a partitioned-namespace approach? Divide the whole 
namespace into logical sub-namespaces by prefix key, so that each sub-namespace 
has its own lock. This is a compromise between just having a global exclusive 
lock and having an uncontrollable number of locks that depends on the number of 
parent prefixes.

Is there a future plan to have a way (API or command tool) to convert object 
keys to the Ozone FS namespace? Object store is currently the major use case 
for users, and they may want filesystem-style access to their data without 
moving it.


was (Author: linyiqun):
Hi [~sdeka], I am reading this design doc; some comments from me:

For the Filesystem Namespace Operations, the ls (list files/folders) operation 
will also be a common operation. But under the current implementation, to list 
a directory, for example, we have to traverse the whole directory/file table to 
look up the child files/sub-folders. This is an inefficient way. Do we have any 
other improvement for this? Can we additionally store the child IDs for each 
record in the directory table? That would help us quickly find the child files 
or child folders.

{quote}
Associating a lock with each parent prefix being accessed by an operation in 
the OM, is sufficient to control concurrent operations on the same prefix. When 
the OM starts to process create “/a/b/c/1.txt”, a prefix lock is taken for 
“/a/b/c”...
{quote}

For concurrency control, we create a lock for each parent prefix level. There 
will be a large number of lock instances to maintain in OM memory once there 
are millions of directories. The current way is a very fine-grained locking 
scheme; have we considered a partitioned-namespace approach? Divide the whole 
namespace into logical sub-namespaces by prefix key, so that each sub-namespace 
has its own lock. This is a compromise between just having a global exclusive 
lock and having an uncontrollable number of locks that depends on the number of 
parent prefixes.

Is there a future plan to have a way (API or command tool) to convert object 
keys to the Ozone FS namespace? Object store is currently the major use case 
for users, and they may want filesystem-style access to their data without 
moving it.



> Ozone FS namespace
> --
>
> Key: HDDS-2939
> URL: https://issues.apache.org/jira/browse/HDDS-2939
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Attachments: Ozone FS Namespace Proposal v1.0.docx
>
>
> Create the structures and metadata layout required to support efficient FS 
> namespace operations in Ozone - operations involving folders/directories 
> required to support the Hadoop compatible Filesystem interface.
> The details are described in the attached document. The work is divided up 
> into sub-tasks as per the task list in the document.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org
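
A hedged sketch of the partitioned-namespace locking idea raised in the
comment above: instead of one lock object per parent prefix, hash each prefix
onto a fixed pool of lock stripes. Memory stays bounded while the locking
remains finer-grained than a single global lock. Illustrative only, not the
OM's actual lock manager:

{code}
import java.util.concurrent.locks.ReentrantReadWriteLock;

class StripedPrefixLock {
  private final ReentrantReadWriteLock[] stripes;

  StripedPrefixLock(int stripeCount) {
    stripes = new ReentrantReadWriteLock[stripeCount];
    for (int i = 0; i < stripeCount; i++) {
      stripes[i] = new ReentrantReadWriteLock();
    }
  }

  // Map a parent prefix such as "/a/b/c" to its stripe; all prefixes in
  // the same stripe share one read/write lock.
  ReentrantReadWriteLock forPrefix(String parentPrefix) {
    int index = Math.floorMod(parentPrefix.hashCode(), stripes.length);
    return stripes[index];
  }
}
{code}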



[jira] [Commented] (HDDS-2939) Ozone FS namespace

2020-01-29 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17025842#comment-17025842
 ] 

Yiqun Lin commented on HDDS-2939:
-

Hi [~sdeka], I am reading this design doc; some comments from me:

For the Filesystem Namespace Operations, the ls (list files/folders) operation 
will also be a common operation. But under the current implementation, to list 
a directory, for example, we have to traverse the whole directory/file table to 
look up the child files/sub-folders. This is an inefficient way. Do we have any 
other improvement for this? Can we additionally store the child IDs for each 
record in the directory table? That would help us quickly find the child files 
or child folders.

{quote}
Associating a lock with each parent prefix being accessed by an operation in 
the OM, is sufficient to control concurrent operations on the same prefix. When 
the OM starts to process create “/a/b/c/1.txt”, a prefix lock is taken for 
“/a/b/c”...
{quote}

For concurrency control, we create a lock for each parent prefix level. There 
will be a large number of lock instances to maintain in OM memory once there 
are millions of directories. The current way is a very fine-grained locking 
scheme; have we considered a partitioned-namespace approach? Divide the whole 
namespace into logical sub-namespaces by prefix key, so that each sub-namespace 
has its own lock. This is a compromise between just having a global exclusive 
lock and having an uncontrollable number of locks that depends on the number of 
parent prefixes.

Is there a future plan to have a way (API or command tool) to convert object 
keys to the Ozone FS namespace? Object store is currently the major use case 
for users, and they may want filesystem-style access to their data without 
moving it.



> Ozone FS namespace
> --
>
> Key: HDDS-2939
> URL: https://issues.apache.org/jira/browse/HDDS-2939
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Supratim Deka
>Priority: Major
> Attachments: Ozone FS Namespace Proposal v1.0.docx
>
>
> Create the structures and metadata layout required to support efficient FS 
> namespace operations in Ozone - operations involving folders/directories 
> required to support the Hadoop compatible Filesystem interface.
> The details are described in the attached document. The work is divided up 
> into sub-tasks as per the task list in the document.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org