[GitHub] [hadoop-ozone] captainzmc commented on pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-08-11 Thread GitBox


captainzmc commented on pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#issuecomment-672572309


   > Thanks @captainzmc for working on this.
   > 
   > This review is mainly about API, and related suggestions are added inline.
   > 
   > Also, there is discussion on naming parameter of CLI in design-doc.
   
   Hi yisheng, I have replied in the design doc. We've discussed this before. 
In the end, we follow the [HDFS quota 
parameters](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands)
 so as to preserve existing user habits as much as possible.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] captainzmc commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-08-11 Thread GitBox


captainzmc commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r469003376



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
##
@@ -101,10 +100,11 @@ void createVolume(String volumeName, VolumeArgs args)
   /**
* Set Volume Quota.
* @param volumeName Name of the Volume
-   * @param quota Quota to be set for the Volume
+   * @param quotaInBytes The maximum size, in bytes, that this volume can use.
+   * @param quotaInCounts The maximum number of buckets in this volume.
* @throws IOException
*/
-  void setVolumeQuota(String volumeName, OzoneQuota quota)
+  void setVolumeQuota(String volumeName, long quotaInBytes, long quotaInCounts)

Review comment:
   Thanks @cxorm for the discussion; that's a good question.
   The previous set-quota feature in Ozone was only nominal, and this interface 
had no actual effect. The feature was incomplete, so no one used it before.
   One way to think about it is that this is a new API, and future work will 
gradually add new interfaces, including clrQuota. Users who want the full 
quota functionality will need to update their client.
   So I think we can treat this part as a new interface, without regard to 
backward compatibility.
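   As a rough sketch of the two-parameter API under discussion (the `Volume` 
model and the `QUOTA_RESET` sentinel here are illustrative assumptions, not 
actual Ozone client code):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the two-part volume quota proposed in the diff:
// a space quota (bytes) and a bucket-count quota. This is a sketch, not
// the actual Ozone ClientProtocol implementation.
public class VolumeQuotaSketch {
  // Assumption: a negative value means "quota not set", mirroring HDFS style.
  static final long QUOTA_RESET = -1L;

  static final class Volume {
    long quotaInBytes = QUOTA_RESET;
    long quotaInCounts = QUOTA_RESET;
  }

  final Map<String, Volume> volumes = new HashMap<>();

  void createVolume(String name) {
    volumes.put(name, new Volume());
  }

  // Mirrors the reviewed signature:
  // setVolumeQuota(volumeName, quotaInBytes, quotaInCounts)
  void setVolumeQuota(String volumeName, long quotaInBytes,
                      long quotaInCounts) {
    Volume v = volumes.get(volumeName);
    if (v == null) {
      throw new IllegalArgumentException("No such volume: " + volumeName);
    }
    v.quotaInBytes = quotaInBytes;
    v.quotaInCounts = quotaInCounts;
  }
}
```

   Keeping both limits as primitive `long` parameters, rather than a single 
quota object, matches the reviewed signature change.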





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4106) Volume space: Supports clearing spaceQuota

2020-08-11 Thread mingchao zhao (Jira)
mingchao zhao created HDDS-4106:
---

 Summary: Volume space: Supports clearing spaceQuota
 Key: HDDS-4106
 URL: https://issues.apache.org/jira/browse/HDDS-4106
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: mingchao zhao


Volume space quota supports deleting spaceQuota.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4105) Volume space: update spaceQuota should check the current quotaUsageInBytes

2020-08-11 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao updated HDDS-4105:

Summary: Volume space: update spaceQuota should check the current 
quotaUsageInBytes   (was: Volume space: update spaceQuota should check the 
current quotaUsage )

> Volume space: update spaceQuota should check the current quotaUsageInBytes 
> ---
>
> Key: HDDS-4105
> URL: https://issues.apache.org/jira/browse/HDDS-4105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: mingchao zhao
>Priority: Major
>
> Updating spaceQuota should check the current quotaUsage:
> 1. If quotaUsage > the new spaceQuota, the update is not allowed.
> 2. When updating spaceQuota, we need to refresh quotaUsage by summing the 
> size of all keys in the current volume.
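The first rule can be sketched as a small validation helper (illustrative 
only; the method name and exception type are assumptions, not actual 
OzoneManager code):

```java
// Sketch of the update-spaceQuota validation described in HDDS-4105.
// Illustrative only; not the actual OzoneManager implementation.
public class SpaceQuotaCheck {
  /**
   * Validates and returns the new quota.
   * @param quotaUsageInBytes current total size of all keys in the volume
   * @param newSpaceQuota proposed quota in bytes
   * @throws IllegalArgumentException if usage already exceeds the new quota
   */
  static long updateSpaceQuota(long quotaUsageInBytes, long newSpaceQuota) {
    if (quotaUsageInBytes > newSpaceQuota) {
      throw new IllegalArgumentException(
          "Cannot set spaceQuota below current usage: usage="
              + quotaUsageInBytes + ", newQuota=" + newSpaceQuota);
    }
    return newSpaceQuota;
  }
}
```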



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4105) Volume space: update spaceQuota should check the current quotaUsage

2020-08-11 Thread mingchao zhao (Jira)
mingchao zhao created HDDS-4105:
---

 Summary: Volume space: update spaceQuota should check the current 
quotaUsage 
 Key: HDDS-4105
 URL: https://issues.apache.org/jira/browse/HDDS-4105
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: mingchao zhao


Updating spaceQuota should check the current quotaUsage:
1. If quotaUsage > the new spaceQuota, the update is not allowed.
2. When updating spaceQuota, we need to refresh quotaUsage by summing the size 
of all keys in the current volume.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] timmylicheng commented on a change in pull request #1314: HDDS-3988: DN can distinguish SCMCommand from stale leader SCM

2020-08-11 Thread GitBox


timmylicheng commented on a change in pull request #1314:
URL: https://github.com/apache/hadoop-ozone/pull/1314#discussion_r468973335



##
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
##
@@ -470,6 +484,65 @@ public void execute(ExecutorService service, long time, 
TimeUnit unit)
 }
   }
 
+  /**
+   * After startup, the datanode needs to detect the latest leader SCM before
+   * handling any SCMCommand, so that it is not disturbed by a stale leader SCM.
+   *
+   * The rule: once a majority of SCMs are in HEARTBEAT state and the datanode
+   * has heard from the leader SCM (commandQueue is not empty), the datanode
+   * initializes termOfLeaderSCM with the max term found in commandQueue.
+   *
+   * The init process also works in non-HA mode; in that case, the term of
+   * all SCMCommands will be 0.
+   */
+  void initTermOfLeaderSCM() {
+    // only init once
+    if (termOfLeaderSCM.isPresent()) {
+      return;
+    }
+
+    AtomicInteger scmNum = new AtomicInteger(0);
+    AtomicInteger activeScmNum = new AtomicInteger(0);
+
+    getParent().getConnectionManager().getValues()
+        .forEach(endpoint -> {
+          if (endpoint.isPassive()) {
+            return;
+          }
+          scmNum.incrementAndGet();
+          if (endpoint.getState()
+              == EndpointStateMachine.EndPointStates.HEARTBEAT) {
+            activeScmNum.incrementAndGet();
+          }
+        });
+
+    // a majority of SCMs should be in HEARTBEAT state
+    if (activeScmNum.get() < scmNum.get() / 2 + 1) {
+      return;
+    }
+
+    // if commandQueue is not empty, init termOfLeaderSCM
+    // with the largest term found in commandQueue
+    commandQueue.stream()
+        .mapToLong(SCMCommand::getTerm)
+        .max()
+        .ifPresent(term -> termOfLeaderSCM = Optional.of(term));
+  }
+
+  /**
+   * Monotonically increase termOfLeaderSCM.
+   * Always record the latest term that has been seen.
+   */
+  void updateTermOfLeaderSCM(SCMCommand command) {

Review comment:
   Can be a private method

##
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/statemachine/StateContext.java
##
@@ -470,6 +484,65 @@ public void execute(ExecutorService service, long time, 
TimeUnit unit)
 }
   }
 
+  /**
+   * After startup, the datanode needs to detect the latest leader SCM before
+   * handling any SCMCommand, so that it is not disturbed by a stale leader SCM.
+   *
+   * The rule: once a majority of SCMs are in HEARTBEAT state and the datanode
+   * has heard from the leader SCM (commandQueue is not empty), the datanode
+   * initializes termOfLeaderSCM with the max term found in commandQueue.
+   *
+   * The init process also works in non-HA mode; in that case, the term of
+   * all SCMCommands will be 0.
+   */
+  void initTermOfLeaderSCM() {

Review comment:
   Can be a private method
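   The initialization rule in the quoted hunk (require a majority in 
HEARTBEAT, then take the max term from the queue) can be sketched 
independently of the datanode code; the class and method names here are 
illustrative:

```java
import java.util.List;
import java.util.Optional;

public class LeaderTermInit {
  // Returns the initial leader-SCM term, if it can be determined:
  // require a majority of (non-passive) SCMs to be active, then take the
  // largest term seen among the queued commands. An empty Optional means
  // "cannot initialize yet".
  static Optional<Long> initTerm(int scmCount, int activeScmCount,
                                 List<Long> queuedCommandTerms) {
    if (activeScmCount < scmCount / 2 + 1) {
      return Optional.empty(); // no majority in HEARTBEAT state yet
    }
    // max term found in the command queue; empty if no commands heard yet
    return queuedCommandTerms.stream().max(Long::compare);
  }
}
```

   In non-HA mode, `scmCount` is 1 and every queued command carries term 0, 
so the same rule initializes the term to 0.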
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4104) Provide a way to get the default value and key of java-based-configuration easily

2020-08-11 Thread maobaolong (Jira)
maobaolong created HDDS-4104:


 Summary: Provide a way to get the default value and key of 
java-based-configuration easily
 Key: HDDS-4104
 URL: https://issues.apache.org/jira/browse/HDDS-4104
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Affects Versions: 0.6.0
Reporter: maobaolong



- getDefaultValue
- getKeyName
- getValue
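A minimal sketch of the requested accessors (a hypothetical shape for 
illustration; the real java-based configuration in hadoop-hdds is built on 
`@Config` annotations, and the key name and default below are assumptions):

```java
import java.util.Map;

// Hypothetical sketch of exposing key name, default value, and live value
// for a java-based configuration entry. Not the actual hadoop-hdds API.
enum ConfigKey {
  SCM_NAMES("ozone.scm.names", "localhost");

  private final String key;
  private final String defaultValue;

  ConfigKey(String key, String defaultValue) {
    this.key = key;
    this.defaultValue = defaultValue;
  }

  String getKeyName() { return key; }

  String getDefaultValue() { return defaultValue; }

  // getValue: look up the live setting, falling back to the default
  String getValue(Map<String, String> conf) {
    return conf.getOrDefault(key, defaultValue);
  }
}
```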



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] vivekratnavel commented on pull request #1305: HDDS-4009. Recon Overview page: The volume, bucket and key counts are not accurate

2020-08-11 Thread GitBox


vivekratnavel commented on pull request #1305:
URL: https://github.com/apache/hadoop-ozone/pull/1305#issuecomment-672329724


   @avijayanhwx Thanks for the review! I have updated the patch addressing all 
your comments. Please take another look. Thanks!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] hanishakoneru merged pull request #1288: HDDS-4061. Pending delete blocks are not always included in #BLOCKCOUNT metadata

2020-08-11 Thread GitBox


hanishakoneru merged pull request #1288:
URL: https://github.com/apache/hadoop-ozone/pull/1288


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #1286: HDDS-4040. [OFS] BasicRootedOzoneFileSystem to support batchDelete

2020-08-11 Thread GitBox


smengcl commented on a change in pull request #1286:
URL: https://github.com/apache/hadoop-ozone/pull/1286#discussion_r468877549



##
File path: 
hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java
##
@@ -190,14 +190,19 @@ public InputStream readFile(String key) throws 
IOException {
 }
   }
 
+  @Deprecated
   protected void incrementCounter(Statistic objectsRead) {
 //noop: Use OzoneClientAdapterImpl which supports statistics.

Review comment:
   Done. Removed deprecated annotation.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #1286: HDDS-4040. [OFS] BasicRootedOzoneFileSystem to support batchDelete

2020-08-11 Thread GitBox


smengcl commented on a change in pull request #1286:
URL: https://github.com/apache/hadoop-ozone/pull/1286#discussion_r468876907



##
File path: 
hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
##
@@ -301,14 +302,19 @@ public InputStream readFile(String pathStr) throws 
IOException {
 }
   }
 
+  @Deprecated
   protected void incrementCounter(Statistic objectsRead) {
 //noop: Use OzoneClientAdapterImpl which supports statistics.

Review comment:
   I have removed the old method from all Impls. Only kept it in 
`BasicOzoneFileSystem`/`BasicRootedOzoneFileSystem`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #1286: HDDS-4040. [OFS] BasicRootedOzoneFileSystem to support batchDelete

2020-08-11 Thread GitBox


smengcl commented on a change in pull request #1286:
URL: https://github.com/apache/hadoop-ozone/pull/1286#discussion_r468875440



##
File path: 
hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java
##
@@ -220,17 +220,22 @@ protected InputStream createFSInputStream(InputStream 
inputStream) {
 return new OzoneFSInputStream(inputStream, statistics);
   }
 
+  @Deprecated
   protected void incrementCounter(Statistic statistic) {
  //don't do anything in this default implementation.

Review comment:
   Updated in 5145690b3ab5404754554229dea0823dea32d202.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] amaliujia commented on a change in pull request #1304: HDDS-1745. Add integration test for createDirectory for OM HA

2020-08-11 Thread GitBox


amaliujia commented on a change in pull request #1304:
URL: https://github.com/apache/hadoop-ozone/pull/1304#discussion_r468862251



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHAMetadataOnly.java
##
@@ -348,6 +362,57 @@ public void testJMXMetrics() throws Exception {
 Assert.assertTrue((long) flushCount >= 0);
   }
 
+  @Test
+  public void testOMCreateDirectory() throws Exception {
+    ObjectStore objectStore = getCluster().getRpcClient().getObjectStore();
+    String volumeName = "vol";
+    String bucketName = "buk";
+    String keyName = "test_dir";
+
+    objectStore.createVolume(volumeName);
+    objectStore.getVolume(volumeName).createBucket(bucketName);
+
+    OMRequest request = OMRequest.newBuilder().setCreateDirectoryRequest(

Review comment:
   Ah interesting.. 
   
   If I run 
   ```
   bucket.createDirectory("/dir1");
   bucket.createDirectory("/dir1");
   ```
   The test still passes without a complaint, but I would expect an exception 
to be thrown. 
   
   I will check the internals of OM to see how the CreateDirectory API works, 
and see if there is anything that can be improved.
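   For comparison, the JDK draws exactly this distinction: 
`Files.createDirectory` fails on a repeat call, while `Files.createDirectories` 
is idempotent (mkdirs-style semantics, which filesystem-oriented stores often 
follow):

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MkdirSemantics {
  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("demo").resolve("dir1");

    Files.createDirectories(dir); // first call creates the directory
    Files.createDirectories(dir); // repeat call is a no-op, no exception

    boolean threw = false;
    try {
      Files.createDirectory(dir); // strict variant: fails if it exists
    } catch (FileAlreadyExistsException e) {
      threw = true;
    }
    System.out.println(threw); // prints "true"
  }
}
```

   If OM's CreateDirectory intentionally follows the idempotent variant, the 
observed test behavior would be expected rather than a bug.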





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org






[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1298: HDDS-3869. Use different column families for datanode block and metadata

2020-08-11 Thread GitBox


errose28 commented on a change in pull request #1298:
URL: https://github.com/apache/hadoop-ozone/pull/1298#discussion_r468835003



##
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/helpers/KeyValueContainerUtil.java
##
@@ -91,8 +83,16 @@ public static void createContainerMetaData(File 
containerMetaDataPath, File
   " Path: " + chunksPath);
 }
 
-    MetadataStore store = MetadataStoreBuilder.newBuilder().setConf(conf)
-        .setCreateIfMissing(true).setDbFile(dbFile).build();
+    DatanodeStore store;
+    if (schemaVersion.equals(OzoneConsts.SCHEMA_V1)) {
+      store = new DatanodeStoreSchemaOneImpl(conf, dbFile.getAbsolutePath());
+    } else if (schemaVersion.equals(OzoneConsts.SCHEMA_V2)) {
+      store = new DatanodeStoreSchemaTwoImpl(conf, dbFile.getAbsolutePath());
+    } else {
+      throw new IllegalArgumentException(
+          "Unrecognized schema version for container: " + schemaVersion);
+    }

Review comment:
   Do we need to check schema version here, or will it always be the latest 
version?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-3409) Update download links

2020-08-11 Thread Dharmendra Shavkani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dharmendra Shavkani reassigned HDDS-3409:
-

Assignee: Dharmendra Shavkani

> Update download links
> -
>
> Key: HDDS-3409
> URL: https://issues.apache.org/jira/browse/HDDS-3409
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: website
>Reporter: Arpit Agarwal
>Assignee: Dharmendra Shavkani
>Priority: Major
>  Labels: newbie
>
> The download links for signatures/checksums/KEYS should be updated from 
> dist.apache.org to https://downloads.apache.org/hadoop/ozone/.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1298: HDDS-3869. Use different column families for datanode block and metadata

2020-08-11 Thread GitBox


errose28 commented on a change in pull request #1298:
URL: https://github.com/apache/hadoop-ozone/pull/1298#discussion_r468830227



##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConsts.java
##
@@ -140,7 +139,6 @@ public static Versioning getVersioning(boolean versioning) {
   }
 
   public static final String DELETING_KEY_PREFIX = "#deleting#";
-  public static final String DELETED_KEY_PREFIX = "#deleted#";
   public static final String DELETE_TRANSACTION_KEY_PREFIX = "#delTX#";
   public static final String BLOCK_COMMIT_SEQUENCE_ID_PREFIX = "#BCSID";

Review comment:
   Rename, because this is not actually a prefix, but a piece of metadata. 
Also make sure it is placed in the metadata table when used.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-4098) Improve om admin getserviceroles error message

2020-08-11 Thread Dharmendra Shavkani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dharmendra Shavkani reassigned HDDS-4098:
-

Assignee: Dharmendra Shavkani

> Improve om admin getserviceroles error message
> --
>
> Key: HDDS-4098
> URL: https://issues.apache.org/jira/browse/HDDS-4098
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Attila Doroszlai
>Assignee: Dharmendra Shavkani
>Priority: Minor
>  Labels: newbie
>
> Steps to reproduce:
> # Start sample docker cluster
> # Run {{ozone admin om getserviceroles}} with unknown service ID
> {code:title=repro}
> $ cd hadoop-ozone/dist/target/ozone-*/compose/ozone
> $ docker-compose up -d
> $ docker-compose exec scm bash
> bash-4.2$ ozone admin om getserviceroles --service-id=om
> Error: This command works only on OzoneManager HA cluster. Service ID 
> specified does not match with ozone.om.service.ids defined in the 
> configuration. Configured ozone.om.service.ids are[]bash-4.2$
> {code}
> * The message should include a space before {{[]}}, and a newline at the end 
> (prompt should appear in next line).
> * Wording of the message could also be improved.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4103) Update tools page with s3g description

2020-08-11 Thread Dharmendra Shavkani (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175812#comment-17175812
 ] 

Dharmendra Shavkani commented on HDDS-4103:
---

I just found that it was fixed by HDDS-4042.

> Update tools page with s3g description
> --
>
> Key: HDDS-4103
> URL: https://issues.apache.org/jira/browse/HDDS-4103
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.5.0
>Reporter: Dharmendra Shavkani
>Assignee: Dharmendra Shavkani
>Priority: Trivial
>
> On this web page - 
> [https://hadoop.apache.org/ozone/docs/0.5.0-beta/tools.html] description for 
> s3g is missing



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4103) Update tools page with s3g description

2020-08-11 Thread Dharmendra Shavkani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dharmendra Shavkani resolved HDDS-4103.
---
Resolution: Invalid

> Update tools page with s3g description
> --
>
> Key: HDDS-4103
> URL: https://issues.apache.org/jira/browse/HDDS-4103
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.5.0
>Reporter: Dharmendra Shavkani
>Assignee: Dharmendra Shavkani
>Priority: Trivial
>
> On this web page - 
> [https://hadoop.apache.org/ozone/docs/0.5.0-beta/tools.html] description for 
> s3g is missing



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] errose28 commented on a change in pull request #1298: HDDS-3869. Use different column families for datanode block and metadata

2020-08-11 Thread GitBox


errose28 commented on a change in pull request #1298:
URL: https://github.com/apache/hadoop-ozone/pull/1298#discussion_r468818966



##
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/metadata/SchemaOneDeletedBlocksTable.java
##
@@ -0,0 +1,194 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.container.metadata;
+
+import org.apache.hadoop.hdds.utils.MetadataKeyFilters;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.hdds.utils.db.BatchOperation;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.TableIterator;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.container.common.helpers.ChunkInfoList;
+
+import java.io.IOException;
+import java.util.List;
+
+/**
+ * For RocksDB instances written using DB schema version 1, all data is
+ * stored in the default column family. This differs from later schema
+ * versions, which put deleted blocks in a different column family.
+ * As a result, the block IDs used as keys for deleted blocks must be
+ * prefixed in schema version 1 so that they can be differentiated from
+ * regular blocks. However, these prefixes are not necessary in later schema
+ * versions, because the deleted blocks and regular blocks are in different
+ * column families.
+ * 
+ * Since clients must operate independently of the underlying schema version,
+ * this class is returned to clients by {@link DatanodeStoreSchemaOneImpl}
+ * instances, allowing them to access keys as if no prefix were
+ * required, while the table adds the prefix when necessary.
+ * This means the client should omit the deleted prefix when putting and
+ * getting keys, regardless of the schema version.
+ * 
+ * Note that this class only applies prefixes to keys passed in as
+ * parameters, never to keys it returns. This means that keys returned
+ * through iterators like {@link SchemaOneDeletedBlocksTable#getSequentialRangeKVs},
+ * {@link SchemaOneDeletedBlocksTable#getRangeKVs}, and
+ * {@link SchemaOneDeletedBlocksTable#iterator} will still be prefixed
+ * with {@link SchemaOneDeletedBlocksTable#DELETED_KEY_PREFIX}.
+ */
+public class SchemaOneDeletedBlocksTable implements Table<String, ChunkInfoList> {
+  public static final String DELETED_KEY_PREFIX = "#deleted#";
+
+  private final Table<String, ChunkInfoList> table;
+
+  public SchemaOneDeletedBlocksTable(Table<String, ChunkInfoList> table) {
+    this.table = table;
+  }
+
+  @Override
+  public void put(String key, ChunkInfoList value) throws IOException {
+    table.put(prefix(key), value);
+  }
+
+  @Override
+  public void putWithBatch(BatchOperation batch, String key,
+                           ChunkInfoList value)
+      throws IOException {
+    table.putWithBatch(batch, prefix(key), value);
+  }
+
+  @Override
+  public boolean isEmpty() throws IOException {
+    return table.isEmpty();
+  }
+
+  @Override
+  public void delete(String key) throws IOException {
+    table.delete(prefix(key));
+  }
+
+  @Override
+  public void deleteWithBatch(BatchOperation batch, String key)
+      throws IOException {
+    table.deleteWithBatch(batch, prefix(key));
+  }
+
+  /**
+   * Because the actual underlying table in this schema version is the
+   * default table where all keys are stored, this method will iterate
+   * through all keys in the database.
+   */
+  @Override
+  public TableIterator<String, ? extends Table.KeyValue<String, ChunkInfoList>>
+      iterator() {
+    return table.iterator();
+  }

Review comment:
   Throw UnsupportedOperationException for this implementation and the schema 
2 implementation.
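   The wrapper pattern in the quoted class (prefix keys on writes, pass other 
operations through) can be sketched with a plain `Map` standing in for the 
underlying RocksDB column family; the value type and method set here are 
simplified for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the key-prefixing decorator idea from SchemaOneDeletedBlocksTable,
// with a Map standing in for the default RocksDB column family.
public class PrefixedTable {
  static final String DELETED_KEY_PREFIX = "#deleted#";

  final Map<String, String> table;

  PrefixedTable(Map<String, String> table) {
    this.table = table;
  }

  private static String prefix(String key) {
    return DELETED_KEY_PREFIX + key;
  }

  // Callers use plain block IDs; the prefix is applied internally.
  void put(String key, String value) {
    table.put(prefix(key), value);
  }

  String get(String key) {
    return table.get(prefix(key));
  }

  void delete(String key) {
    table.remove(prefix(key));
  }
}
```

   As in the reviewed class, the prefix is applied only to keys passed in; a 
raw iteration over the backing store would still surface prefixed keys.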





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4103) Update tools page with s3g description

2020-08-11 Thread Dharmendra Shavkani (Jira)
Dharmendra Shavkani created HDDS-4103:
-

 Summary: Update tools page with s3g description
 Key: HDDS-4103
 URL: https://issues.apache.org/jira/browse/HDDS-4103
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.5.0
Reporter: Dharmendra Shavkani
Assignee: Dharmendra Shavkani


On this web page - [https://hadoop.apache.org/ozone/docs/0.5.0-beta/tools.html] 
description for s3g is missing



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4102) Normalize Keypath for lookupKey

2020-08-11 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4102:
-
Status: Patch Available  (was: Open)

> Normalize Keypath for lookupKey
> ---
>
> Key: HDDS-4102
> URL: https://issues.apache.org/jira/browse/HDDS-4102
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> When ozone.om.enable.filesystem.paths is enabled, OM normalizes the path and 
> stores the normalized key name.
> Now when a user tries to read the file from S3 using the keyName that was 
> used to create the key, it returns the error KEY_NOT_FOUND.
> The issue is that lookupKey needs to normalize the path when 
> ozone.om.enable.filesystem.paths is enabled, since this is a common API used 
> by both S3 and FS.






[jira] [Updated] (HDDS-4012) FLAKY-UT: TestWatchForCommit#test2WayCommitForTimeoutException

2020-08-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4012:
-
Labels: pull-request-available  (was: )

> FLAKY-UT: TestWatchForCommit#test2WayCommitForTimeoutException
> --
>
> Key: HDDS-4012
> URL: https://issues.apache.org/jira/browse/HDDS-4012
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Priority: Major
>  Labels: pull-request-available
>
> [INFO] Running org.apache.hadoop.ozone.client.rpc.TestWatchForCommit
> [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 191.617 s <<< FAILURE! - in 
> org.apache.hadoop.ozone.client.rpc.TestWatchForCommit
> [ERROR] 
> test2WayCommitForTimeoutException(org.apache.hadoop.ozone.client.rpc.TestWatchForCommit)
>   Time elapsed: 38.847 s  <<< ERROR!
> org.apache.ratis.protocol.GroupMismatchException: 
> bc6ce7e8-8a72-4287-9d17-f76681f43526: group-91575AE6096A not found.
>   at 
> org.apache.ratis.server.impl.RaftServerProxy$ImplMap.get(RaftServerProxy.java:127)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.getImplFuture(RaftServerProxy.java:274)
>   at 
> org.apache.ratis.server.impl.RaftServerProxy.getImpl(RaftServerProxy.java:283)
>   at 
> org.apache.hadoop.ozone.container.ContainerTestHelper.getRaftServerImpl(ContainerTestHelper.java:593)
>   at 
> org.apache.hadoop.ozone.container.ContainerTestHelper.isRatisFollower(ContainerTestHelper.java:608)
>   at 
> org.apache.hadoop.ozone.client.rpc.TestWatchForCommit.test2WayCommitForTimeoutException(TestWatchForCommit.java:302)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)






[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #1315: HDDS-4012. Normalize Keypath for lookupKey.

2020-08-11 Thread GitBox


bharatviswa504 opened a new pull request #1315:
URL: https://github.com/apache/hadoop-ozone/pull/1315


   ## What changes were proposed in this pull request?
   
   When ozone.om.enable.filesystem.paths is enabled, OM normalizes the path 
and stores the key name.
   
   Now when a user tries to read the file from S3 using the keyName that was 
used to create the key, it will return the error KEY_NOT_FOUND.
   
   The issue is that `lookupKey` also needs to normalize the path when 
ozone.om.enable.filesystem.paths is enabled. This is a common API used by S3/FS.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4102
   
   ## How was this patch tested?
   
   Added a test.
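   The idea behind the fix can be illustrated with a small, self-contained 
sketch. Note this is an illustration only: the helper name `normalizeKey` and 
the use of `java.nio.file.Paths` are assumptions, not the actual OM code.
   
```java
import java.nio.file.Paths;

// Illustrative sketch only: `normalizeKey` is a hypothetical helper, not
// the actual OM code. It shows the idea behind the fix: normalize the
// client-supplied key path the same way on write and on lookup, so that
// "a//b/./c" and "a/b/c" resolve to the same stored key name.
public class KeyPathNormalizer {

  static String normalizeKey(String keyName) {
    String normalized = Paths.get(keyName).normalize().toString();
    // Stored key names carry no leading slash.
    if (normalized.startsWith("/")) {
      normalized = normalized.substring(1);
    }
    return normalized;
  }

  public static void main(String[] args) {
    System.out.println(normalizeKey("a//b/./c"));  // a/b/c
  }
}
```
   
   With this applied on both the write path and `lookupKey`, the S3 client's 
un-normalized key name maps back to the normalized key stored by OM.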
   






[jira] [Updated] (HDDS-4102) Normalize Keypath for lookupKey

2020-08-11 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-4102:
-
Parent: HDDS-4097
Issue Type: Sub-task  (was: Bug)

> Normalize Keypath for lookupKey
> ---
>
> Key: HDDS-4102
> URL: https://issues.apache.org/jira/browse/HDDS-4102
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> When ozone.om.enable.filesystem.paths is enabled, OM normalizes the path and 
> stores the key name.
> Now when a user tries to read the file from S3 using the keyName that was 
> used to create the key, it will return the error KEY_NOT_FOUND.
> The issue is that lookupKey needs to normalize the path when 
> ozone.om.enable.filesystem.paths is enabled. This is a common API used by 
> S3/FS.






[jira] [Created] (HDDS-4102) Normalize Keypath for lookupKey

2020-08-11 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-4102:


 Summary: Normalize Keypath for lookupKey
 Key: HDDS-4102
 URL: https://issues.apache.org/jira/browse/HDDS-4102
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


When ozone.om.enable.filesystem.paths is enabled, OM normalizes the path and 
stores the key name.

Now when a user tries to read the file from S3 using the keyName that was used 
to create the key, it will return the error KEY_NOT_FOUND.

The issue is that lookupKey needs to normalize the path when 
ozone.om.enable.filesystem.paths is enabled. This is a common API used by S3/FS.







[GitHub] [hadoop-ozone] amaliujia commented on a change in pull request #1304: HDDS-1745. Add integration test for createDirectory for OM HA

2020-08-11 Thread GitBox


amaliujia commented on a change in pull request #1304:
URL: https://github.com/apache/hadoop-ozone/pull/1304#discussion_r468745598



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHAMetadataOnly.java
##
@@ -348,6 +362,57 @@ public void testJMXMetrics() throws Exception {
 Assert.assertTrue((long) flushCount >= 0);
   }
 
+  @Test
+  public void testOMCreateDirectory() throws Exception {
+ObjectStore objectStore = getCluster().getRpcClient().getObjectStore();
+String volumeName = "vol";
+String bucketName = "buk";
+String keyName = "test_dir";
+
+objectStore.createVolume(volumeName);
+objectStore.getVolume(volumeName).createBucket(bucketName);
+
+OMRequest request = OMRequest.newBuilder().setCreateDirectoryRequest(

Review comment:
   @bharatviswa504 thank you! 
   
   I was confused about which `createDirectory` API it is. Can you paste a link 
here so I have a better understanding of the context?








[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1304: HDDS-1745. Add integration test for createDirectory for OM HA

2020-08-11 Thread GitBox


bharatviswa504 commented on a change in pull request #1304:
URL: https://github.com/apache/hadoop-ozone/pull/1304#discussion_r468747830



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHAMetadataOnly.java
##
@@ -348,6 +362,57 @@ public void testJMXMetrics() throws Exception {
 Assert.assertTrue((long) flushCount >= 0);
   }
 
+  @Test
+  public void testOMCreateDirectory() throws Exception {
+ObjectStore objectStore = getCluster().getRpcClient().getObjectStore();
+String volumeName = "vol";
+String bucketName = "buk";
+String keyName = "test_dir";
+
+objectStore.createVolume(volumeName);
+objectStore.getVolume(volumeName).createBucket(bucketName);
+
+OMRequest request = OMRequest.newBuilder().setCreateDirectoryRequest(

Review comment:
   
https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneBucket.java#L588
   








[GitHub] [hadoop-ozone] amaliujia commented on a change in pull request #1304: HDDS-1745. Add integration test for createDirectory for OM HA

2020-08-11 Thread GitBox


amaliujia commented on a change in pull request #1304:
URL: https://github.com/apache/hadoop-ozone/pull/1304#discussion_r468745598



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHAMetadataOnly.java
##
@@ -348,6 +362,57 @@ public void testJMXMetrics() throws Exception {
 Assert.assertTrue((long) flushCount >= 0);
   }
 
+  @Test
+  public void testOMCreateDirectory() throws Exception {
+ObjectStore objectStore = getCluster().getRpcClient().getObjectStore();
+String volumeName = "vol";
+String bucketName = "buk";
+String keyName = "test_dir";
+
+objectStore.createVolume(volumeName);
+objectStore.getVolume(volumeName).createBucket(bucketName);
+
+OMRequest request = OMRequest.newBuilder().setCreateDirectoryRequest(

Review comment:
   @bharatviswa504 thank you! 
   
   I was confused about which `createDirectory` API it is. Can you paste a 
link here?








[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-08-11 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r468441686



##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/OzoneQuota.java
##
@@ -82,25 +135,28 @@ public OzoneQuota(long size, Units unit) {
* @return string representation of quota
*/
   public static String formatQuota(OzoneQuota quota) {
-return String.valueOf(quota.size) + quota.unit;
+return String.valueOf(quota.getRawSize())+ quota.getUnit();
   }
 
   /**
* Parses a user provided string and returns the
* Quota Object.
*
-   * @param quotaString Quota String
+   * @param quotaInBytesStr Volume quota in bytes String

Review comment:
   ```suggestion
  * @param quotaInBytes Volume quota in bytes
   ```

##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/OzoneQuota.java
##
@@ -82,25 +135,28 @@ public OzoneQuota(long size, Units unit) {
* @return string representation of quota
*/
   public static String formatQuota(OzoneQuota quota) {
-return String.valueOf(quota.size) + quota.unit;
+return String.valueOf(quota.getRawSize())+ quota.getUnit();
   }
 
   /**
* Parses a user provided string and returns the
* Quota Object.
*
-   * @param quotaString Quota String
+   * @param quotaInBytesStr Volume quota in bytes String
+   * @param quotaInCounts Volume quota in counts
*
* @return OzoneQuota object
*/
-  public static OzoneQuota parseQuota(String quotaString) {
+  public static OzoneQuota parseQuota(String quotaInBytesStr,

Review comment:
   ```suggestion
 public static OzoneQuota parseQuota(String quotaInBytes,
   ```

##
File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/protocol/OzoneManagerProtocol.java
##
@@ -88,10 +88,12 @@
   /**
* Changes the Quota on a volume.
* @param volume - Name of the volume.
-   * @param quota - Quota in bytes.
+   * @param quotaInCounts - Volume quota in counts.
+   * @param quotaInBytes - Volume quota in bytes.
* @throws IOException
*/
-  void setQuota(String volume, long quota) throws IOException;
+  void setQuota(String volume, long quotaInCounts, long quotaInBytes)
+  throws IOException;

Review comment:
   The same as `ClientProtocol.java` part.
   For backward compatibility, I think we should add this method and also keep 
the original method.

##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -328,12 +332,17 @@ public boolean setVolumeOwner(String volumeName, String 
owner)
   }
 
   @Override
-  public void setVolumeQuota(String volumeName, OzoneQuota quota)
-  throws IOException {
-verifyVolumeName(volumeName);
-Preconditions.checkNotNull(quota);
-long quotaInBytes = quota.sizeInBytes();
-ozoneManagerClient.setQuota(volumeName, quotaInBytes);
+  public void setVolumeQuota(String volumeName, long quotaInCounts,

Review comment:
   As mention in interface side.

##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/client/OzoneQuota.java
##
@@ -112,6 +168,13 @@ public static OzoneQuota parseQuota(String quotaString) {
   found = true;
 }
 
+if (uppercase.endsWith(OZONE_QUOTA_KB)) {
+  size = uppercase
+  .substring(0, uppercase.length() - OZONE_QUOTA_KB.length());
+  currUnit = Units.KB;
+  found = true;
+}
+

Review comment:
   IMHO this part could be added before the MB check.
   
   And in lines 200 to 201, we should add `KB` in 
   ```
   throw new IllegalArgumentException("Quota unit not recognized. " +
   "Supported values are BYTES, MB, GB and TB.");
   ```
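   
   The KB handling the review asks for can be sketched with a simplified 
parser. The unit table, multipliers, and class/method names below are 
illustrative assumptions, not the real `OzoneQuota` implementation:
   
```java
// Simplified sketch of quota-string parsing with the KB unit included;
// the unit table, multipliers and names are illustrative, not the real
// OzoneQuota code. Longest suffixes are checked first so "KB" cannot
// shadow a longer unit name.
public class QuotaParser {

  private static final String[] UNITS = {"TB", "GB", "MB", "KB"};
  private static final long[] MULTIPLIERS = {
      1L << 40, 1L << 30, 1L << 20, 1L << 10};

  static long parseToBytes(String quota) {
    String upper = quota.toUpperCase();
    for (int i = 0; i < UNITS.length; i++) {
      if (upper.endsWith(UNITS[i])) {
        long size = Long.parseLong(
            upper.substring(0, upper.length() - UNITS[i].length()).trim());
        return size * MULTIPLIERS[i];
      }
    }
    // Error message updated to mention KB, per the review comment.
    throw new IllegalArgumentException("Quota unit not recognized. "
        + "Supported values are KB, MB, GB and TB.");
  }

  public static void main(String[] args) {
    System.out.println(parseToBytes("512KB"));  // 524288
  }
}
```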

##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/protocol/ClientProtocol.java
##
@@ -101,10 +100,11 @@ void createVolume(String volumeName, VolumeArgs args)
   /**
* Set Volume Quota.
* @param volumeName Name of the Volume
-   * @param quota Quota to be set for the Volume
+   * @param quotaInBytes The maximum size this volume can be used.
+   * @param quotaInCounts The maximum number of buckets in this volume.
* @throws IOException
*/
-  void setVolumeQuota(String volumeName, OzoneQuota quota)
+  void setVolumeQuota(String volumeName, long quotaInBytes, long quotaInCounts)

Review comment:
   For backward compatibility, I think we should add this method and also 
keep the original method.
   (Users might have implemented this interface and run it in production.)
   
   @xiaoyuyao, would you please be so kind to provide some thoughts :smile: 
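   
   One backward-compatible option (a sketch only, not the merged code) is to 
keep the legacy single-quota signature as a default method that delegates to 
the new overload. The interface name and the `Long.MAX_VALUE` "no count quota" 
sentinel are assumptions for illustration:
   
```java
import java.io.IOException;

// Sketch of the backward-compatibility idea: the legacy one-quota
// signature stays as a default method delegating to the new overload,
// so existing implementations and callers keep compiling.
interface VolumeQuotaClient {
  void setVolumeQuota(String volumeName, long quotaInBytes,
      long quotaInCounts) throws IOException;

  // Legacy signature preserved; "no bucket-count quota" is assumed to be
  // represented by Long.MAX_VALUE here.
  default void setVolumeQuota(String volumeName, long quotaInBytes)
      throws IOException {
    setVolumeQuota(volumeName, quotaInBytes, Long.MAX_VALUE);
  }
}

public class QuotaCompatDemo {
  public static void main(String[] args) throws IOException {
    long[] captured = new long[2];
    VolumeQuotaClient client = (vol, bytes, counts) -> {
      captured[0] = bytes;
      captured[1] = counts;
    };
    client.setVolumeQuota("vol1", 1024L);  // old-style call still works
    System.out.println(captured[1] == Long.MAX_VALUE);  // true
  }
}
```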

##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/OzoneVolume.java
##
@@ -118,19 +124,19 @@ public OzoneVolume(ConfigurationSource conf, 
ClientProtocol proxy,
   @SuppressWarnings("parameternumber")
   public 

[jira] [Resolved] (HDDS-4067) Implement toString for OMTransactionInfo

2020-08-11 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-4067.
--
Fix Version/s: 0.7.0
   Resolution: Fixed

> Implement toString for OMTransactionInfo
> 
>
> Key: HDDS-4067
> URL: https://issues.apache.org/jira/browse/HDDS-4067
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Bharat Viswanadham
>Assignee: Rui Wang
>Priority: Major
>  Labels: newbie, pull-request-available
> Fix For: 0.7.0
>
>
> During debugging, we see logs such as:
> {code:java}
> 23:30:36,175 INFO org.apache.hadoop.ozone.om.OzoneManager: Installing 
> checkpoint with OMTransactionInfo 
> org.apache.hadoop.ozone.om.ratis.OMTransactionInfo@e19e
> {code}
> It would be helpful to print the actual transaction info. For this, toString 
> needs to be implemented for OMTransactionInfo.
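
The requested toString can be as small as the sketch below. The fields (term 
and transaction index) and the "term#index" format are assumptions based on 
typical Ratis transaction metadata, not necessarily the merged implementation:

```java
// Minimal sketch of a toString() for OMTransactionInfo; field names and
// the "term#index" format are illustrative assumptions.
public class OMTransactionInfo {
  private final long term;
  private final long transactionIndex;

  public OMTransactionInfo(long term, long transactionIndex) {
    this.term = term;
    this.transactionIndex = transactionIndex;
  }

  @Override
  public String toString() {
    return term + "#" + transactionIndex;
  }

  public static void main(String[] args) {
    // The log line then shows the actual transaction info instead of the
    // default Object hash, e.g. "... with OMTransactionInfo 1#250".
    System.out.println(new OMTransactionInfo(1, 250));
  }
}
```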






[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #1300: HDDS-4067. Implement toString for OMTransactionInfo

2020-08-11 Thread GitBox


bharatviswa504 merged pull request #1300:
URL: https://github.com/apache/hadoop-ozone/pull/1300


   






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #1300: HDDS-4067. Implement toString for OMTransactionInfo

2020-08-11 Thread GitBox


bharatviswa504 commented on pull request #1300:
URL: https://github.com/apache/hadoop-ozone/pull/1300#issuecomment-672061049


   Thank You @amaliujia for the contribution.






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1304: HDDS-1745. Add integration test for createDirectory for OM HA

2020-08-11 Thread GitBox


bharatviswa504 commented on a change in pull request #1304:
URL: https://github.com/apache/hadoop-ozone/pull/1304#discussion_r468689720



##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerHAMetadataOnly.java
##
@@ -348,6 +362,57 @@ public void testJMXMetrics() throws Exception {
 Assert.assertTrue((long) flushCount >= 0);
   }
 
+  @Test
+  public void testOMCreateDirectory() throws Exception {
+ObjectStore objectStore = getCluster().getRpcClient().getObjectStore();
+String volumeName = "vol";
+String bucketName = "buk";
+String keyName = "test_dir";
+
+objectStore.createVolume(volumeName);
+objectStore.getVolume(volumeName).createBucket(bucketName);
+
+OMRequest request = OMRequest.newBuilder().setCreateDirectoryRequest(

Review comment:
   We do not need this kind of test; it is covered in UTs.
   
   We can use the `createDirectory` API and test the create-directory 
functionality.
   
   Test cases can be: 
   1. single level path ('/dir)
   2. Multi-level ('/dir1/dir2')
   3. Few parents exist in multi-level path
   4. Already a directory exists with the same name
   
   any other cases you want to add :)
   
   








[jira] [Commented] (HDDS-3725) Ozone sh volume client support quota option.

2020-08-11 Thread YiSheng Lien (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3725?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175631#comment-17175631
 ] 

YiSheng Lien commented on HDDS-3725:


Hi [~micahzhao], thanks for working on this.


I updated my thoughts about naming the CLI parameters on the design doc. 

(I think the design doc is uploaded here, so I commented on Jira instead of 
GitHub; feel free to share your thoughts :))

> Ozone sh volume client support quota option.
> 
>
> Key: HDDS-3725
> URL: https://issues.apache.org/jira/browse/HDDS-3725
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 49h
>  Remaining Estimate: 0h
>







[GitHub] [hadoop-ozone] avijayanhwx commented on a change in pull request #1305: HDDS-4009. Recon Overview page: The volume, bucket and key counts are not accurate

2020-08-11 Thread GitBox


avijayanhwx commented on a change in pull request #1305:
URL: https://github.com/apache/hadoop-ozone/pull/1305#discussion_r468639279



##
File path: 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/TableCountTask.java
##
@@ -0,0 +1,185 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.recon.tasks;
+
+import com.google.inject.Inject;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.hdds.utils.db.Table;
+import org.apache.hadoop.hdds.utils.db.TableIterator;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.recon.ReconUtils;
+import org.apache.hadoop.ozone.recon.recovery.ReconOMMetadataManager;
+import org.hadoop.ozone.recon.schema.tables.daos.GlobalStatsDao;
+import org.hadoop.ozone.recon.schema.tables.pojos.GlobalStats;
+import org.jooq.Configuration;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.Iterator;
+import java.util.Map.Entry;
+
+/**
+ * Class to iterate over the OM DB and store the total counts of volumes,
+ * buckets, keys, open keys, deleted keys, etc.
+ */
+public class TableCountTask implements ReconOmTask {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TableCountTask.class);
+
+  private GlobalStatsDao globalStatsDao;
+  private Configuration sqlConfiguration;
+  private HashMap objectCountMap;

Review comment:
   This can be a method local variable in process().

##
File path: 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/Table.java
##
@@ -82,6 +82,19 @@ void putWithBatch(BatchOperation batch, KEY key, VALUE value)
   VALUE get(KEY key) throws IOException;
 
 
+  /**
+   * Skip checking cache and get the value mapped to the given key in byte
+   * array or returns null if the key is not found.
+   *
+   * @param key metadata key
+   * @return value in byte array or null if the key is not found.
+   * @throws IOException on Failure
+   */
+  default VALUE getSkipCache(KEY key) throws IOException {
+throw new NotImplementedException("getSkipCache is not implemented");

Review comment:
   We should default to the get() method, or provide an implementation in 
org.apache.hadoop.hdds.utils.db.RDBTable as well. 
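   
   The first option, defaulting to get(), can be sketched as below. The 
interface is deliberately simplified (the real Table has many more methods); 
names here are illustrative:
   
```java
import java.io.IOException;

// Simplified sketch of the reviewer's first option: a table with no cache
// layer can safely default getSkipCache to get(), because a plain read is
// already uncached; only cache-backed tables need to override it.
interface SimpleTable<K, V> {
  V get(K key) throws IOException;

  default V getSkipCache(K key) throws IOException {
    return get(key);
  }
}

public class TableDefaultDemo {
  public static void main(String[] args) throws IOException {
    SimpleTable<String, String> table = key -> key + "-value";
    System.out.println(table.getSkipCache("k1"));  // k1-value
  }
}
```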

##
File path: 
hadoop-hdds/framework/src/main/java/org/apache/hadoop/hdds/utils/db/DBDefinition.java
##
@@ -43,4 +47,15 @@
*/
   DBColumnFamilyDefinition[] getColumnFamilies();
 
+  default Optional getKeyType(String table) {

Review comment:
   Can we add Javadoc for the new methods?

##
File path: 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/tasks/OMDBUpdatesHandler.java
##
@@ -262,12 +270,13 @@ public void markCommit(byte[] bytes) throws 
RocksDBException {
   }
 
   /**
-   * Return Key type class for a given table name.
-   * @param name table name.
-   * @return String.class by default.
+   * Return Key type class for the given table.
+   *
+   * @return keyType class.
*/
-  private Class getKeyType() {
-return String.class;
+  @VisibleForTesting
+  Optional getKeyType(String name) {

Review comment:
   Nit: we can get rid of these single-line methods. 

##
File path: 
hadoop-ozone/recon-codegen/src/main/java/org/hadoop/ozone/recon/schema/StatsSchemaDefinition.java
##
@@ -46,22 +48,31 @@
   @Override
   public void initializeSchema() throws SQLException {
 Connection conn = dataSource.getConnection();
+dslContext = DSL.using(conn);
 if (!TABLE_EXISTS_CHECK.test(conn, GLOBAL_STATS_TABLE_NAME)) {
-  createGlobalStatsTable(conn);
+  createGlobalStatsTable();
 }
   }
 
   /**
* Create the Ozone Global Stats table.
-   * @param conn connection
*/
-  private void createGlobalStatsTable(Connection conn) {
-DSL.using(conn).createTableIfNotExists(GLOBAL_STATS_TABLE_NAME)
+  private void createGlobalStatsTable() {
+dslContext.createTableIfNotExists(GLOBAL_STATS_TABLE_NAME)
 .column("key", 

[jira] [Updated] (HDDS-4056) Convert OzoneAdmin to pluggable model

2020-08-11 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4056:
---
Description: Ozone Shell's {{OzoneAdmin}} implements {{WithScmClient}} 
interface to be able to provide SCM client to sub-commands.  We can convert it 
to a {{Mixin}}, which would allow converting {{OzoneAdmin}} to the pluggable 
model introduced by HDDS-4046.  (was: Ozone Shell's {{OzoneAdmin}} implements 
{{WithScmClient}} interface to be able to provide SCM client to sub-commands.  
We can convert it to a {{Mixin}}, which would allow converting {{OzoneAdmin}} 
to the pluggable model in HDDS-4046.)

> Convert OzoneAdmin to pluggable model
> -
>
> Key: HDDS-4056
> URL: https://issues.apache.org/jira/browse/HDDS-4056
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> Ozone Shell's {{OzoneAdmin}} implements {{WithScmClient}} interface to be 
> able to provide SCM client to sub-commands.  We can convert it to a 
> {{Mixin}}, which would allow converting {{OzoneAdmin}} to the pluggable model 
> introduced by HDDS-4046.






[jira] [Updated] (HDDS-4056) Convert OzoneAdmin to pluggable model

2020-08-11 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4056:
---
Summary: Convert OzoneAdmin to pluggable model  (was: Refactor 
WithScmClient to mixin)

> Convert OzoneAdmin to pluggable model
> -
>
> Key: HDDS-4056
> URL: https://issues.apache.org/jira/browse/HDDS-4056
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> Ozone Shell's {{OzoneAdmin}} implements {{WithScmClient}} interface to be 
> able to provide SCM client to sub-commands.  We can convert it to a 
> {{Mixin}}, which would allow converting {{OzoneAdmin}} to the pluggable model 
> in HDDS-4046.






[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #1310: HDDS-4094. Support byte-level write in Freon HadoopFsGenerator

2020-08-11 Thread GitBox


adoroszlai commented on a change in pull request #1310:
URL: https://github.com/apache/hadoop-ozone/pull/1310#discussion_r46868



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/ContentGenerator.java
##
@@ -38,15 +39,23 @@
*/
   private int bufferSize;
 
+  /**
+   * Number of bytes to write in one call. Should be less than the bufferSize.

Review comment:
   ```suggestion
  * Number of bytes to write in one call. Should be no larger than the 
bufferSize.
   ```








[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #1310: HDDS-4094. Support byte-level write in Freon HadoopFsGenerator

2020-08-11 Thread GitBox


adoroszlai commented on a change in pull request #1310:
URL: https://github.com/apache/hadoop-ozone/pull/1310#discussion_r468584888



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/freon/ContentGenerator.java
##
@@ -56,7 +64,20 @@ public void write(OutputStream outputStream) throws IOException {
     for (long nrRemaining = keySize;
          nrRemaining > 0; nrRemaining -= bufferSize) {
       int curSize = (int) Math.min(bufferSize, nrRemaining);
-      outputStream.write(buffer, 0, curSize);
+      if (copyBufferSize == 1) {
+        for (int i = 0; i < curSize; i++) {
+          outputStream.write(buffer[i]);
+        }
+      } else {
+        for (int i = 0; i < nrRemaining; i += copyBufferSize) {
+          outputStream.write(buffer, i,
+              Math.min(copyBufferSize, (int) (nrRemaining - i)));

Review comment:
   `outputStream.write(buffer, i,` results in `IndexOutOfBoundsException` 
if `bufferSize < keySize`.
   
   I think it should be:
   
   ```suggestion
       for (int i = 0; i < curSize; i += copyBufferSize) {
         outputStream.write(buffer, i,
             Math.min(copyBufferSize, curSize - i));
   ```
   
   Can you please add a test case in `TestContentGenerator` for this?
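   The corrected bound can be exercised in isolation. Below is a minimal, self-contained sketch (`ChunkedWriteSketch` is a hypothetical name, not the real Freon `ContentGenerator`) showing why the inner copy must be limited by `curSize` rather than `nrRemaining` once the buffer is smaller than the key:

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;

/**
 * Minimal sketch of the corrected chunked-write loop discussed above.
 * ChunkedWriteSketch is a hypothetical class, not the actual Freon code.
 */
public class ChunkedWriteSketch {

  /**
   * Writes keySize bytes by repeating buffer, at most copyBufferSize bytes
   * per call. Bounding the inner loop by curSize (not nrRemaining) avoids
   * reading past the end of buffer when buffer.length < keySize.
   */
  static void write(byte[] buffer, long keySize, int copyBufferSize,
      ByteArrayOutputStream out) {
    for (long nrRemaining = keySize; nrRemaining > 0;
        nrRemaining -= buffer.length) {
      int curSize = (int) Math.min(buffer.length, nrRemaining);
      for (int i = 0; i < curSize; i += copyBufferSize) {
        out.write(buffer, i, Math.min(copyBufferSize, curSize - i));
      }
    }
  }

  public static void main(String[] args) {
    byte[] buffer = {0, 1, 2, 3, 4, 5, 6, 7};
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    write(buffer, 20, 3, out);  // keySize (20) > buffer length (8)
    byte[] expected = {0, 1, 2, 3, 4, 5, 6, 7,
                       0, 1, 2, 3, 4, 5, 6, 7,
                       0, 1, 2, 3};
    if (!Arrays.equals(expected, out.toByteArray())) {
      throw new AssertionError("unexpected output");
    }
    System.out.println("ok");
  }
}
```

   With the original `nrRemaining` bound, the inner index could exceed `buffer.length`, which is the `IndexOutOfBoundsException` described above.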

##
File path: 
hadoop-ozone/tools/src/test/java/org/apache/hadoop/ozone/freon/TestContentGenerator.java
##
@@ -0,0 +1,56 @@
+package org.apache.hadoop.ozone.freon;
+
+import java.io.ByteArrayOutputStream;
+import java.io.IOException;
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership.  The ASF
+ * licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations 
under
+ * the License.
+ */
+
+import org.junit.Assert;
+import org.junit.Test;
+
+/**
+ * Tests for the ContentGenerator class of Freon.
+ */
+public class TestContentGenerator {
+
+  @Test
+  public void writeWrite() throws IOException {
+    ContentGenerator generator = new ContentGenerator(1024, 1024);
+    ByteArrayOutputStream output = new ByteArrayOutputStream();
+
+    generator.write(output);
+    Assert.assertArrayEquals(generator.getBuffer(), output.toByteArray());
+  }
+
+  @Test
+  public void writeWithByteLevelWrite() throws IOException {
+    ContentGenerator generator = new ContentGenerator(1024, 1024, 1);
+    ByteArrayOutputStream output = new ByteArrayOutputStream();
+
+    generator.write(output);
+    Assert.assertArrayEquals(generator.getBuffer(), output.toByteArray());
+  }
+
+  @Test
+  public void writeWithSmallBuffer() throws IOException {
+    ContentGenerator generator = new ContentGenerator(1024, 1024, 10);
+    ByteArrayOutputStream output = new ByteArrayOutputStream();
+
+    generator.write(output);
+    Assert.assertArrayEquals(generator.getBuffer(), output.toByteArray());
+  }
+}

Review comment:
   ```suggestion

  @Test
  public void writeWithDistinctSizes() throws IOException {
    ContentGenerator generator = new ContentGenerator(20, 8, 3);
    ByteArrayOutputStream output = new ByteArrayOutputStream();

    generator.write(output);

    byte[] expected = new byte[20];
    byte[] buffer = generator.getBuffer();
    System.arraycopy(buffer, 0, expected, 0, buffer.length);
    System.arraycopy(buffer, 0, expected, 8, buffer.length);
    System.arraycopy(buffer, 0, expected, 16, 4);
    Assert.assertArrayEquals(expected, output.toByteArray());
  }
}
   ```








[GitHub] [hadoop-ozone] captainzmc removed a comment on pull request #1296: HDDS-4053. Volume space: add quotaUsageInBytes and update it when write and delete key.

2020-08-11 Thread GitBox


captainzmc removed a comment on pull request #1296:
URL: https://github.com/apache/hadoop-ozone/pull/1296#issuecomment-671969051


   hi @bharatviswa504 @arp7 Could you help review the PR about volume quota?






[GitHub] [hadoop-ozone] captainzmc commented on pull request #1296: HDDS-4053. Volume space: add quotaUsageInBytes and update it when write and delete key.

2020-08-11 Thread GitBox


captainzmc commented on pull request #1296:
URL: https://github.com/apache/hadoop-ozone/pull/1296#issuecomment-671969348


   hi @bharatviswa504 @arp7 @xiaoyuyao Could you help review the PR about 
volume quota?






[GitHub] [hadoop-ozone] captainzmc commented on pull request #1296: HDDS-4053. Volume space: add quotaUsageInBytes and update it when write and delete key.

2020-08-11 Thread GitBox


captainzmc commented on pull request #1296:
URL: https://github.com/apache/hadoop-ozone/pull/1296#issuecomment-671969051


   hi @bharatviswa504 @arp7 Could you help review the PR about volume quota?






[GitHub] [hadoop-ozone] captainzmc removed a comment on pull request #1296: HDDS-4053. Volume space: add quotaUsageInBytes and update it when write and delete key.

2020-08-11 Thread GitBox


captainzmc removed a comment on pull request #1296:
URL: https://github.com/apache/hadoop-ozone/pull/1296#issuecomment-671967077


   hi @bharatviswa504 @xiaoyuyao Could you help review the PR about volume 
quota? 






[GitHub] [hadoop-ozone] captainzmc commented on pull request #1296: HDDS-4053. Volume space: add quotaUsageInBytes and update it when write and delete key.

2020-08-11 Thread GitBox


captainzmc commented on pull request #1296:
URL: https://github.com/apache/hadoop-ozone/pull/1296#issuecomment-671967077


   hi @bharatviswa504 @xiaoyuyao Could you help review the PR about volume 
quota? 






[GitHub] [hadoop-ozone] maobaolong commented on pull request #1096: HDDS-3833. Use Pipeline choose policy to choose pipeline from exist pipeline list

2020-08-11 Thread GitBox


maobaolong commented on pull request #1096:
URL: https://github.com/apache/hadoop-ozone/pull/1096#issuecomment-671959997


   @elek Thank you very much for merging this PR. Based on this framework, we can create various pipeline choose policies.






[GitHub] [hadoop-ozone] captainzmc commented on pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-08-11 Thread GitBox


captainzmc commented on pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#issuecomment-671958313


   > Thanks @captainzmc for working on this patch.
   > 
   > This review is about test.
   > Some suggestions are added inline.
   
   Thanks for @cxorm’s  review. Fixed review issues.






[GitHub] [hadoop-ozone] elek merged pull request #1149: HDDS-3878. Make OMHA serviceID optional if one (but only one) is defined in the config

2020-08-11 Thread GitBox


elek merged pull request #1149:
URL: https://github.com/apache/hadoop-ozone/pull/1149


   






[jira] [Resolved] (HDDS-3878) Make OMHA serviceID optional if one (but only one) is defined in the config

2020-08-11 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-3878.
---
Fix Version/s: 0.6.0
   Resolution: Fixed

> Make OMHA serviceID optional if one (but only one) is defined in the config 
> 
>
> Key: HDDS-3878
> URL: https://issues.apache.org/jira/browse/HDDS-3878
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>
> om.serviceId is required in case of OM HA in all the client parameters, even
> if there is only one om.serviceId and it could be chosen automatically.
> My goal is:
>  1. Provide better usability
>  2. Simplify the documentation task ;-)
> Use the om.serviceId from the config if:
>  1. config is available
>  2. OM HA is configured
>  3. only one service is configured
> It also makes it easier to run the same tests with/without HA






[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1283: HDDS-4057. Failed acceptance test missing from bundle

2020-08-11 Thread GitBox


adoroszlai commented on pull request #1283:
URL: https://github.com/apache/hadoop-ozone/pull/1283#issuecomment-671940983


   > 1. `set +e` and `set -e`
   > 1. `failing1-2`
   
   I agree it would be nice to implement both: consistent `set -e` and test 
env. reorganization.  Created HDDS-4100 and HDDS-4101 for these.
   
   > Not clear why we need the `set +e` here if we introduce `--nostatusrc`
   
   `test.sh` execution also needs to "ignore" errors, otherwise we may not 
reach `rebot`.






[jira] [Created] (HDDS-4101) Consistently enable exit-on-error in test scripts

2020-08-11 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-4101:
--

 Summary: Consistently enable exit-on-error in test scripts
 Key: HDDS-4101
 URL: https://issues.apache.org/jira/browse/HDDS-4101
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: test
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Enable "fail fast" ({{set -e}}, {{set -o pipefail}}, etc.) for all test and 
check scripts.  Make sure that post-processing and cleanup steps are performed 
anyway.
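
The intent above can be sketched as a small bash pattern (a hypothetical script, not taken from the Ozone repo): enable the fail-fast options, and register an EXIT trap so post-processing and cleanup still run on failure.

```shell
#!/usr/bin/env bash
# Fail fast: abort on errors, unset variables, and failures inside pipelines.
set -euo pipefail

cleanup() {
  # The EXIT trap fires on both success and failure,
  # so post-processing is never skipped.
  echo "cleanup ran"
}
trap cleanup EXIT

echo "step 1"
# Opt out of fail-fast for a single command where a failure is expected:
false || echo "step 2 tolerated a failure"
echo "step 3"
```

With `set -e` alone, an early failure would skip any trailing cleanup commands; the EXIT trap is what makes the cleanup unconditional.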






[jira] [Created] (HDDS-4100) Reorganize compose environments

2020-08-11 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-4100:
--

 Summary: Reorganize compose environments
 Key: HDDS-4100
 URL: https://issues.apache.org/jira/browse/HDDS-4100
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: docker
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


bq. The original vision with compose was to provide easy-to-understand examples for the users for different use-cases. 
([source|https://github.com/apache/hadoop-ozone/pull/1283#pullrequestreview-465029071])

but currently some of the environments (eg. {{upgrade}}) are mostly for tests.  
Furthermore, HDDS-4057 includes new, purely test environments (to verify 
acceptance test failure handling).

It would be nice to reorganize environments into two separate categories: 
sample and test.






[jira] [Resolved] (HDDS-3833) Use Pipeline choose policy to choose pipeline from exist pipeline list

2020-08-11 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-3833.
---
Fix Version/s: 0.7.0
   Resolution: Fixed

> Use Pipeline choose policy to choose pipeline from exist pipeline list
> --
>
> Key: HDDS-3833
> URL: https://issues.apache.org/jira/browse/HDDS-3833
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>
> With this policy driven mode, we can develop various pipeline choosing policy 
> to satisfy complex production environment.






[GitHub] [hadoop-ozone] elek merged pull request #1096: HDDS-3833. Use Pipeline choose policy to choose pipeline from exist pipeline list

2020-08-11 Thread GitBox


elek merged pull request #1096:
URL: https://github.com/apache/hadoop-ozone/pull/1096


   






[GitHub] [hadoop-ozone] captainzmc commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-08-11 Thread GitBox


captainzmc commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r468557818



##
File path: hadoop-ozone/dist/src/main/smoketest/topology/loaddata.robot
##
@@ -25,7 +25,7 @@ Test Timeout    5 minutes
 
 *** Test Cases ***
 Create a volume, bucket and key
-${output} = Execute  ozone sh volume create topvol1 --quota 100TB
+${output} = Execute  ozone sh volume create topvol1

Review comment:
   This script is also used in the 0.5->0.6 upgrade. This script was used 
in the 0.5 environment and the 0.5 version does not support -spaceQuota.








[GitHub] [hadoop-ozone] captainzmc commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-08-11 Thread GitBox


captainzmc commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r468557818



##
File path: hadoop-ozone/dist/src/main/smoketest/topology/loaddata.robot
##
@@ -25,7 +25,7 @@ Test Timeout    5 minutes
 
 *** Test Cases ***
 Create a volume, bucket and key
-${output} = Execute  ozone sh volume create topvol1 --quota 100TB
+${output} = Execute  ozone sh volume create topvol1

Review comment:
   This script is also used in the 0.5->0.6 upgrade. This script was 
created in the 0.5 environment and the 0.5 version does not support spaceQuota.








[GitHub] [hadoop-ozone] elek commented on pull request #1088: HDDS-3805. [OFS] Remove usage of OzoneClientAdapter interface

2020-08-11 Thread GitBox


elek commented on pull request #1088:
URL: https://github.com/apache/hadoop-ozone/pull/1088#issuecomment-671916540


   /pending Yeah I will split the change into more jiras.






[GitHub] [hadoop-ozone] elek commented on pull request #1154: HDDS-3867. Extend the chunkinfo tool to display information from all nodes in the pipeline.

2020-08-11 Thread GitBox


elek commented on pull request #1154:
URL: https://github.com/apache/hadoop-ozone/pull/1154#issuecomment-671915950


   > @elek , I feel we can go ahead with this change and then try to remove the XceiverClient interface altogether in a separate jira. Your thoughts?
   
   We need some kind of interface if we would like to support different types of write paths. This interface might not be as low-level as today, but something is required (maybe just an OutputStream factory?).
   
   But it doesn't answer my original concern about mixing client-specific and admin-specific methods in the same interface. Even with a dedicated interface, if it's part of the client API we need to keep it backward compatible forever.
   
   I am not fully against it, but I think we need a strong argument for extending the client interface. For example, why don't we use a CLI tool on the datanode that can read the DB and block data directly? In that case we wouldn't need to add complexity to the RPC interface at all.






[jira] [Updated] (HDDS-4099) No Log4j 2 configuration file found error appears in CLI

2020-08-11 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4099:
---
Description: 
The following message appears in CLI for several commands.
{code:java}
ERROR StatusLogger No Log4j 2 configuration file found. Using default 
configuration (logging only errors to the console), or user programmatically 
provided configurations. Set system property 'log4j2.debug' to show Log4j 2 
internal initialization logging. See 
https://logging.apache.org/log4j/2.x/manual/configuration.html for instructions 
on how to configure Log4j 2
{code}
Sample commands from acceptance test logs:
 * {{ozone freon randomkeys --numOfVolumes 5 --numOfBuckets 5 --numOfKeys 5 
--numOfThreads 1 --replicationType RATIS --factor THREE --validateWrites 2}}
 * {{ozone sh key put 32139-target/link1/key1 /etc/passwd}}
 * {{ozone freon ockg -t=1 -n=1}}
 * {{ozone fs -ls ofs://om/fstest1/bucket1-ofs}}

  was:
The following message appears in CLI for several commands.

{code}
ERROR StatusLogger No Log4j 2 configuration file found. Using default 
configuration (logging only errors to the console), or user programmatically 
provided configurations. Set system property 'log4j2.debug' to show Log4j 2 
internal initialization logging. See 
https://logging.apache.org/log4j/2.x/manual/configuration.html for instructions 
on how to configure Log4j 2
{code}

Sample commands from acceptance test logs:

* {{ozone freon randomkeys --numOfVolumes 5 --numOfBuckets 5 --numOfKeys 5 
--numOfThreads 1 --replicationType RATIS --factor THREE --validateWrites 2}}
* {{ozone sh key put 32139-target/link1/key1 /etc/passwd}}
* {{ozone freon ockg  -t=1 -n=1}}


> No Log4j 2 configuration file found error appears in CLI
> 
>
> Key: HDDS-4099
> URL: https://issues.apache.org/jira/browse/HDDS-4099
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone CLI
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>
> The following message appears in CLI for several commands.
> {code:java}
> ERROR StatusLogger No Log4j 2 configuration file found. Using default 
> configuration (logging only errors to the console), or user programmatically 
> provided configurations. Set system property 'log4j2.debug' to show Log4j 2 
> internal initialization logging. See 
> https://logging.apache.org/log4j/2.x/manual/configuration.html for 
> instructions on how to configure Log4j 2
> {code}
> Sample commands from acceptance test logs:
>  * {{ozone freon randomkeys --numOfVolumes 5 --numOfBuckets 5 --numOfKeys 5 
> --numOfThreads 1 --replicationType RATIS --factor THREE --validateWrites 2}}
>  * {{ozone sh key put 32139-target/link1/key1 /etc/passwd}}
>  * {{ozone freon ockg -t=1 -n=1}}
>  * {{ozone fs -ls ofs://om/fstest1/bucket1-ofs}}






[GitHub] [hadoop-ozone] elek merged pull request #1212: HDDS-3979. Make bufferSize configurable for stream copy

2020-08-11 Thread GitBox


elek merged pull request #1212:
URL: https://github.com/apache/hadoop-ozone/pull/1212


   






[jira] [Resolved] (HDDS-3979) Clarify the ObjectEndpoint code of s3g

2020-08-11 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-3979.
---
Fix Version/s: 0.7.0
   Resolution: Fixed

> Clarify the ObjectEndpoint code of s3g
> --
>
> Key: HDDS-3979
> URL: https://issues.apache.org/jira/browse/HDDS-3979
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>







[jira] [Created] (HDDS-4099) No Log4j 2 configuration file found error appears in CLI

2020-08-11 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-4099:
--

 Summary: No Log4j 2 configuration file found error appears in CLI
 Key: HDDS-4099
 URL: https://issues.apache.org/jira/browse/HDDS-4099
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone CLI
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


The following message appears in CLI for several commands.

{code}
ERROR StatusLogger No Log4j 2 configuration file found. Using default 
configuration (logging only errors to the console), or user programmatically 
provided configurations. Set system property 'log4j2.debug' to show Log4j 2 
internal initialization logging. See 
https://logging.apache.org/log4j/2.x/manual/configuration.html for instructions 
on how to configure Log4j 2
{code}

Sample commands from acceptance test logs:

* {{ozone freon randomkeys --numOfVolumes 5 --numOfBuckets 5 --numOfKeys 5 
--numOfThreads 1 --replicationType RATIS --factor THREE --validateWrites 2}}
* {{ozone sh key put 32139-target/link1/key1 /etc/passwd}}
* {{ozone freon ockg  -t=1 -n=1}}






[GitHub] [hadoop-ozone] elek commented on pull request #1083: HDDS-3814. Drop a column family through debug cli tool

2020-08-11 Thread GitBox


elek commented on pull request #1083:
URL: https://github.com/apache/hadoop-ozone/pull/1083#issuecomment-671911173


   /pending Any opinion?






[GitHub] [hadoop-ozone] elek commented on pull request #1173: HDDS-3880. Improve OM HA Robot test

2020-08-11 Thread GitBox


elek commented on pull request #1173:
URL: https://github.com/apache/hadoop-ozone/pull/1173#issuecomment-671910050


   /pending CI is failing






[jira] [Created] (HDDS-4098) Improve om admin getserviceroles error message

2020-08-11 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-4098:
--

 Summary: Improve om admin getserviceroles error message
 Key: HDDS-4098
 URL: https://issues.apache.org/jira/browse/HDDS-4098
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone CLI
Reporter: Attila Doroszlai


Steps to reproduce:

# Start sample docker cluster
# Run {{ozone admin om getserviceroles}} with unknown service ID

{code:title=repro}
$ cd hadoop-ozone/dist/target/ozone-*/compose/ozone
$ docker-compose up -d
$ docker-compose exec scm bash
bash-4.2$ ozone admin om getserviceroles --service-id=om
Error: This command works only on OzoneManager HA cluster. Service ID specified 
does not match with ozone.om.service.ids defined in the configuration. 
Configured ozone.om.service.ids are[]bash-4.2$
{code}

* The message should include a space before {{[]}}, and a newline at the end 
(prompt should appear in next line).
* Wording of the message could also be improved.






[jira] [Updated] (HDDS-3988) DN can distinguish SCMCommand from stale leader SCM

2020-08-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3988:
-
Labels: pull-request-available  (was: )

> DN can distinguish SCMCommand from stale leader SCM
> ---
>
> Key: HDDS-3988
> URL: https://issues.apache.org/jira/browse/HDDS-3988
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Glen Geng
>Assignee: Glen Geng
>Priority: Major
>  Labels: pull-request-available
>
> As part of SCMCommand SCM will also send its current term, which will be used 
> in Datanode to identify if the command was sent by the latest leader SCM.
>  
> Datanode will maintain the highest term that it has seen and compare it with 
> the term that is received as part of SCMCommand.
>  * If the term in the Datanode and SCMCommand are same, the command is added 
> to the command queue for processing.
>  * If the term in the Datanode is less than the term received in SCMCommand, 
> Datanode will update its term and add the command to the command queue for 
> processing.
>  * If the term in the Datanode is greater than the term received in 
> SCMCommand, Datanode will ignore the command.






[GitHub] [hadoop-ozone] GlenGeng opened a new pull request #1314: HDDS-3988: DN can distinguish SCMCommand from stale leader SCM

2020-08-11 Thread GitBox


GlenGeng opened a new pull request #1314:
URL: https://github.com/apache/hadoop-ozone/pull/1314


   ## What changes were proposed in this pull request?
   
   As part of SCMCommand SCM will also send its current term, which will be 
used in Datanode to identify if the command was sent by the latest leader SCM.
    
   Datanode will maintain the highest term that it has seen and compare it with 
the term that is received as part of SCMCommand.
   If the term in the Datanode and SCMCommand are same, the command is added to 
the command queue for processing.
   If the term in the Datanode is less than the term received in SCMCommand, 
Datanode will update its term and add the command to the command queue for 
processing.
   If the term in the Datanode is greater than the term received in SCMCommand, 
Datanode will ignore the command.
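   
   The three cases above can be sketched as follows (a minimal illustration with hypothetical class and field names, not the actual datanode implementation):
   
```java
import java.util.ArrayDeque;
import java.util.Queue;

/**
 * Minimal sketch of the stale-leader filtering described above.
 * TermFilterSketch is a hypothetical class, not the real datanode code.
 */
public class TermFilterSketch {

  // Highest term this datanode has seen so far.
  private long highestSeenTerm = 0;
  private final Queue<String> commandQueue = new ArrayDeque<>();

  /** Returns true if the command was accepted for processing. */
  public boolean onCommand(String command, long term) {
    if (term < highestSeenTerm) {
      // Command from a stale leader: ignore it.
      return false;
    }
    // Equal term: accept. Higher term: update our view, then accept.
    highestSeenTerm = term;
    commandQueue.add(command);
    return true;
  }

  public static void main(String[] args) {
    TermFilterSketch dn = new TermFilterSketch();
    System.out.println(dn.onCommand("replicate", 2)); // newer term: accepted
    System.out.println(dn.onCommand("close", 2));     // same term: accepted
    System.out.println(dn.onCommand("delete", 1));    // stale leader: ignored
  }
}
```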
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3988
   
   ## How was this patch tested?
   
   CI



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Updated] (HDDS-3988) DN can distinguish SCMCommand from stale leader

2020-08-11 Thread Glen Geng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glen Geng updated HDDS-3988:

Summary: DN can distinguish SCMCommand from stale leader  (was: running DN 
can distinguish SCMCommand from stale leader)

> DN can distinguish SCMCommand from stale leader
> ---
>
> Key: HDDS-3988
> URL: https://issues.apache.org/jira/browse/HDDS-3988
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Glen Geng
>Assignee: Glen Geng
>Priority: Major
>
> As part of SCMCommand SCM will also send its current term, which will be used 
> in Datanode to identify if the command was sent by the latest leader SCM.
>  
> Datanode will maintain the highest term that it has seen and compare it with 
> the term that is received as part of SCMCommand.
>  * If the term in the Datanode and SCMCommand are same, the command is added 
> to the command queue for processing.
>  * If the term in the Datanode is less than the term received in SCMCommand, 
> Datanode will update its term and add the command to the command queue for 
> processing.
>  * If the term in the Datanode is greater than the term received in 
> SCMCommand, Datanode will ignore the command.






[jira] [Updated] (HDDS-3988) DN can distinguish SCMCommand from stale leader SCM

2020-08-11 Thread Glen Geng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Glen Geng updated HDDS-3988:

Summary: DN can distinguish SCMCommand from stale leader SCM  (was: DN can 
distinguish SCMCommand from stale leader)

> DN can distinguish SCMCommand from stale leader SCM
> ---
>
> Key: HDDS-3988
> URL: https://issues.apache.org/jira/browse/HDDS-3988
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Glen Geng
>Assignee: Glen Geng
>Priority: Major
>
> As part of SCMCommand SCM will also send its current term, which will be used 
> in Datanode to identify if the command was sent by the latest leader SCM.
>  
> Datanode will maintain the highest term that it has seen and compare it with 
> the term that is received as part of SCMCommand.
>  * If the term in the Datanode and SCMCommand are same, the command is added 
> to the command queue for processing.
>  * If the term in the Datanode is less than the term received in SCMCommand, 
> Datanode will update its term and add the command to the command queue for 
> processing.
>  * If the term in the Datanode is greater than the term received in 
> SCMCommand, Datanode will ignore the command.






[jira] [Updated] (HDDS-4078) Use HDDS InterfaceAudience/Stability annotations

2020-08-11 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4078:
---
Labels:   (was: pull-request-available)

> Use HDDS InterfaceAudience/Stability annotations
> 
>
> Key: HDDS-4078
> URL: https://issues.apache.org/jira/browse/HDDS-4078
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
> Fix For: 0.7.0
>
>
> HDDS-3028 added Ozone-private versions of {{InterfaceAudience}} and 
> {{InterfaceStability}} annotations.  Some recent changes re-introduced usage 
> of their Hadoop Common versions.
> {code}
> hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/CleanupTableInfo.java
> 19:import org.apache.hadoop.classification.InterfaceStability;
> hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java
> 28:import org.apache.hadoop.classification.InterfaceAudience;
> hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java
> 21:import org.apache.hadoop.classification.InterfaceAudience;
> 22:import org.apache.hadoop.classification.InterfaceStability;
> hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
> 33:import org.apache.hadoop.classification.InterfaceAudience;
> {code}






[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1302: HDDS-4078. Use HDDS InterfaceAudience/Stability annotations

2020-08-11 Thread GitBox


adoroszlai commented on pull request #1302:
URL: https://github.com/apache/hadoop-ozone/pull/1302#issuecomment-671865843


   Thanks @bshashikant for reviewing and committing it.






[jira] [Commented] (HDDS-4065) Regularly close and open new pipelines

2020-08-11 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175439#comment-17175439
 ] 

Stephen O'Donnell commented on HDDS-4065:
-

Attached a draft doc outlining some ideas and the pros/cons of this approach.

> Regularly close and open new pipelines
> --
>
> Key: HDDS-4065
> URL: https://issues.apache.org/jira/browse/HDDS-4065
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: Regularly_Closing_Pipelines.001.pdf
>
>
> There are scenarios where non-rack-aware pipelines can be created on a 
> cluster and when that happens they should be closed and replaced with new 
> pipelines.
> There is also a desire to regularly close pipelines and open new ones, to 
> provide better shuffling of data across the nodes.
> This Jira will discuss ways to solve both of these problems.






[jira] [Updated] (HDDS-4065) Regularly close and open new pipelines

2020-08-11 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDDS-4065:

Attachment: Regularly_Closing_Pipelines.001.pdf

> Regularly close and open new pipelines
> --
>
> Key: HDDS-4065
> URL: https://issues.apache.org/jira/browse/HDDS-4065
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: SCM
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: Regularly_Closing_Pipelines.001.pdf
>
>
> There are scenarios where non-rack-aware pipelines can be created on a 
> cluster and when that happens they should be closed and replaced with new 
> pipelines.
> There is also a desire to regularly close pipelines and open new ones, to 
> provide better shuffling of data across the nodes.
> This Jira will discuss ways to solve both of these problems.






[GitHub] [hadoop-ozone] bshashikant merged pull request #1278: HDDS-4048. Show more information while SCM version info mismatch

2020-08-11 Thread GitBox


bshashikant merged pull request #1278:
URL: https://github.com/apache/hadoop-ozone/pull/1278


   






[jira] [Resolved] (HDDS-4048) Show more information while SCM version info mismatch

2020-08-11 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDDS-4048.
---
Fix Version/s: 0.7.0
   Resolution: Fixed

> Show more information while SCM version info mismatch
> -
>
> Key: HDDS-4048
> URL: https://issues.apache.org/jira/browse/HDDS-4048
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>







[GitHub] [hadoop-ozone] bshashikant commented on pull request #1278: HDDS-4048. Show more information while SCM version info mismatch

2020-08-11 Thread GitBox


bshashikant commented on pull request #1278:
URL: https://github.com/apache/hadoop-ozone/pull/1278#issuecomment-671858567


   Thanks @maobaolong for the contribution.






[jira] [Updated] (HDDS-4078) Use HDDS InterfaceAudience/Stability annotations

2020-08-11 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-4078:
--
Fix Version/s: 0.7.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Use HDDS InterfaceAudience/Stability annotations
> 
>
> Key: HDDS-4078
> URL: https://issues.apache.org/jira/browse/HDDS-4078
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>
> HDDS-3028 added Ozone-private versions of {{InterfaceAudience}} and 
> {{InterfaceStability}} annotations.  Some recent changes re-introduced usage 
> of their Hadoop Common versions.
> {code}
> hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/response/CleanupTableInfo.java
> 19:import org.apache.hadoop.classification.InterfaceStability;
> hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneClientAdapterImpl.java
> 28:import org.apache.hadoop.classification.InterfaceAudience;
> hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneFileSystem.java
> 21:import org.apache.hadoop.classification.InterfaceAudience;
> 22:import org.apache.hadoop.classification.InterfaceStability;
> hadoop-ozone/ozonefs-common/src/main/java/org/apache/hadoop/fs/ozone/BasicRootedOzoneClientAdapterImpl.java
> 33:import org.apache.hadoop.classification.InterfaceAudience;
> {code}






[GitHub] [hadoop-ozone] bshashikant commented on pull request #1302: HDDS-4078. Use HDDS InterfaceAudience/Stability annotations

2020-08-11 Thread GitBox


bshashikant commented on pull request #1302:
URL: https://github.com/apache/hadoop-ozone/pull/1302#issuecomment-671856750


   Thanks @adoroszlai for the contribution.






[GitHub] [hadoop-ozone] bshashikant merged pull request #1302: HDDS-4078. Use HDDS InterfaceAudience/Stability annotations

2020-08-11 Thread GitBox


bshashikant merged pull request #1302:
URL: https://github.com/apache/hadoop-ozone/pull/1302


   






[GitHub] [hadoop-ozone] bshashikant commented on pull request #1266: HDDS-4034. Add Unit Test for HadoopNestedDirGenerator.

2020-08-11 Thread GitBox


bshashikant commented on pull request #1266:
URL: https://github.com/apache/hadoop-ozone/pull/1266#issuecomment-671850092


   Thanks @aryangupta1998 for the contribution.






[jira] [Resolved] (HDDS-4034) Add Unit Test for HadoopNestedDirGenerator

2020-08-11 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee resolved HDDS-4034.
---
Fix Version/s: 0.7.0
   Resolution: Fixed

> Add Unit Test for HadoopNestedDirGenerator
> --
>
> Key: HDDS-4034
> URL: https://issues.apache.org/jira/browse/HDDS-4034
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Aryan Gupta
>Assignee: Aryan Gupta
>Priority: Major
>  Labels: https://github.com/apache/hadoop-ozone/pull/1266, 
> pull-request-available
> Fix For: 0.7.0
>
>
> Unit test: it checks the span and depth of nested directories created by the 
> HadoopNestedDirGenerator tool.






[GitHub] [hadoop-ozone] bshashikant merged pull request #1266: HDDS-4034. Add Unit Test for HadoopNestedDirGenerator.

2020-08-11 Thread GitBox


bshashikant merged pull request #1266:
URL: https://github.com/apache/hadoop-ozone/pull/1266


   






[jira] [Commented] (HDDS-2981) Add unit tests for Proto [de]serialization

2020-08-11 Thread Peter Orova (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17175410#comment-17175410
 ] 

Peter Orova commented on HDDS-2981:
---

[~hanishakoneru] To clarify, this jira is about creating unit tests for 
{{org.apache.hadoop.ozone.om.helpers.OmPrefixInfo}}, specifically the 
{{getProtobuf()}} and {{getFromProtobuf()}} methods, is that correct?

> Add unit tests for Proto [de]serialization
> --
>
> Key: HDDS-2981
> URL: https://issues.apache.org/jira/browse/HDDS-2981
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>  Components: test
>Reporter: Hanisha Koneru
>Assignee: Peter Orova
>Priority: Major
>  Labels: TriagePending, newbie
>
> Every proto must have tests for checking serialization and deserialization. 
> Some of the protos are missing these tests. For example - 
> OzoneManagerProto#PrefixInfo.
> There might be more protos which are missing these tests.
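A serialization test of this kind usually asserts that deserializing the 
serialized form reproduces an equal object. A minimal, self-contained sketch 
of the round-trip pattern follows; the `PrefixInfoStandIn` class and its byte 
encoding are stand-ins for illustration, not the real `OmPrefixInfo`/protobuf 
API:

```java
import java.nio.charset.StandardCharsets;

// Stand-in for a class with proto-style serialization helpers; the real
// OmPrefixInfo uses getProtobuf()/getFromProtobuf() over protobuf messages.
class PrefixInfoStandIn {
    final String name;
    final int aclCount;

    PrefixInfoStandIn(String name, int aclCount) {
        this.name = name;
        this.aclCount = aclCount;
    }

    // Serialize to bytes (stand-in for getProtobuf()).
    byte[] toBytes() {
        return (name + "|" + aclCount).getBytes(StandardCharsets.UTF_8);
    }

    // Deserialize from bytes (stand-in for getFromProtobuf()).
    static PrefixInfoStandIn fromBytes(byte[] bytes) {
        String[] parts = new String(bytes, StandardCharsets.UTF_8).split("\\|");
        return new PrefixInfoStandIn(parts[0], Integer.parseInt(parts[1]));
    }
}
```

The test then builds an object, serializes it, deserializes the result, and 
asserts field-by-field equality with the original.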






[jira] [Updated] (HDDS-4092) Writing delta to Ozone hangs when creating the _delta_log json

2020-08-11 Thread Dustin Smith (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dustin Smith updated HDDS-4092:
---
Description: 
I am testing writing delta (OSS, not Databricks) data to Ozone FS, since my 
company is looking to replace Hadoop if feasible. However, whenever I write a 
delta table, the parquet files are written and the delta log directory is 
created, but the JSON is never written. 

I am using the spark operator to submit a batch test job that writes about 
5 MB of data.

Neither the driver nor the executor reports an error. The driver never 
finishes, since the creation of the JSON hangs.

 

Code I used for testing with the spark operator; I also ran the pieces in the 
shell for testing. In the save path, update the bucket and volume info for 
your data store.
{code:java}
package app.OzoneTest

import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.{BinaryType, StringType}

object CreateData {

  def main(args: Array[String]): Unit = {

val spark: SparkSession = SparkSession
  .builder()
  .appName(s"Create Ozone Mock Data")
  .enableHiveSupport()
  .getOrCreate()

import spark.implicits._

val df: DataFrame = Seq.fill(10)
{(randomID, randomLat, randomLong, randomDates, randomHour)}
  .toDF("msisdn", "latitude", "longitude", "par_day", "par_hour")
  .withColumn("msisdn", $"msisdn".cast(StringType))
  .withColumn("msisdn", sha1($"msisdn".cast(BinaryType)))
  .select("msisdn", "latitude", "longitude", "par_day", "par_hour")

df
  .repartition(3, $"msisdn")
  .sortWithinPartitions("latitude", "longitude")
  .write
  .partitionBy("par_day", "par_hour")
  .format("delta")
  .save("o3fs://your_bucket.your_volume/location_data")

  }

  def randomID: Int = scala.util.Random.nextInt(10) + 1

  def randomDates: Int = 20200101 + scala.util.Random.nextInt((20200131 - 
20200101) + 1)

  def randomHour: Int = scala.util.Random.nextInt(24)

  def randomLat: Double = 13.5 + scala.util.Random.nextFloat()

  def randomLong: Double = 100 + scala.util.Random.nextFloat()
}
{code}

  was:
I am testing writing delta, OSS not databricks, data to Ozone FS since my 
company is looking to replace Hadoop if feasible. However, whenever I write 
delta table, the parquet files are writing, the delta log directory is created, 
but the json is never writing. 

I am using the spark operator to submit a batch test job to write about 5mb of 
data.

Neither on the driver nor on the executor is there an error. The driver never 
finishes since the creation of the json hangs.


> Writing delta to Ozone hangs when creating the _delta_log json
> --
>
> Key: HDDS-4092
> URL: https://issues.apache.org/jira/browse/HDDS-4092
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.5.0
> Environment: We are using Kubernetes k8s, Ozone 0.5.0beta, Spark 
> 3.0.0, Hadoop 3.2, Scala 2.12.10, and io.delta:delta-core_2.12:0.7.0
>Reporter: Dustin Smith
>Priority: Major
>  Labels: delta, filesystem, scala, spark
>
> I am testing writing delta (OSS, not Databricks) data to Ozone FS, since my 
> company is looking to replace Hadoop if feasible. However, whenever I write a 
> delta table, the parquet files are written and the delta log directory is 
> created, but the JSON is never written. 
> I am using the spark operator to submit a batch test job that writes about 
> 5 MB of data.
> Neither the driver nor the executor reports an error. The driver never 
> finishes, since the creation of the JSON hangs.
>  
> Code I used for testing spark operator and then I ran the pieces in the shell 
> for testing. In the save path, update bucket and volume info for your data 
> store.
> {code:java}
> package app.OzoneTest
> import org.apache.spark.sql.{DataFrame, SparkSession}
> import org.apache.spark.sql.functions._
> import org.apache.spark.sql.types.{BinaryType, StringType}
> object CreateData {
>   def main(args: Array[String]): Unit = {
> val spark: SparkSession = SparkSession
>   .builder()
>   .appName(s"Create Ozone Mock Data")
>   .enableHiveSupport()
>   .getOrCreate()
> import spark.implicits._
> val df: DataFrame = Seq.fill(10)
> {(randomID, randomLat, randomLong, randomDates, randomHour)}
>   .toDF("msisdn", "latitude", "longitude", "par_day", "par_hour")
>   .withColumn("msisdn", $"msisdn".cast(StringType))
>   .withColumn("msisdn", sha1($"msisdn".cast(BinaryType)))
>   .select("msisdn", "latitude", "longitude", "par_day", "par_hour")
> df
>   .repartition(3, $"msisdn")
>   .sortWithinPartitions("latitude", "longitude")
>   .write
>   

[GitHub] [hadoop-ozone] cxorm commented on a change in pull request #1233: HDDS-3725. Ozone sh volume client support quota option.

2020-08-11 Thread GitBox


cxorm commented on a change in pull request #1233:
URL: https://github.com/apache/hadoop-ozone/pull/1233#discussion_r468426519



##
File path: 
hadoop-hdds/framework/src/test/java/org/apache/hadoop/hdds/server/TestJsonUtils.java
##
@@ -32,11 +32,11 @@
 
   @Test
   public void printObjectAsJson() throws IOException {
-OzoneQuota quota = new OzoneQuota(123, OzoneQuota.Units.MB);
+OzoneQuota quota = OzoneQuota.parseQuota("123MB", 1000L);
 
 String result = JsonUtils.toJsonStringWithDefaultPrettyPrinter(quota);
 

Review comment:
   I think we should verify `quotaInCounts` here, too.

##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
##
@@ -271,10 +271,10 @@ public void testSetVolumeQuota()
   throws IOException {
 String volumeName = UUID.randomUUID().toString();
 store.createVolume(volumeName);
-store.getVolume(volumeName).setQuota(
-OzoneQuota.parseQuota("1 BYTES"));
+store.getVolume(volumeName).setQuota(OzoneQuota.parseQuota("1GB", 0L));
 OzoneVolume volume = store.getVolume(volumeName);
-Assert.assertEquals(1L, volume.getQuota());
+Assert.assertEquals(1024 * 1024 * 1024,
+volume.getQuotaInBytes());

Review comment:
   The same as above.
   We should verify two quota attributes if the `OzoneQuota` has two attributes.
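   The point of the comment — assert every attribute the type carries, not 
just one — can be sketched with a self-contained stand-in. The parse logic and 
names below are assumed for illustration, not the real `OzoneQuota` API:

```java
// Minimal stand-in for a quota type with two attributes; the parse logic and
// method names are assumed for illustration, not the real OzoneQuota API.
class Quota {
    private final long quotaInBytes;
    private final long quotaInCounts;

    Quota(long quotaInBytes, long quotaInCounts) {
        this.quotaInBytes = quotaInBytes;
        this.quotaInCounts = quotaInCounts;
    }

    // Parse a size string like "1GB" or "123MB" plus a bucket-count quota.
    static Quota parse(String size, long counts) {
        long multiplier = 1L;
        if (size.endsWith("GB")) {
            multiplier = 1024L * 1024 * 1024;
        } else if (size.endsWith("MB")) {
            multiplier = 1024L * 1024;
        }
        long value = Long.parseLong(size.replaceAll("[A-Za-z]", ""));
        return new Quota(value * multiplier, counts);
    }

    long getQuotaInBytes()  { return quotaInBytes; }

    long getQuotaInCounts() { return quotaInCounts; }
}
```

   A test that only checks `getQuotaInBytes()` would silently pass even if the 
count quota were dropped during parsing, which is exactly the gap the review 
comment points at.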

##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/fs/ozone/TestRootedOzoneFileSystem.java
##
@@ -734,7 +734,8 @@ public void testTempMount() throws Exception {
 // Sanity check
 Assert.assertNull(volumeArgs.getOwner());
 Assert.assertNull(volumeArgs.getAdmin());
-Assert.assertNull(volumeArgs.getQuota());
+Assert.assertNull(volumeArgs.getQuotaInBytes());
+Assert.assertEquals(0, volumeArgs.getQuotaInCounts());

Review comment:
   I think we should verify two quota attributes if the `OzoneQuota` has 
two attributes.

##
File path: hadoop-ozone/dist/src/main/smoketest/topology/loaddata.robot
##
@@ -25,7 +25,7 @@ Test Timeout5 minutes
 
 *** Test Cases ***
 Create a volume, bucket and key
-${output} = Execute  ozone sh volume create topvol1 
--quota 100TB
+${output} = Execute  ozone sh volume create topvol1

Review comment:
   Why could we not set `spaceQuota` here?
   
   








[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #1295: HDDS-4037. Incorrect container numberOfKeys and usedBytes in SCM after key deletion

2020-08-11 Thread GitBox


adoroszlai commented on a change in pull request #1295:
URL: https://github.com/apache/hadoop-ozone/pull/1295#discussion_r468409907



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -766,6 +773,32 @@ private void handleUnstableContainer(final ContainerInfo 
container,
 
   }
 
+  /**
+   * Check and update the container key count and used bytes based on its 
+   * replicas' data.
+   */
+  private void checkAndUpdateContainerState(final ContainerInfo container,

Review comment:
   I think something along the lines of `updateContainerStatsFromReplicas` 
would be more descriptive, especially since container state (closed, etc.) is 
not being changed.








[jira] [Resolved] (HDDS-4076) Translate CSI.md into Chinese

2020-08-11 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-4076.

Fix Version/s: 0.7.0
   Resolution: Done

> Translate CSI.md into Chinese
> -
>
> Key: HDDS-4076
> URL: https://issues.apache.org/jira/browse/HDDS-4076
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>







[jira] [Updated] (HDDS-4076) Translate CSI.md into Chinese

2020-08-11 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4076:
---
Labels:   (was: pull-request-available)

> Translate CSI.md into Chinese
> -
>
> Key: HDDS-4076
> URL: https://issues.apache.org/jira/browse/HDDS-4076
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
> Fix For: 0.7.0
>
>







[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1299: HDDS-4076. Translate CSI.md into Chinese

2020-08-11 Thread GitBox


adoroszlai commented on pull request #1299:
URL: https://github.com/apache/hadoop-ozone/pull/1299#issuecomment-671802573


   Thanks @maobaolong for the translation, and @cxorm and @runitao for the 
review.






[GitHub] [hadoop-ozone] adoroszlai merged pull request #1299: HDDS-4076. Translate CSI.md into Chinese

2020-08-11 Thread GitBox


adoroszlai merged pull request #1299:
URL: https://github.com/apache/hadoop-ozone/pull/1299


   






[GitHub] [hadoop-ozone] maobaolong commented on pull request #1299: HDDS-4076. Translate CSI.md into Chinese

2020-08-11 Thread GitBox


maobaolong commented on pull request #1299:
URL: https://github.com/apache/hadoop-ozone/pull/1299#issuecomment-671798045


   @cxorm Something like this
   - 
https://docs.alluxio.io/os/user/stable/en/contributor/Documentation-Conventions.html
   - https://github.com/Alluxio/alluxio/wiki/Translation-Terms






[GitHub] [hadoop-ozone] elek commented on pull request #1149: HDDS-3878. Make OMHA serviceID optional if one (but only one) is defined in the config

2020-08-11 Thread GitBox


elek commented on pull request #1149:
URL: https://github.com/apache/hadoop-ozone/pull/1149#issuecomment-671775358


   Summary: got a +1 from @bharatviswa504, and @arp7 also confirmed offline 
that he is fine with it (if o3fs/ofs are not changed).
   
   Hanisha's suggestion was also applied (and I pinged her offline).
   
   I will commit this soon.
   
   Thanks everybody for the (very) long conversation and your patience 
(@xiaoyuyao, @adoroszlai ...)
   
   It seems to have been a long and hard issue, but I am happy with it. As 
remote work becomes the norm, I think we should move more and more formal and 
informal conversations to the pull request threads (or mailing list threads). 
In this case it took time, but I am glad that we found a consensus.






[jira] [Updated] (HDDS-4046) Extensible subcommands for CLI applications

2020-08-11 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4046:
---
Labels:   (was: pull-request-available)

> Extensible subcommands for CLI applications
> ---
>
> Key: HDDS-4046
> URL: https://issues.apache.org/jira/browse/HDDS-4046
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
> Fix For: 0.7.0
>
>
> HDDS-3814 proposed a new subcommand which deletes the column families from 
> the RocksDB tables, but during the discussion there was no consensus on 
> whether it is safe to add or not.
> This patch makes the sub-commands easy to extend: anybody can extend the main 
> Ozone shell commands with new sub-commands. 
> Sub-commands can be added to the classpath and activated via the Service 
> Provider Interface (META-INF/services/...).
> This is an optional feature; the current approach (annotation-based 
> sub-command definition) continues to work. 
> And it's not only about HDDS-3814.
> It makes it easier to organize the sub-commands. (For example, RDBStore 
> related commands can be moved to the rdb classes, and they will be picked up.)
> It makes it possible to provide additional, ad-hoc tools during debug or 
> support which can help to solve/debug specific problems. 
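The discovery mechanism the description refers to is the standard Java Service Provider Interface, driven by `java.util.ServiceLoader`. A minimal sketch of how such classpath-based activation works is below; `SubcommandProvider` and its method are illustrative names only, not Ozone's actual extension interface:

```java
import java.util.ServiceLoader;

// Hypothetical SPI interface; Ozone's real extension point has a different
// name and shape -- this only illustrates the META-INF/services mechanism.
interface SubcommandProvider {
    String name();
}

public class SpiDemo {
    public static void main(String[] args) {
        // ServiceLoader scans every META-INF/services/<interface-FQCN> file
        // on the classpath; each non-comment line of such a file names an
        // implementation class, instantiated lazily during iteration.
        ServiceLoader<SubcommandProvider> loader =
                ServiceLoader.load(SubcommandProvider.class);
        int count = 0;
        for (SubcommandProvider provider : loader) {
            System.out.println("registered subcommand: " + provider.name());
            count++;
        }
        // With no provider file on the classpath, nothing is discovered --
        // which is what keeps the feature optional.
        System.out.println("providers found: " + count);
    }
}
```

A plugin jar would opt in simply by shipping a `META-INF/services/SubcommandProvider` resource listing its implementation class; no change to the main CLI is needed.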



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Resolved] (HDDS-4046) Extensible subcommands for CLI applications

2020-08-11 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-4046.

Fix Version/s: 0.7.0
   Resolution: Implemented

> Extensible subcommands for CLI applications
> ---
>
> Key: HDDS-4046
> URL: https://issues.apache.org/jira/browse/HDDS-4046
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.7.0
>
>






[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1276: HDDS-4046. Extensible subcommands for CLI applications

2020-08-11 Thread GitBox


adoroszlai commented on pull request #1276:
URL: https://github.com/apache/hadoop-ozone/pull/1276#issuecomment-671747480


   Thanks @elek for updating the patch. Tested that `ozone admin` lists `om` 
as a subcommand and that `ozone admin om` works.






[GitHub] [hadoop-ozone] adoroszlai merged pull request #1276: HDDS-4046. Extensible subcommands for CLI applications

2020-08-11 Thread GitBox


adoroszlai merged pull request #1276:
URL: https://github.com/apache/hadoop-ozone/pull/1276


   


