[jira] [Assigned] (HDDS-3412) Ozone shell should have commands to delete non-empty volumes / buckets recursively

2020-04-16 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh reassigned HDDS-3412:
---

Assignee: Sadanand Shenoy

> Ozone shell should have commands to delete non-empty volumes / buckets 
> recursively 
> ---
>
> Key: HDDS-3412
> URL: https://issues.apache.org/jira/browse/HDDS-3412
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone CLI
>Affects Versions: 0.4.0
>Reporter: Gaurav Sharma
>Assignee: Sadanand Shenoy
>Priority: Major
>
> Currently we cannot delete non-empty volumes or buckets, or even multiple 
> keys, recursively from the Ozone shell. A small utility or additional 
> commands in the Ozone shell / CLI would be a great help.
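The logic such a command would need is straightforward to sketch. Below is a minimal illustration over a toy in-memory store; the function and store layout are hypothetical, not the Ozone client API: delete every key in each bucket, then the now-empty bucket, then the now-empty volume.

```python
# Sketch of recursive volume deletion over a toy in-memory store
# (volume -> bucket -> key -> bytes). All names are hypothetical
# illustrations, not the Ozone shell or client API.

def delete_volume_recursively(store, volume):
    """Delete every key and bucket under `volume`, then the volume itself."""
    for bucket in list(store[volume]):           # buckets in the volume
        for key in list(store[volume][bucket]):  # keys in the bucket
            del store[volume][bucket][key]
        del store[volume][bucket]                # bucket is now empty
    del store[volume]                            # volume is now empty

store = {"vol1": {"bucket1": {"k1": b"a", "k2": b"b"},
                  "bucket2": {"k3": b"c"}}}
delete_volume_recursively(store, "vol1")
print(store)  # -> {}
```

The real command would replace the dict operations with list/delete calls against OzoneManager, but the bottom-up deletion order is the essential part.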



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-3412) Ozone shell should have commands to delete non-empty volumes / buckets recursively

2020-04-16 Thread Gaurav (Jira)
Gaurav created HDDS-3412:


 Summary: Ozone shell should have commands to delete non-empty 
volumes / buckets recursively 
 Key: HDDS-3412
 URL: https://issues.apache.org/jira/browse/HDDS-3412
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone CLI
Affects Versions: 0.4.0
Reporter: Gaurav


Currently we cannot delete non-empty volumes or buckets, or even multiple keys, 
recursively from the Ozone shell. A small utility or additional commands in the 
Ozone shell / CLI would be a great help.






[jira] [Resolved] (HDDS-3322) StandAlone Pipelines are created in an infinite loop

2020-04-16 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-3322.
--
Fix Version/s: 0.6.0
   Resolution: Fixed

> StandAlone Pipelines are created in an infinite loop
> 
>
> Key: HDDS-3322
> URL: https://issues.apache.org/jira/browse/HDDS-3322
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> _BackgroundPipelineCreator_ keeps creating pipelines of the configured 
> replication type and all available replication factors until some exception 
> occurs during pipeline creation, such as no more available nodes.
> When the replication type is set to STAND_ALONE, we do not check whether a DN 
> has already been used to create a pipeline of the same factor, and keep 
> reusing the same DNs to create new pipelines. This causes pipeline creation 
> to happen in an infinite loop.
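The shape of the fix implied by the description can be sketched as a guard before pipeline creation. This is a toy model with hypothetical names, not the actual BackgroundPipelineCreator code: skip any datanode that already serves a pipeline of the requested factor, so creation terminates instead of looping.

```python
# Toy model of the missing check: don't reuse a datanode for a new
# STAND_ALONE pipeline of a factor it already serves. All names are
# hypothetical, not the SCM pipeline-manager API.

def create_pipelines(datanodes, factor, existing):
    """Create (dn, factor) pipelines, skipping DNs already used for factor."""
    created = []
    for dn in datanodes:
        if (dn, factor) in existing:  # the check the bug report says is absent
            continue                  # DN already serves this factor
        pipeline = (dn, factor)
        existing.add(pipeline)
        created.append(pipeline)
    return created

existing = set()
first = create_pipelines(["dn1", "dn2"], 1, existing)   # creates two pipelines
second = create_pipelines(["dn1", "dn2"], 1, existing)  # no eligible DNs -> []
print(first, second)
```

Without the membership check, the second call would happily create duplicate pipelines forever, which is the infinite loop described above.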






[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #749: HDDS-3322. StandAlone Pipelines are created in an infinite loop

2020-04-16 Thread GitBox
bharatviswa504 commented on issue #749: HDDS-3322. StandAlone Pipelines are 
created in an infinite loop
URL: https://github.com/apache/hadoop-ozone/pull/749#issuecomment-615032101
 
 
   Thank you @hanishakoneru for the contribution and @vivekratnavel for the 
review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #749: HDDS-3322. StandAlone Pipelines are created in an infinite loop

2020-04-16 Thread GitBox
bharatviswa504 merged pull request #749: HDDS-3322. StandAlone Pipelines are 
created in an infinite loop
URL: https://github.com/apache/hadoop-ozone/pull/749
 
 
   





[GitHub] [hadoop-ozone] swagle commented on a change in pull request #839: HDDS-3411. Switch Recon SQL DB to Derby.

2020-04-16 Thread GitBox
swagle commented on a change in pull request #839: HDDS-3411. Switch Recon SQL 
DB to Derby.
URL: https://github.com/apache/hadoop-ozone/pull/839#discussion_r409983094
 
 

 ##
 File path: 
hadoop-ozone/recon/src/main/java/org/apache/hadoop/ozone/recon/persistence/DefaultDataSourceProvider.java
 ##
 @@ -43,14 +52,26 @@
*/
   @Override
   public DataSource get() {
 
 Review comment:
   This is actually an antipattern :-), although I am guilty of this as well. 
Instead of the if, we should have a DerbyDataSourceProvider, a 
SqliteDataSourceProvider, and a default one. This can be left as a TODO for 
later as well; up to you.
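The refactoring the reviewer suggests replaces conditional branching inside a single `get()` with one provider class per backend. A minimal sketch, in illustrative Python rather than the actual Recon/Guice code (the class names and JDBC URL formats here are assumptions):

```python
# Sketch of the provider-per-backend pattern the reviewer suggests,
# replacing an if/elif chain inside a single DataSourceProvider.get().
# Class names and URL formats are illustrative, not the Recon code.

class DerbyDataSourceProvider:
    def get(self):
        return "jdbc:derby:recon"

class SqliteDataSourceProvider:
    def get(self):
        return "jdbc:sqlite:recon"

# Dispatch table: adding a backend means adding a class + one entry,
# not editing a growing conditional.
PROVIDERS = {
    "derby": DerbyDataSourceProvider,
    "sqlite": SqliteDataSourceProvider,
}

def data_source_for(jdbc_driver):
    return PROVIDERS[jdbc_driver]().get()

print(data_source_for("derby"))
```

In the real code this dispatch would typically be done by the dependency-injection framework binding the right provider, rather than by a lookup table.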





[GitHub] [hadoop-ozone] swagle commented on a change in pull request #839: HDDS-3411. Switch Recon SQL DB to Derby.

2020-04-16 Thread GitBox
swagle commented on a change in pull request #839: HDDS-3411. Switch Recon SQL 
DB to Derby.
URL: https://github.com/apache/hadoop-ozone/pull/839#discussion_r409982061
 
 

 ##
 File path: 
hadoop-ozone/recon-codegen/src/main/java/org/hadoop/ozone/recon/codegen/JooqCodeGenerator.java
 ##
 @@ -55,10 +60,11 @@
   private static final Logger LOG =
   LoggerFactory.getLogger(JooqCodeGenerator.class);
 
-  private static final String SQLITE_DB =
-  System.getProperty("java.io.tmpdir") + "/recon-generated-schema";
-  private static final String JDBC_URL = "jdbc:sqlite:" + SQLITE_DB;
-
+  private static final String DB = Paths.get(
+  System.getProperty("java.io.tmpdir"),
+  "recon-generated-schema-" + Time.monotonicNow()).toString();
+  public static final String RECON_SCHEMA_NAME = "RECON";
 
 Review comment:
   Generally, schemas are lowercase.





[GitHub] [hadoop-ozone] swagle commented on a change in pull request #839: HDDS-3411. Switch Recon SQL DB to Derby.

2020-04-16 Thread GitBox
swagle commented on a change in pull request #839: HDDS-3411. Switch Recon SQL 
DB to Derby.
URL: https://github.com/apache/hadoop-ozone/pull/839#discussion_r409982227
 
 

 ##
 File path: 
hadoop-ozone/recon-codegen/src/main/java/org/hadoop/ozone/recon/codegen/JooqCodeGenerator.java
 ##
 @@ -55,10 +60,11 @@
   private static final Logger LOG =
   LoggerFactory.getLogger(JooqCodeGenerator.class);
 
-  private static final String SQLITE_DB =
-  System.getProperty("java.io.tmpdir") + "/recon-generated-schema";
-  private static final String JDBC_URL = "jdbc:sqlite:" + SQLITE_DB;
-
+  private static final String DB = Paths.get(
+  System.getProperty("java.io.tmpdir"),
+  "recon-generated-schema-" + Time.monotonicNow()).toString();
 
 Review comment:
   Any reason to use a timestamp-based path?
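For context on the question: the timestamp suffix in the diff presumably avoids collisions between concurrent or leftover codegen runs. The standard-library way to get the same guarantee (shown here in Python for illustration; the Java analogue would be `Files.createTempDirectory`) is a uniquely named temp directory:

```python
import os
import tempfile

# tempfile.mkdtemp creates a directory with a unique, unguessable name,
# giving the same collision avoidance as appending a timestamp, plus
# safe handling of races between concurrent runs.
schema_dir = tempfile.mkdtemp(prefix="recon-generated-schema-")
print(schema_dir)

assert os.path.isdir(schema_dir)
os.rmdir(schema_dir)  # clean up the empty directory
```

A timestamp-based name is simpler but can collide if two runs start within the same clock tick; a dedicated temp-directory API sidesteps that.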





[GitHub] [hadoop-ozone] swagle commented on a change in pull request #839: HDDS-3411. Switch Recon SQL DB to Derby.

2020-04-16 Thread GitBox
swagle commented on a change in pull request #839: HDDS-3411. Switch Recon SQL 
DB to Derby.
URL: https://github.com/apache/hadoop-ozone/pull/839#discussion_r409981449
 
 

 ##
 File path: 
hadoop-ozone/recon-codegen/src/main/java/org/hadoop/ozone/recon/codegen/ReconSqlDbConfig.java
 ##
 @@ -0,0 +1,50 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.hadoop.ozone.recon.codegen;
+
+import org.apache.hadoop.hdds.conf.Config;
+import org.apache.hadoop.hdds.conf.ConfigGroup;
+import org.apache.hadoop.hdds.conf.ConfigTag;
+import org.apache.hadoop.hdds.conf.ConfigType;
+
+/**
+ * The configuration class for the Recon SQL DB.
+ */
+@ConfigGroup(prefix = "ozone.recon.sql.db")
+public class ReconSqlDbConfig {
 
 Review comment:
   Why does this class provide only one configuration item? Shouldn't the 
JooqPersistence module properties be moved here as well?





[jira] [Updated] (HDDS-3223) Read a big object is too slow by s3g

2020-04-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Description: 
*What's the problem?*
Reading a 300M file costs about 25 seconds, i.e. 12M/s, which is too slow. I 
then captured the packets. As you can see from the image, reading a 300M file 
needs 10 GET requests, and each GET request reads about 32M.
The first GET request costs about 1 second, but the 10th GET request costs 
about 23 seconds.
 !screenshot-1.png! 

*What's the reason?*
On a GET, the call stack is: 
[IOUtils::copyLarge|https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java#L262]
 -> 
[IOUtils::skipFully|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1190]
 -> 
[IOUtils::skip|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L2064]
 -> 
[InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957].

This means the 10th GET request, which should read 270M-300M, must first skip 
0-270M, and to do that it also calls 
[InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957]
 on 0-270M. So the GET requests become slower and slower.

  was:
*What's the problem?*
Reading a 300M file costs about 25 seconds, i.e. 12.8M/s, which is too slow. I 
then captured the packets. As you can see from the image, reading a 300M file 
needs 10 GET requests, and each GET request reads about 32M.
The first GET request costs about 1 second, but the 10th GET request costs 
about 23 seconds.
 !screenshot-1.png! 

*What's the reason?*
On a GET, the call stack is: 
[IOUtils::copyLarge|https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java#L262]
 -> 
[IOUtils::skipFully|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1190]
 -> 
[IOUtils::skip|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L2064]
 -> 
[InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957].

This means the 10th GET request, which should read 270M-300M, must first skip 
0-270M, and to do that it also calls 
[InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957]
 on 0-270M. So the GET requests become slower and slower.


> Read a big object is too slow by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> *What's the problem?*
> Reading a 300M file costs about 25 seconds, i.e. 12M/s, which is too slow. I 
> then captured the packets. As you can see from the image, reading a 300M 
> file needs 10 GET requests, and each GET request reads about 32M.
> The first GET request costs about 1 second, but the 10th GET request costs 
> about 23 seconds.
>  !screenshot-1.png! 
> *What's the reason?*
> On a GET, the call stack is: 
> [IOUtils::copyLarge|https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java#L262]
>  -> 
> [IOUtils::skipFully|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1190]
>  -> 
> [IOUtils::skip|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L2064]
>  -> 
> [InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957].
> This means the 10th GET request, which should read 270M-300M, must first 
> skip 0-270M, and to do that it also calls 
> [InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957]
>  on 0-270M. So the GET requests become slower and slower.
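The quadratic behavior described above is easy to reproduce in miniature. The sketch below is illustrative Python, not the s3g code: skipping by reading consumes the skipped prefix one buffer at a time, so serving a later range costs work proportional to its offset, whereas a seekable stream positions itself directly.

```python
import io

def read_range_by_skipping(stream, offset, length, bufsize=8192):
    """Reach `offset` by reading and discarding, like IOUtils.skip does
    on a non-seekable stream, then read `length` bytes."""
    remaining = offset
    while remaining > 0:
        chunk = stream.read(min(bufsize, remaining))
        if not chunk:
            break
        remaining -= len(chunk)
    return stream.read(length)

data = bytes(1_000_000)

# Skip-by-read: touches ~900 KB of data just to position the stream.
s = io.BytesIO(data)
part = read_range_by_skipping(s, 900_000, 10)

# Seek: O(1) positioning, no bytes consumed on the way.
t = io.BytesIO(data)
t.seek(900_000)
assert part == t.read(10)
```

Summed over N range requests of equal size, skip-by-read does O(N^2) total reading, which matches the observed pattern of each successive GET taking longer than the last.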






[jira] [Commented] (HDDS-3223) Read a big object is slower than write it by s3g

2020-04-16 Thread runzhiwang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085414#comment-17085414
 ] 

runzhiwang commented on HDDS-3223:
--

I'm working on it.

> Read a big object is slower than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> *What's the problem?*
> Reading a 300M file costs about 25 seconds, i.e. 12.8M/s, which is too slow. 
> I then captured the packets. As you can see from the image, reading a 300M 
> file needs 10 GET requests, and each GET request reads about 32M.
> The first GET request costs about 1 second, but the 10th GET request costs 
> about 23 seconds.
>  !screenshot-1.png! 
> *What's the reason?*
> On a GET, the call stack is: 
> [IOUtils::copyLarge|https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java#L262]
>  -> 
> [IOUtils::skipFully|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1190]
>  -> 
> [IOUtils::skip|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L2064]
>  -> 
> [InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957].
> This means the 10th GET request, which should read 270M-300M, must first 
> skip 0-270M, and to do that it also calls 
> [InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957]
>  on 0-270M. So the GET requests become slower and slower.






[jira] [Updated] (HDDS-3223) Read a big object is too slow by s3g

2020-04-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Summary: Read a big object is too slow by s3g  (was: Read a big object is 
slower than write it by s3g)

> Read a big object is too slow by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> *What's the problem?*
> Reading a 300M file costs about 25 seconds, i.e. 12.8M/s, which is too slow. 
> I then captured the packets. As you can see from the image, reading a 300M 
> file needs 10 GET requests, and each GET request reads about 32M.
> The first GET request costs about 1 second, but the 10th GET request costs 
> about 23 seconds.
>  !screenshot-1.png! 
> *What's the reason?*
> On a GET, the call stack is: 
> [IOUtils::copyLarge|https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java#L262]
>  -> 
> [IOUtils::skipFully|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1190]
>  -> 
> [IOUtils::skip|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L2064]
>  -> 
> [InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957].
> This means the 10th GET request, which should read 270M-300M, must first 
> skip 0-270M, and to do that it also calls 
> [InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957]
>  on 0-270M. So the GET requests become slower and slower.






[jira] [Updated] (HDDS-3223) Read a big object is slower than write it by s3g

2020-04-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Description: 
*What's the problem?*
Reading a 300M file costs about 25 seconds, i.e. 12.8M/s, which is too slow. I 
then captured the packets. As you can see from the image, reading a 300M file 
needs 10 GET requests, and each GET request reads about 32M.
The first GET request costs about 1 second, but the 10th GET request costs 
about 23 seconds.
 !screenshot-1.png! 

*What's the reason?*
On a GET, the call stack is: 
[IOUtils::copyLarge|https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java#L262]
 -> 
[IOUtils::skipFully|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1190]
 -> 
[IOUtils::skip|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L2064]
 -> 
[InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957].

This means the 10th GET request, which should read 270M-300M, must first skip 
0-270M, and to do that it also calls 
[InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957]
 on 0-270M. So the GET requests become slower and slower.

  was:
*What's the problem?*
Reading a 320M file costs about 25 seconds, i.e. 12.8M/s, which is too slow. I 
then captured the packets. As you can see from the image, reading a 320M file 
needs 10 GET requests, and each GET request reads about 32M.
The first GET request costs about 1 second, but the 10th GET request costs 
about 23 seconds.
 !screenshot-1.png! 

*What's the reason?*
On a GET, the call stack is: 
[IOUtils::copyLarge|https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java#L262]
 -> 
[IOUtils::skipFully|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1190]
 -> 
[IOUtils::skip|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L2064]
 -> 
[InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957].


> Read a big object is slower than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
> *What's the problem?*
> Reading a 300M file costs about 25 seconds, i.e. 12.8M/s, which is too slow. 
> I then captured the packets. As you can see from the image, reading a 300M 
> file needs 10 GET requests, and each GET request reads about 32M.
> The first GET request costs about 1 second, but the 10th GET request costs 
> about 23 seconds.
>  !screenshot-1.png! 
> *What's the reason?*
> On a GET, the call stack is: 
> [IOUtils::copyLarge|https://github.com/apache/hadoop-ozone/blob/master/hadoop-ozone/s3gateway/src/main/java/org/apache/hadoop/ozone/s3/endpoint/ObjectEndpoint.java#L262]
>  -> 
> [IOUtils::skipFully|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1190]
>  -> 
> [IOUtils::skip|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L2064]
>  -> 
> [InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957].
> This means the 10th GET request, which should read 270M-300M, must first 
> skip 0-270M, and to do that it also calls 
> [InputStream::read|https://github.com/apache/commons-io/blob/master/src/main/java/org/apache/commons/io/IOUtils.java#L1957]
>  on 0-270M. So the GET requests become slower and slower.






[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #696: HDDS-3056. Allow users to list volumes they have access to, and optionally allow all users to list all volumes

2020-04-16 Thread GitBox
xiaoyuyao commented on a change in pull request #696: HDDS-3056. Allow users to 
list volumes they have access to, and optionally allow all users to list all 
volumes
URL: https://github.com/apache/hadoop-ozone/pull/696#discussion_r409977210
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
 ##
 @@ -453,23 +449,11 @@ public boolean checkVolumeAccess(String volume, 
OzoneAclInfo userAcl)
   @Override
   public List listVolumes(String userName,
   String prefix, String startKey, int maxKeys) throws IOException {
-metadataManager.getLock().acquireLock(USER_LOCK, userName);
+metadataManager.getLock().acquireWriteLock(USER_LOCK, userName);
 
 Review comment:
   I think we should change it to acquireReadLock. 
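The rationale behind the suggestion: listing volumes is read-only, so concurrent list calls need not block one another; a read lock permits that, while a write lock serializes them. A minimal reader-writer lock sketch in illustrative Python (not the OM lock-manager API):

```python
import threading

# Minimal reader-writer lock: many concurrent readers, exclusive writers.
# This is a textbook sketch, not the Ozone Manager lock implementation.
class RWLock:
    def __init__(self):
        self._readers = 0
        self._count_lock = threading.Lock()   # guards the reader count
        self._writer_lock = threading.Lock()  # held while readers are active

    def acquire_read(self):
        with self._count_lock:
            self._readers += 1
            if self._readers == 1:
                self._writer_lock.acquire()   # first reader blocks writers

    def release_read(self):
        with self._count_lock:
            self._readers -= 1
            if self._readers == 0:
                self._writer_lock.release()   # last reader unblocks writers

lock = RWLock()
lock.acquire_read()
lock.acquire_read()   # a second reader enters without blocking
lock.release_read()
lock.release_read()
```

With a write lock in `listVolumes`, the second "reader" above would have had to wait, which is the unnecessary serialization the review comment points out.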





[jira] [Updated] (HDDS-3223) Read a big object is slower than write it by s3g

2020-04-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Attachment: screenshot-1.png

> Read a big object is slower than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>







[jira] [Reopened] (HDDS-3223) Read a big object is slower than write it by s3g

2020-04-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang reopened HDDS-3223:
--

> Read a big object is slower than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>







[jira] [Updated] (HDDS-3223) Read a big object is slower than write it by s3g

2020-04-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Description:  !screenshot-1.png! 

> Read a big object is slower than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: screenshot-1.png
>
>
>  !screenshot-1.png! 






[jira] [Updated] (HDDS-3223) Read a big object cost 2 times more than write it by s3g

2020-04-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Description: (was: Via s3gateway, writing a 187MB file costs 5 seconds, but 
reading it costs 17 seconds. Both write and read split the 187MB file into 24 
parts, so write/read issues 24 POST/GET requests. I find that s3g processes the 
first 10 GET requests in parallel and the next 14 GET requests sequentially. I 
used {code:java}tcpdump -i eth0 -s 0 -A 'tcp dst port 9878 and tcp[((tcp[12:1] & 
0xf0) >> 2):4] = 0x47455420'  -w read.cap{code} to capture the GET requests to 
s3gateway, as the first image shows. The first 10 GET requests range from 3.54 
seconds to 3.56 seconds, but the next 14 GET requests range from 4.41 seconds 
to 12.23 seconds. I also captured the PUT requests to s3gateway; as the second 
image shows, the 24 PUT requests range from 0.63 seconds to 3.48 seconds, which 
is why write is faster than read. I think the cause is in aws-cli; I will 
continue to investigate.
 !screenshot-3.png!
 !screenshot-5.png! )

> Read a big object cost 2 times more than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>







[jira] [Updated] (HDDS-3223) Read a big object is slower than write it by s3g

2020-04-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Attachment: (was: screenshot-5.png)

> Read a big object is slower than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>







[jira] [Issue Comment Deleted] (HDDS-3223) Read a big object cost 2 times more than write it by s3g

2020-04-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Comment: was deleted

(was: I think it's related to our own operating system.)

> Read a big object cost 2 times more than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>







[jira] [Updated] (HDDS-3223) Read a big object is slower than write it by s3g

2020-04-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Summary: Read a big object is slower than write it by s3g  (was: Read a big 
object cost 2 times more than write it by s3g)

> Read a big object is slower than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>







[jira] [Updated] (HDDS-3223) Read a big object is slower than write it by s3g

2020-04-16 Thread runzhiwang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

runzhiwang updated HDDS-3223:
-
Attachment: (was: screenshot-3.png)

> Read a big object is slower than write it by s3g
> 
>
> Key: HDDS-3223
> URL: https://issues.apache.org/jira/browse/HDDS-3223
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
>







[jira] [Updated] (HDDS-3411) Switch Recon SQL DB to Derby.

2020-04-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3411:
-
Labels: pull-request-available  (was: )

> Switch Recon SQL DB to Derby.
> -
>
> Key: HDDS-3411
> URL: https://issues.apache.org/jira/browse/HDDS-3411
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: Ozone Recon
>Affects Versions: 0.6.0
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>
> Recon currently uses Sqlite as its de facto SQL DB, with an option to 
> configure other JDBC-compatible databases. However, on some platforms, such 
> as IBM PowerPC, this causes problems at compile time because the Sqlite 
> native driver is not available there. This task changes the default SQL DB 
> used by Recon to Derby while retaining out-of-the-box support for Sqlite (no 
> need to supply the driver). 
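A key practical difference is that Derby's embedded driver is pure Java, while sqlite-jdbc bundles a per-platform native library. A minimal sketch of selecting a driver from a JDBC URL prefix; the helper itself is hypothetical (not Recon's actual code), though the driver class names are the standard ones for Derby and sqlite-jdbc:

```java
public class JdbcDriverSelect {
    // Hypothetical helper: pick a JDBC driver class from the URL prefix.
    // Derby's embedded driver is pure Java, so no platform-specific native
    // library is needed, unlike the sqlite-jdbc native binding.
    static String driverFor(String jdbcUrl) {
        if (jdbcUrl.startsWith("jdbc:derby:")) {
            return "org.apache.derby.jdbc.EmbeddedDriver";
        }
        if (jdbcUrl.startsWith("jdbc:sqlite:")) {
            return "org.sqlite.JDBC";
        }
        throw new IllegalArgumentException("Unsupported JDBC URL: " + jdbcUrl);
    }

    public static void main(String[] args) {
        // ";create=true" is Derby's standard create-if-absent URL attribute.
        System.out.println(driverFor("jdbc:derby:/tmp/recon.db;create=true"));
    }
}
```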






[GitHub] [hadoop-ozone] avijayanhwx opened a new pull request #839: HDDS-3411. Switch Recon SQL DB to Derby.

2020-04-16 Thread GitBox
avijayanhwx opened a new pull request #839: HDDS-3411. Switch Recon SQL DB to 
Derby.
URL: https://github.com/apache/hadoop-ozone/pull/839
 
 
   ## What changes were proposed in this pull request?
   
   Recon currently uses Sqlite as its de facto SQL DB, with an option to 
configure other JDBC-compatible databases. However, on some platforms, such as 
IBM PowerPC, this causes problems at compile time because the Sqlite native 
driver is not available there. This task changes the default SQL DB used by 
Recon to Derby while retaining out-of-the-box support for Sqlite (no need to 
supply the driver).
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-3411
   
   ## How was this patch tested?
   Added unit tests.
   Manually tested on docker with Derby and Sqlite.
   Built the recon modules on IBM PPC host.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-3411) Switch Recon SQL DB to Derby.

2020-04-16 Thread Aravindan Vijayan (Jira)
Aravindan Vijayan created HDDS-3411:
---

 Summary: Switch Recon SQL DB to Derby.
 Key: HDDS-3411
 URL: https://issues.apache.org/jira/browse/HDDS-3411
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: Ozone Recon
Affects Versions: 0.6.0
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.6.0


Recon currently uses Sqlite as its de facto SQL DB, with an option to 
configure other JDBC-compatible databases. However, on some platforms, such as 
IBM PowerPC, this causes problems at compile time because the Sqlite native 
driver is not available there. This task changes the default SQL DB used by 
Recon to Derby while retaining out-of-the-box support for Sqlite (no need to 
supply the driver). 






[GitHub] [hadoop-ozone] maobaolong commented on issue #825: HDDS-3395. Move the protobuf convert code to the OMHelper

2020-04-16 Thread GitBox
maobaolong commented on issue #825: HDDS-3395. Move the protobuf convert code 
to the OMHelper
URL: https://github.com/apache/hadoop-ozone/pull/825#issuecomment-615009528
 
 
   @bharatviswa504 Thank you for your review and your reply. These are the 
reasons I made this move.
   
   - If I have not built my repo locally, the source tree lacks the Java 
source files generated by protobuf, so with the current approach many errors 
appear in classes that reference protobuf-generated classes when viewing the 
source in an IDE, whether IntelliJ IDEA or Eclipse.
   - Putting the serialization and deserialization logic into the high-level 
classes harms the readability of the high-level code; many readers care more 
about the data structures than about how they are serialized. Currently each 
class has two methods for serialization and deserialization, and the IDE 
flags them in red until I have successfully compiled the repo.
   - I think we can separate these classes into two layers, with the 
serialization and deserialization logic in the low-level layer.
   - In fact, I was influenced by HDFS and Alluxio, so I am very happy to 
discuss this with you further.
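The two-layer split proposed above can be sketched as follows; the class and method names are illustrative (a toy string format stands in for protobuf), not Ozone's actual API:

```java
public class LayeredConversionSketch {
    // High-level layer: plain data structure, no serialization code.
    static final class VolumeInfo {
        final String name;
        VolumeInfo(String name) { this.name = name; }
    }

    // Low-level layer: owns the wire conversion, here a toy string format
    // standing in for protobuf.
    static final class VolumeInfoCodec {
        static String toWire(VolumeInfo v) { return "volume:" + v.name; }
        static VolumeInfo fromWire(String s) {
            return new VolumeInfo(s.substring("volume:".length()));
        }
    }

    public static void main(String[] args) {
        VolumeInfo v = VolumeInfoCodec.fromWire(
                VolumeInfoCodec.toWire(new VolumeInfo("vol1")));
        System.out.println(v.name); // prints vol1
    }
}
```

With this layering, readers of `VolumeInfo` never see conversion code, and the data class compiles even before the generated sources exist.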





[jira] [Updated] (HDDS-3406) Remove RetryInvocation INFO logging from ozone CLI output

2020-04-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3406:
-
Labels: pull-request-available  (was: )

> Remove RetryInvocation INFO logging from ozone CLI output
> -
>
> Key: HDDS-3406
> URL: https://issues.apache.org/jira/browse/HDDS-3406
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: om, Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
>
> In OM HA failover proxy provider, RetryInvocationHandler logs error stack 
> trace when client tries contacting non-leader OM. Instead we can just log a 
> message that the failover will happen and not include the stack trace.
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
>  OM:om2 is not the leader. Suggested leader is OM:om3. at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:186)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:174)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:110)
>  at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:98)
>  at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682), while 
> invoking $Proxy16.submitRequest over nodeId=om2,nodeAddress=om2:9862 after 1 
> failover attempts. Trying to failover immediately.
> {code}
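A minimal sketch of the proposed behavior: build a one-line failover notice and log only that, dropping the stack trace. The method name is hypothetical and does not reflect the actual RetryInvocationHandler code:

```java
public class FailoverLogging {
    // Build the concise one-line notice; the caller would log this at INFO
    // and discard the RemoteException's stack trace entirely.
    static String conciseFailoverNotice(String currentNode, String suggestedLeader) {
        return "OM:" + currentNode + " is not the leader. Suggested leader is OM:"
                + suggestedLeader + ". Trying to failover immediately.";
    }

    public static void main(String[] args) {
        System.out.println(conciseFailoverNotice("om2", "om3"));
    }
}
```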






[GitHub] [hadoop-ozone] hanishakoneru opened a new pull request #838: HDDS-3406. Remove RetryInvocation INFO logging from ozone CLI output

2020-04-16 Thread GitBox
hanishakoneru opened a new pull request #838: HDDS-3406. Remove RetryInvocation 
INFO logging from ozone CLI output
URL: https://github.com/apache/hadoop-ozone/pull/838
 
 
   ## What changes were proposed in this pull request?
   
   In OM HA failover proxy provider, RetryInvocationHandler logs error stack 
trace when client tries contacting non-leader OM. Instead we can just log a 
message that the failover will happen and not include the stack trace.
   
   
`org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
 OM:om2 is not the leader. Suggested leader is OM:om3. at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:186)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:174)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:110)
 at 
org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:98)
 at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682), while invoking 
$Proxy16.submitRequest over nodeId=om2,nodeAddress=om2:9862 after 1 failover 
attempts. Trying to failover immediately.`
   
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3406
   
   ## How was this patch tested?
   
   Manually tested in a docker cluster.
   





[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #696: HDDS-3056. Allow users to list volumes they have access to, and optionally allow all users to list all volumes

2020-04-16 Thread GitBox
smengcl commented on a change in pull request #696: HDDS-3056. Allow users to 
list volumes they have access to, and optionally allow all users to list all 
volumes
URL: https://github.com/apache/hadoop-ozone/pull/696#discussion_r409905641
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
 ##
 @@ -453,23 +449,11 @@ public boolean checkVolumeAccess(String volume, 
OzoneAclInfo userAcl)
   @Override
   public List listVolumes(String userName,
   String prefix, String startKey, int maxKeys) throws IOException {
-metadataManager.getLock().acquireLock(USER_LOCK, userName);
+metadataManager.getLock().acquireWriteLock(USER_LOCK, userName);
 
 Review comment:
   I could either leave it alone, or change it to `acquireReadLock`. The latter 
should be fine though.
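The read-lock suggestion can be illustrated with a standard `ReentrantReadWriteLock`; this is a generic sketch, not Ozone's actual lock API. A read-only list call takes the read lock so concurrent readers do not block each other, while mutations take the write lock:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadLockListSketch {
    private static final ReentrantReadWriteLock LOCK = new ReentrantReadWriteLock();
    private static final List<String> VOLUMES =
            new ArrayList<>(List.of("vol1", "vol2"));

    // Read-only operation: the read lock allows concurrent readers.
    static List<String> listVolumes() {
        LOCK.readLock().lock();
        try {
            return new ArrayList<>(VOLUMES); // defensive copy under the lock
        } finally {
            LOCK.readLock().unlock();
        }
    }

    // Mutation: the write lock excludes both readers and other writers.
    static void addVolume(String name) {
        LOCK.writeLock().lock();
        try {
            VOLUMES.add(name);
        } finally {
            LOCK.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(listVolumes()); // prints [vol1, vol2]
    }
}
```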





[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #696: HDDS-3056. Allow users to list volumes they have access to, and optionally allow all users to list all volumes

2020-04-16 Thread GitBox
smengcl commented on a change in pull request #696: HDDS-3056. Allow users to 
list volumes they have access to, and optionally allow all users to list all 
volumes
URL: https://github.com/apache/hadoop-ozone/pull/696#discussion_r409897810
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
 ##
 @@ -453,23 +449,11 @@ public boolean checkVolumeAccess(String volume, 
OzoneAclInfo userAcl)
   @Override
   public List listVolumes(String userName,
   String prefix, String startKey, int maxKeys) throws IOException {
-metadataManager.getLock().acquireLock(USER_LOCK, userName);
+metadataManager.getLock().acquireWriteLock(USER_LOCK, userName);
 
 Review comment:
   This is merely cleaning up the deprecated call.





[GitHub] [hadoop-ozone] smengcl commented on a change in pull request #696: HDDS-3056. Allow users to list volumes they have access to, and optionally allow all users to list all volumes

2020-04-16 Thread GitBox
smengcl commented on a change in pull request #696: HDDS-3056. Allow users to 
list volumes they have access to, and optionally allow all users to list all 
volumes
URL: https://github.com/apache/hadoop-ozone/pull/696#discussion_r409897865
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerListVolumes.java
 ##
 @@ -0,0 +1,238 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import static 
org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_RATIS_PIPELINE_LIMIT;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_VOLUME_LISTALL_ALLOWED;
+import static org.apache.hadoop.ozone.security.acl.OzoneObj.StoreType.OZONE;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+/**
+ * Test OzoneManager list volume operation under combinations of configs.
+ */
+public class TestOzoneManagerListVolumes {
+
+  @Rule
+  public Timeout timeout = new Timeout(120_000);
+
+  private UserGroupInformation loginUser;
+  private UserGroupInformation user1 =
+  UserGroupInformation.createRemoteUser("user1");  // Admin user
+  private UserGroupInformation user2 =
+  UserGroupInformation.createRemoteUser("user2");  // Non-admin user
+
+  @Before
+  public void init() throws Exception {
+loginUser = UserGroupInformation.getLoginUser();
+  }
+
+  /**
+   * Create a MiniDFSCluster for testing.
+   */
+  private MiniOzoneCluster startCluster(boolean aclEnabled,
+  boolean volListAllAllowed) throws Exception {
+
+OzoneConfiguration conf = new OzoneConfiguration();
+String clusterId = UUID.randomUUID().toString();
+String scmId = UUID.randomUUID().toString();
+String omId = UUID.randomUUID().toString();
+conf.setInt(OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS, 2);
+conf.set(OZONE_ADMINISTRATORS, "user1");
+conf.setInt(OZONE_SCM_RATIS_PIPELINE_LIMIT, 10);
+
+// Use native impl here, default impl doesn't do actual checks
+conf.set(OZONE_ACL_AUTHORIZER_CLASS, OZONE_ACL_AUTHORIZER_CLASS_NATIVE);
+// Note: OM doesn't support live config reloading
+conf.setBoolean(OZONE_ACL_ENABLED, aclEnabled);
+conf.setBoolean(OZONE_OM_VOLUME_LISTALL_ALLOWED, volListAllAllowed);
+
+MiniOzoneCluster cluster = MiniOzoneCluster.newBuilder(conf)
+.setClusterId(clusterId).setScmId(scmId).setOmId(omId).build();
+cluster.waitForClusterToBeReady();
+
+// loginUser is the user running this test.
+// Implication: loginUser is automatically added to the OM admin list.
+UserGroupInformation.setLoginUser(loginUser);
+// Create volumes with non-default owners and ACLs
+OzoneClient client = cluster.getClient();
+ObjectStore objectStore = client.getObjectStore();
+
+/* r = READ, w = WRITE, c = CREATE, d = DELETE
+   l = LIST, a = ALL, n = NONE, x = READ_ACL, y = WRITE_ACL */
+String aclUser1All = "user:user1:a";
+  

[GitHub] [hadoop-ozone] bharatviswa504 edited a comment on issue #825: HDDS-3395. Move the protobuf convert code to the OMHelper

2020-04-16 Thread GitBox
bharatviswa504 edited a comment on issue #825: HDDS-3395. Move the protobuf 
convert code to the OMHelper
URL: https://github.com/apache/hadoop-ozone/pull/825#issuecomment-614930795
 
 
   I believe the current approach, where each class holds the logic for 
conversion to/from protobuf, looks cleaner than moving the proto conversions 
into OMPBHelper. With the PR approach there would be one huge class that does 
all the conversion. The current way seems easier to follow, since each class 
keeps its entire conversion code in one place.
   
   Thoughts? If there is a strong reason for doing it this way, I am happy to 
hear it.





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #825: HDDS-3395. Move the protobuf convert code to the OMHelper

2020-04-16 Thread GitBox
bharatviswa504 commented on issue #825: HDDS-3395. Move the protobuf convert 
code to the OMHelper
URL: https://github.com/apache/hadoop-ozone/pull/825#issuecomment-614930795
 
 
   I believe the current approach of each class having the logic for 
conversion to/from protobuf looks cleaner than moving this into OMPBHelper. 
With the PR approach, there would be one huge class that does all this 
conversion. I feel the current way is easy to follow, since each class keeps 
its entire conversion code in a single place.
   
   Thoughts? If there is a strong reason for doing it this way, I am happy to 
hear it.





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #833: HDDS-3314. scmcli container info command failing intermittently

2020-04-16 Thread GitBox
bharatviswa504 commented on a change in pull request #833: HDDS-3314. scmcli 
container info command failing intermittently
URL: https://github.com/apache/hadoop-ozone/pull/833#discussion_r409885190
 
 

 ##
 File path: 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/ContainerOperationClient.java
 ##
 @@ -382,7 +382,7 @@ public ContainerDataProto readContainer(long containerID,
   Pipeline pipeline) throws IOException {
 XceiverClientSpi client = null;
 try {
-  client = xceiverClientManager.acquireClient(pipeline);
+  client = xceiverClientManager.acquireClientForReadData(pipeline);
 
 Review comment:
   I have not really understood this change; we only switched from 
acquireClient to acquireClientForReadData.
   How will that handle failures, and how will it retry on other datanodes?
   
   Also, InfoSubCommand uses the code below, and it does not call 
readContainer:
 final ContainerWithPipeline container = scmClient.
 getContainerWithPipeline(containerID);
   
   >   final ContainerWithPipeline container = scmClient.
   >   getContainerWithPipeline(containerID);
   >   Preconditions.checkNotNull(container, "Container cannot be null");
   
   If possible, can you explain this change? I am not sure if I am missing 
something here.






[jira] [Updated] (HDDS-3409) Update download links

2020-04-16 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HDDS-3409:

Description: The download links for signatures/checksums/KEYS should be 
updated from dist.apache.org to https://downloads.apache.org/hadoop/ozone/.  
(was: The download lists for signatures/checksums/KEYS should be updated from 
dist.apache.org to https://downloads.apache.org/hadoop/ozone/.)

> Update download links
> -
>
> Key: HDDS-3409
> URL: https://issues.apache.org/jira/browse/HDDS-3409
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: website
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> The download links for signatures/checksums/KEYS should be updated from 
> dist.apache.org to https://downloads.apache.org/hadoop/ozone/.






[jira] [Created] (HDDS-3410) Update download links

2020-04-16 Thread Arpit Agarwal (Jira)
Arpit Agarwal created HDDS-3410:
---

 Summary: Update download links
 Key: HDDS-3410
 URL: https://issues.apache.org/jira/browse/HDDS-3410
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: website
Reporter: Arpit Agarwal


The download links for signatures/checksums/KEYS should be updated from 
dist.apache.org to https://downloads.apache.org/hadoop/ozone/.






[jira] [Moved] (HDDS-3409) Update download links

2020-04-16 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal moved HADOOP-16992 to HDDS-3409:
--

Component/s: (was: website)
 website
Key: HDDS-3409  (was: HADOOP-16992)
   Workflow: patch-available, re-open possible  (was: no-reopen-closed, 
patch-avail)
Project: Hadoop Distributed Data Store  (was: Hadoop Common)

> Update download links
> -
>
> Key: HDDS-3409
> URL: https://issues.apache.org/jira/browse/HDDS-3409
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: website
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: newbie
>
> The download links for signatures/checksums/KEYS should be updated from 
> dist.apache.org to https://downloads.apache.org/hadoop/ozone/.






[jira] [Resolved] (HDDS-3401) Ozone audit entries could be consistent among volume creation with quota and update quota

2020-04-16 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-3401.
--
Fix Version/s: 0.6.0
   Resolution: Fixed

> Ozone audit entries could be consistent among volume creation with quota and 
> update quota
> -
>
> Key: HDDS-3401
> URL: https://issues.apache.org/jira/browse/HDDS-3401
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: om
>Affects Versions: 0.5.0
>Reporter: Srinivasu Majeti
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> 2020-04-14 09:44:55,089 | INFO  | OMAudit | user=root | ip=172.25.40.156 | 
> op=CREATE_VOLUME
> {admin=root, owner=hdfs, volume=hive2, creationTime=1586857495055, 
> *quotaInBytes=1099511627776*, objectID=1792, updateID=7}
> | ret=SUCCESS |
> 2020-04-14 09:58:09,634 | INFO  | OMAudit | user=root | ip=172.25.40.156 | 
> op=SET_QUOTA
> {volume=hive, *quota=536870912000*}
> | ret=SUCCESS |
>  
> OMVolumeSetQuotaRequest.java -> auditMap.put(OzoneConsts.QUOTA,     
> String.valueOf(setVolumePropertyRequest.getQuotaInBytes()));
>  
> We can use OzoneConsts.QUOTA_IN_BYTES instead of OzoneConsts.QUOTA
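The fix amounts to writing the SET_QUOTA audit map with the same key that CREATE_VOLUME already uses. A minimal, hypothetical sketch of the idea (the constant names mirror OzoneConsts, but the class below is a stand-in for illustration, not the real OM request code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AuditKeySketch {
    // Stand-ins for OzoneConsts.QUOTA and OzoneConsts.QUOTA_IN_BYTES.
    static final String QUOTA = "quota";
    static final String QUOTA_IN_BYTES = "quotaInBytes";

    // Audit map for SET_QUOTA, now keyed consistently with CREATE_VOLUME.
    static Map<String, String> setQuotaAuditMap(String volume, long quotaInBytes) {
        Map<String, String> auditMap = new LinkedHashMap<>();
        auditMap.put("volume", volume);
        // Before the fix this used QUOTA ("quota"), producing audit entries
        // that did not line up with CREATE_VOLUME's "quotaInBytes" field.
        auditMap.put(QUOTA_IN_BYTES, String.valueOf(quotaInBytes));
        return auditMap;
    }

    public static void main(String[] args) {
        System.out.println(setQuotaAuditMap("hive", 536870912000L));
    }
}
```

With this change, grepping the OM audit log for `quotaInBytes` finds both the creation-time quota and later quota updates.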






[jira] [Updated] (HDDS-3401) Ozone audit entries could be consistent among volume creation with quota and update quota

2020-04-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3401:
-
Labels: pull-request-available  (was: )

> Ozone audit entries could be consistent among volume creation with quota and 
> update quota
> -
>
> Key: HDDS-3401
> URL: https://issues.apache.org/jira/browse/HDDS-3401
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: om
>Affects Versions: 0.5.0
>Reporter: Srinivasu Majeti
>Assignee: Bharat Viswanadham
>Priority: Minor
>  Labels: pull-request-available
>
> 2020-04-14 09:44:55,089 | INFO  | OMAudit | user=root | ip=172.25.40.156 | 
> op=CREATE_VOLUME
> {admin=root, owner=hdfs, volume=hive2, creationTime=1586857495055, 
> *quotaInBytes=1099511627776*, objectID=1792, updateID=7}
> | ret=SUCCESS |
> 2020-04-14 09:58:09,634 | INFO  | OMAudit | user=root | ip=172.25.40.156 | 
> op=SET_QUOTA
> {volume=hive, *quota=536870912000*}
> | ret=SUCCESS |
>  
> OMVolumeSetQuotaRequest.java -> auditMap.put(OzoneConsts.QUOTA,     
> String.valueOf(setVolumePropertyRequest.getQuotaInBytes()));
>  
> We can use OzoneConsts.QUOTA_IN_BYTES instead of OzoneConsts.QUOTA






[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #832: HDDS-3401. Ozone audit entries could be consistent among volume creation with quota and update quota

2020-04-16 Thread GitBox
bharatviswa504 commented on issue #832: HDDS-3401. Ozone audit entries could be 
consistent among volume creation with quota and update quota
URL: https://github.com/apache/hadoop-ozone/pull/832#issuecomment-614920215
 
 
   Thank You @arp7 for the review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #832: HDDS-3401. Ozone audit entries could be consistent among volume creation with quota and update quota

2020-04-16 Thread GitBox
bharatviswa504 merged pull request #832: HDDS-3401. Ozone audit entries could 
be consistent among volume creation with quota and update quota
URL: https://github.com/apache/hadoop-ozone/pull/832
 
 
   





[jira] [Created] (HDDS-3408) Rename ChunkLayOutVersion -> ContainerLayOutVersion

2020-04-16 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-3408:


 Summary: Rename ChunkLayOutVersion -> ContainerLayOutVersion
 Key: HDDS-3408
 URL: https://issues.apache.org/jira/browse/HDDS-3408
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


This layOutVersion defines/handles any changes related to chunk, block, and 
container data, so it effectively covers the entire container. This jira 
proposes renaming ChunkLayOutVersion -> ContainerLayOutVersion, which gives a 
clearer description of what this layOutVersion field covers.






[jira] [Assigned] (HDDS-3408) Rename ChunkLayOutVersion -> ContainerLayOutVersion

2020-04-16 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham reassigned HDDS-3408:


Assignee: Bharat Viswanadham

> Rename ChunkLayOutVersion -> ContainerLayOutVersion
> ---
>
> Key: HDDS-3408
> URL: https://issues.apache.org/jira/browse/HDDS-3408
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> This layOutVersion defines/handles any changes related to chunk, block, and 
> container data, so it effectively covers the entire container. This jira 
> proposes renaming ChunkLayOutVersion -> ContainerLayOutVersion, which gives a 
> clearer description of what this layOutVersion field covers.






[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #742: HDDS-3217. Datanode startup is slow due to iterating container DB 2-3 times.

2020-04-16 Thread GitBox
bharatviswa504 commented on issue #742: HDDS-3217. Datanode startup is slow due 
to iterating container DB 2-3 times.
URL: https://github.com/apache/hadoop-ozone/pull/742#issuecomment-614904094
 
 
   Thank You @bshashikant for the review.
   Added code changes to handle the upgrade. This was brought up during an 
offline discussion with @hanishakoneru.
   
   @hanishakoneru Handled the upgrade scenario as well. 





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #696: HDDS-3056. Allow users to list volumes they have access to, and optionally allow all users to list all volumes

2020-04-16 Thread GitBox
xiaoyuyao commented on a change in pull request #696: HDDS-3056. Allow users to 
list volumes they have access to, and optionally allow all users to list all 
volumes
URL: https://github.com/apache/hadoop-ozone/pull/696#discussion_r409834931
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/VolumeManagerImpl.java
 ##
 @@ -453,23 +449,11 @@ public boolean checkVolumeAccess(String volume, 
OzoneAclInfo userAcl)
   @Override
   public List listVolumes(String userName,
   String prefix, String startKey, int maxKeys) throws IOException {
-metadataManager.getLock().acquireLock(USER_LOCK, userName);
+metadataManager.getLock().acquireWriteLock(USER_LOCK, userName);
 
 Review comment:
   Do we really need a write lock for list volume operation?
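For illustration, the distinction the reviewer is raising can be sketched with plain JDK locks (a generic sketch, not the Ozone lock-manager API): a read-only operation like listVolumes only needs the shared read lock, so concurrent listings do not serialize behind one another, while mutations take the exclusive write lock.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ListVolumesLockSketch {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final List<String> volumes = new ArrayList<>();

    void addVolume(String name) {
        lock.writeLock().lock();       // mutation: exclusive lock required
        try {
            volumes.add(name);
        } finally {
            lock.writeLock().unlock();
        }
    }

    List<String> listVolumes() {
        lock.readLock().lock();        // read-only: shared lock suffices,
        try {                          // multiple listings can run in parallel
            return new ArrayList<>(volumes);
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        ListVolumesLockSketch s = new ListVolumesLockSketch();
        s.addVolume("vol1");
        System.out.println(s.listVolumes()); // prints [vol1]
    }
}
```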





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #696: HDDS-3056. Allow users to list volumes they have access to, and optionally allow all users to list all volumes

2020-04-16 Thread GitBox
xiaoyuyao commented on a change in pull request #696: HDDS-3056. Allow users to 
list volumes they have access to, and optionally allow all users to list all 
volumes
URL: https://github.com/apache/hadoop-ozone/pull/696#discussion_r409833261
 
 

 ##
 File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/om/TestOzoneManagerListVolumes.java
 ##
 @@ -0,0 +1,238 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om;
+
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Set;
+import java.util.UUID;
+
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.ozone.MiniOzoneCluster;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.client.ObjectStore;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.client.OzoneVolume;
+import org.apache.hadoop.ozone.client.protocol.ClientProtocol;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.apache.hadoop.ozone.security.acl.OzoneObjInfo;
+import org.apache.hadoop.security.UserGroupInformation;
+
+import static 
org.apache.hadoop.hdds.scm.ScmConfigKeys.OZONE_SCM_RATIS_PIPELINE_LIMIT;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_AUTHORIZER_CLASS_NATIVE;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ACL_ENABLED;
+import static org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_ADMINISTRATORS;
+import static 
org.apache.hadoop.ozone.OzoneConfigKeys.OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS;
+import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_VOLUME_LISTALL_ALLOWED;
+import static org.apache.hadoop.ozone.security.acl.OzoneObj.StoreType.OZONE;
+import org.junit.Assert;
+import org.junit.Before;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.Timeout;
+
+/**
+ * Test OzoneManager list volume operation under combinations of configs.
+ */
+public class TestOzoneManagerListVolumes {
+
+  @Rule
+  public Timeout timeout = new Timeout(120_000);
+
+  private UserGroupInformation loginUser;
+  private UserGroupInformation user1 =
+  UserGroupInformation.createRemoteUser("user1");  // Admin user
+  private UserGroupInformation user2 =
+  UserGroupInformation.createRemoteUser("user2");  // Non-admin user
+
+  @Before
+  public void init() throws Exception {
+loginUser = UserGroupInformation.getLoginUser();
+  }
+
+  /**
+   * Create a MiniDFSCluster for testing.
+   */
+  private MiniOzoneCluster startCluster(boolean aclEnabled,
+  boolean volListAllAllowed) throws Exception {
+
+OzoneConfiguration conf = new OzoneConfiguration();
+String clusterId = UUID.randomUUID().toString();
+String scmId = UUID.randomUUID().toString();
+String omId = UUID.randomUUID().toString();
+conf.setInt(OZONE_OPEN_KEY_EXPIRE_THRESHOLD_SECONDS, 2);
+conf.set(OZONE_ADMINISTRATORS, "user1");
+conf.setInt(OZONE_SCM_RATIS_PIPELINE_LIMIT, 10);
+
+// Use native impl here, default impl doesn't do actual checks
+conf.set(OZONE_ACL_AUTHORIZER_CLASS, OZONE_ACL_AUTHORIZER_CLASS_NATIVE);
+// Note: OM doesn't support live config reloading
+conf.setBoolean(OZONE_ACL_ENABLED, aclEnabled);
+conf.setBoolean(OZONE_OM_VOLUME_LISTALL_ALLOWED, volListAllAllowed);
+
+MiniOzoneCluster cluster = MiniOzoneCluster.newBuilder(conf)
+.setClusterId(clusterId).setScmId(scmId).setOmId(omId).build();
+cluster.waitForClusterToBeReady();
+
+// loginUser is the user running this test.
+// Implication: loginUser is automatically added to the OM admin list.
+UserGroupInformation.setLoginUser(loginUser);
+// Create volumes with non-default owners and ACLs
+OzoneClient client = cluster.getClient();
+ObjectStore objectStore = client.getObjectStore();
+
+/* r = READ, w = WRITE, c = CREATE, d = DELETE
+   l = LIST, a = ALL, n = NONE, x = READ_ACL, y = WRITE_ACL */
+String aclUser1All = "user:user1:a";

[jira] [Resolved] (HDDS-3392) OM create key/file should not generate different data encryption key during validateAndUpdateCache

2020-04-16 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal resolved HDDS-3392.
-
   Fix Version/s: 0.6.0
Target Version/s:   (was: 0.6.0)
  Resolution: Fixed

> OM create key/file should not generate different data encryption key during 
> validateAndUpdateCache
> --
>
> Key: HDDS-3392
> URL: https://issues.apache.org/jira/browse/HDDS-3392
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 0.5.0
>Reporter: Nilotpal Nandi
>Assignee: Bharat Viswanadham
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The problem with generating a different data encryption key for the same file 
> across different OM instances is that when the OM leader changes, the client 
> may not be able to read the data correctly. 
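The idea behind the fix can be sketched as follows: generate the random data encryption key once (e.g. in pre-execute on the leader) and carry that value inside the request, so every OM that applies the transaction records the same key, instead of each OM generating a fresh key in validateAndUpdateCache. This is an illustrative sketch with hypothetical names, not the actual OM code path.

```java
import java.security.SecureRandom;
import java.util.Arrays;

public class DekReuseSketch {
    // Generate a random 128-bit data encryption key, as a KMS-backed
    // provider would. This must happen exactly once per key-create request.
    static byte[] generateDek() {
        byte[] dek = new byte[16];
        new SecureRandom().nextBytes(dek);
        return dek;
    }

    public static void main(String[] args) {
        // Generate once, carry the key inside the replicated request...
        byte[] fromPreExecute = generateDek();
        // ...so every OM applying the transaction sees the same key bytes.
        byte[] appliedOnOm1 = fromPreExecute;
        byte[] appliedOnOm2 = fromPreExecute;
        System.out.println(Arrays.equals(appliedOnOm1, appliedOnOm2)); // prints true
    }
}
```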






[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #830: HDDS-3392.OM create key/file should not generate different data encryption key during validateAndUpdateCache.

2020-04-16 Thread GitBox
bharatviswa504 merged pull request #830: HDDS-3392.OM create key/file should 
not generate different data encryption key during validateAndUpdateCache.
URL: https://github.com/apache/hadoop-ozone/pull/830
 
 
   





[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #830: HDDS-3392.OM create key/file should not generate different data encryption key during validateAndUpdateCache.

2020-04-16 Thread GitBox
bharatviswa504 commented on issue #830: HDDS-3392.OM create key/file should not 
generate different data encryption key during validateAndUpdateCache.
URL: https://github.com/apache/hadoop-ozone/pull/830#issuecomment-614852947
 
 
   Thank You @xiaoyuyao for the review.





[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #830: HDDS-3392.OM create key/file should not generate different data encryption key during validateAndUpdateCache.

2020-04-16 Thread GitBox
xiaoyuyao commented on a change in pull request #830: HDDS-3392.OM create 
key/file should not generate different data encryption key during 
validateAndUpdateCache.
URL: https://github.com/apache/hadoop-ozone/pull/830#discussion_r409799258
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
 ##
 @@ -213,10 +213,9 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 
   OmBucketInfo bucketInfo = omMetadataManager.getBucketTable().get(
   omMetadataManager.getBucketKey(volumeName, bucketName));
-  encryptionInfo = getFileEncryptionInfo(ozoneManager, bucketInfo);
 
   omKeyInfo = prepareKeyInfo(omMetadataManager, keyArgs, dbKeyInfo,
-  keyArgs.getDataSize(), locations, encryptionInfo.orNull(),
+  keyArgs.getDataSize(), locations,  getFileEncryptionInfo(keyArgs),
 
 Review comment:
   Ok to me.  Let's fix that later. +1. 





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #830: HDDS-3392.OM create key/file should not generate different data encryption key during validateAndUpdateCache.

2020-04-16 Thread GitBox
bharatviswa504 commented on a change in pull request #830: HDDS-3392.OM create 
key/file should not generate different data encryption key during 
validateAndUpdateCache.
URL: https://github.com/apache/hadoop-ozone/pull/830#discussion_r409798023
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
 ##
 @@ -451,4 +446,62 @@ protected void checkKeyAclsInOpenKeyTable(OzoneManager 
ozoneManager,
 checkKeyAcls(ozoneManager, volume, bucket, keyNameForAclCheck,
   aclType, OzoneObj.ResourceType.KEY);
   }
+
+  /**
+   * Generate EncryptionInfo and set in to newKeyArgs.
+   * @param keyArgs
+   * @param newKeyArgs
+   * @param ozoneManager
+   */
+  protected void generateRequiredEncryptionInfo(KeyArgs keyArgs,
+  KeyArgs.Builder newKeyArgs, OzoneManager ozoneManager)
+  throws IOException {
+
+String volumeName = keyArgs.getVolumeName();
+String bucketName = keyArgs.getBucketName();
+
+boolean acquireLock = false;
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+// When TDE is enabled, we are doing a DB read in pre-execute. As for
+// most of the operations we don't read from DB because of our isLeader
+// semantics. This issue will be solved with implementation of leader
+// leases which provide strong leader semantics in the system.
+
+// If KMS is not enabled, follow the normal approach of execution of not
+// reading DB in pre-execute.
+if (ozoneManager.getKmsProvider() != null) {
+  try {
+acquireLock = omMetadataManager.getLock().acquireReadLock(
+BUCKET_LOCK, volumeName, bucketName);
+
+
+OmBucketInfo bucketInfo = omMetadataManager.getBucketTable().get(
+omMetadataManager.getBucketKey(volumeName, bucketName));
+
+
+// Don't throw exception of bucket not found when bucketinfo is not
+// null. If bucketinfo is null, later when request
+// is submitted and if bucket does not really exist it will fail in
+// applyTransaction step. Why we are doing this is if OM thinks it is
+// the leader, but it is not, we don't want to fail request in this
+// case. As anyway when it submits request to ratis it will fail with
 
 Review comment:
   I will open a jira for this also.





[jira] [Updated] (HDDS-3407) Reduce number of parameters in prepareKeyInfo in OMKeyRequest

2020-04-16 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-3407:
-
Issue Type: Improvement  (was: Bug)
  Priority: Minor  (was: Major)

> Reduce number of parameters in prepareKeyInfo in OMKeyRequest
> -
>
> Key: HDDS-3407
> URL: https://issues.apache.org/jira/browse/HDDS-3407
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Priority: Minor
>
> https://github.com/apache/hadoop-ozone/pull/830#discussion_r409733651
> As now we pass KeyArgs which has size and encryptionInfo, we don't need to 
> pass these parameters seperately again. And also see if any more parameters 
> does not need to be passed and see if we can remove Checkstyle warning 
> @SuppressWarnings("parameternumber"). As checkstyle warns if method has more 
> than 7 parameters.






[jira] [Updated] (HDDS-3407) Reduce number of parameters in prepareKeyInfo in OMKeyRequest

2020-04-16 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-3407:
-
Description: 
https://github.com/apache/hadoop-ozone/pull/830#discussion_r409733651

Since we now pass KeyArgs, which already carries the size and encryptionInfo, 
we don't need to pass these parameters separately. Also check whether any other 
parameters can be dropped, so that the Checkstyle warning suppression 
@SuppressWarnings("parameternumber") can be removed; checkstyle warns when a 
method has more than 7 parameters.
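Schematically, the refactor looks like this (a hypothetical stand-in for the protobuf KeyArgs and the real prepareKeyInfo signature): values that used to travel as separate arguments are read off KeyArgs inside the method, shrinking the parameter list below Checkstyle's limit of 7.

```java
public class PrepareKeyInfoSketch {
    // Minimal stand-in for the protobuf KeyArgs message.
    static class KeyArgs {
        final long dataSize;
        final String encryptionInfo; // placeholder for FileEncryptionInfo
        KeyArgs(long dataSize, String encryptionInfo) {
            this.dataSize = dataSize;
            this.encryptionInfo = encryptionInfo;
        }
    }

    // Before (schematically): prepareKeyInfo(mgr, keyArgs, dbKeyInfo,
    //     keyArgs.getDataSize(), locations, encryptionInfo, ...)
    // After: the duplicated values are read from keyArgs inside the method,
    // so the size and encryption-info parameters disappear from the signature.
    static String prepareKeyInfo(KeyArgs keyArgs) {
        return "size=" + keyArgs.dataSize + ", enc=" + keyArgs.encryptionInfo;
    }

    public static void main(String[] args) {
        System.out.println(prepareKeyInfo(new KeyArgs(1024, "none"))); // prints size=1024, enc=none
    }
}
```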

  was:
https://github.com/apache/hadoop-ozone/pull/830#discussion_r409733651

As now we pass KeyArgs which has size and encryptionInfo, we don't need to pass 
these parameters seperately again. And also see if any more parameters cannot 
be passed and see if we can remove Checkstyle warning 
@SuppressWarnings("parameternumber")


> Reduce number of parameters in prepareKeyInfo in OMKeyRequest
> -
>
> Key: HDDS-3407
> URL: https://issues.apache.org/jira/browse/HDDS-3407
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>
> https://github.com/apache/hadoop-ozone/pull/830#discussion_r409733651
> Since we now pass KeyArgs, which already carries the size and encryptionInfo, 
> we don't need to pass these parameters separately. Also check whether any other 
> parameters can be dropped, so that the Checkstyle warning suppression 
> @SuppressWarnings("parameternumber") can be removed; checkstyle warns when a 
> method has more than 7 parameters.






[jira] [Created] (HDDS-3407) Reduce number of parameters in prepareKeyInfo in OMKeyRequest

2020-04-16 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-3407:


 Summary: Reduce number of parameters in prepareKeyInfo in 
OMKeyRequest
 Key: HDDS-3407
 URL: https://issues.apache.org/jira/browse/HDDS-3407
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


https://github.com/apache/hadoop-ozone/pull/830#discussion_r409733651

Since we now pass KeyArgs, which already carries the size and encryptionInfo, 
we don't need to pass these parameters separately. Also check whether any other 
parameters can be dropped, and whether we can then remove the Checkstyle 
warning suppression @SuppressWarnings("parameternumber").






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #830: HDDS-3392.OM create key/file should not generate different data encryption key during validateAndUpdateCache.

2020-04-16 Thread GitBox
bharatviswa504 commented on a change in pull request #830: HDDS-3392.OM create 
key/file should not generate different data encryption key during 
validateAndUpdateCache.
URL: https://github.com/apache/hadoop-ozone/pull/830#discussion_r409792436
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
 ##
 @@ -213,10 +213,9 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 
   OmBucketInfo bucketInfo = omMetadataManager.getBucketTable().get(
   omMetadataManager.getBucketKey(volumeName, bucketName));
-  encryptionInfo = getFileEncryptionInfo(ozoneManager, bucketInfo);
 
   omKeyInfo = prepareKeyInfo(omMetadataManager, keyArgs, dbKeyInfo,
-  keyArgs.getDataSize(), locations, encryptionInfo.orNull(),
+  keyArgs.getDataSize(), locations,  getFileEncryptionInfo(keyArgs),
 
 Review comment:
   Jira to track this
   https://issues.apache.org/jira/browse/HDDS-3407





[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #830: HDDS-3392.OM create key/file should not generate different data encryption key during validateAndUpdateCache.

2020-04-16 Thread GitBox
bharatviswa504 commented on a change in pull request #830: HDDS-3392.OM create 
key/file should not generate different data encryption key during 
validateAndUpdateCache.
URL: https://github.com/apache/hadoop-ozone/pull/830#discussion_r409791272
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
 ##
 @@ -213,10 +213,9 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 
   OmBucketInfo bucketInfo = omMetadataManager.getBucketTable().get(
   omMetadataManager.getBucketKey(volumeName, bucketName));
-  encryptionInfo = getFileEncryptionInfo(ozoneManager, bucketInfo);
 
   omKeyInfo = prepareKeyInfo(omMetadataManager, keyArgs, dbKeyInfo,
-  keyArgs.getDataSize(), locations, encryptionInfo.orNull(),
+  keyArgs.getDataSize(), locations,  getFileEncryptionInfo(keyArgs),
 
 Review comment:
   Even after removing the encryptionInfo and size parameters, we cannot remove 
@SuppressWarnings("parameternumber"). I will leave it for now and open a new 
jira to refactor this.





[jira] [Commented] (HDDS-3291) Write operation when both OM followers are shutdown

2020-04-16 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17085190#comment-17085190
 ] 

Arpit Agarwal commented on HDDS-3291:
-

This was re-committed via GitHub PR #815.

> Write operation when both OM followers are shutdown
> ---
>
> Key: HDDS-3291
> URL: https://issues.apache.org/jira/browse/HDDS-3291
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> steps taken :
> --
> 1. In OM HA environment, shutdown both OM followers.
> 2. Start PUT key operation.
> PUT key operation is hung.
> Cluster details : 
> https://quasar-vwryte-1.quasar-vwryte.root.hwx.site:7183/cmf/home
> Snippet of OM log on LEADER:
> {code:java}
> 2020-03-24 04:16:46,249 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:46,249 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:46,250 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:46,250 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om3: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:46,750 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:46,750 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:46,750 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om3: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:46,750 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:47,250 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:47,251 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:47,251 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om3: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:47,251 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:47,751 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:47,751 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:47,752 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:47,752 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om3: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:48,252 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:48,252 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:48,252 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:48,252 INFO org.apache.ratis.server.impl.FollowerInfo: 

[GitHub] [hadoop-ozone] prashantpogde commented on issue #828: HDDS-3002. NFS mountd support for Ozone

2020-04-16 Thread GitBox
prashantpogde commented on issue #828: HDDS-3002. NFS mountd support for Ozone
URL: https://github.com/apache/hadoop-ozone/pull/828#issuecomment-614834393
 
 
   yup @mukul1987, looking at failures and fixing them. I will upload another 
patch.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #837: HDDS-3400. Extract test utilities to separate module

2020-04-16 Thread GitBox
adoroszlai opened a new pull request #837: HDDS-3400. Extract test utilities to 
separate module
URL: https://github.com/apache/hadoop-ozone/pull/837
 
 
   ## What changes were proposed in this pull request?
   
   Create a `hadoop-hdds/test-utils` module for test utilities (e.g. 
`GenericTestUtils` and `LambdaTestUtils`). This should not depend on any other 
HDDS/Ozone modules: the goal is to be able to use test listeners defined in 
this module (currently only `TimedOutTestsListener`) in all other modules.
   
   https://issues.apache.org/jira/browse/HDDS-3400
   
   ## How was this patch tested?
   
   CI:
   https://github.com/adoroszlai/hadoop-ozone/runs/592403416
   
   Applied the addition of listener from #813, 
[verified](https://github.com/adoroszlai/hadoop-ozone/runs/592999817) that no 
[`ClassNotFoundException`](https://github.com/apache/hadoop-ozone/pull/813/checks?check_run_id=589621137)
 happens.
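
   For context, once such a module exists, other modules could consume it 
roughly as below. This is only a sketch: the `org.apache.hadoop` group id, the 
`hadoop-hdds-test-utils` artifact id, and the listener's package are 
assumptions inferred from the module path, not taken from the patch itself.

   ```xml
   <!-- Hypothetical test-scoped dependency on the extracted module -->
   <dependency>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-hdds-test-utils</artifactId>
     <scope>test</scope>
   </dependency>

   <!-- Hypothetical surefire wiring so the listener runs in every module -->
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-surefire-plugin</artifactId>
     <configuration>
       <properties>
         <property>
           <name>listener</name>
           <value>org.apache.hadoop.test.TimedOutTestsListener</value>
         </property>
       </properties>
     </configuration>
   </plugin>
   ```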


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3400) Extract test utilities to separate module

2020-04-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3400:
-
Labels: pull-request-available  (was: )

> Extract test utilities to separate module
> -
>
> Key: HDDS-3400
> URL: https://issues.apache.org/jira/browse/HDDS-3400
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: build, test
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>
> TimedOutTestsListener cannot be added globally because it is in 
> hadoop-hdds-common, which is not accessible in hadoop-hdds-config (since the 
> latter is a dependency of the former).  The listener and related classes 
> (GenericTestUtils, etc.) should be extracted into a separate module to be 
> used by all others.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3406) Remove RetryInvocation INFO logging from ozone CLI output

2020-04-16 Thread Hanisha Koneru (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085181#comment-17085181
 ] 

Hanisha Koneru commented on HDDS-3406:
--

Thanks [~ayushtkn]. Added to Common by mistake.

> Remove RetryInvocation INFO logging from ozone CLI output
> -
>
> Key: HDDS-3406
> URL: https://issues.apache.org/jira/browse/HDDS-3406
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Nilotpal Nandi
>Assignee: Hanisha Koneru
>Priority: Major
>
> In OM HA failover proxy provider, RetryInvocationHandler logs error stack 
> trace when client tries contacting non-leader OM. Instead we can just log a 
> message that the failover will happen and not include the stack trace.
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
>  OM:om2 is not the leader. Suggested leader is OM:om3. at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:186)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:174)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:110)
>  at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:98)
>  at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682), while 
> invoking $Proxy16.submitRequest over nodeId=om2,nodeAddress=om2:9862 after 1 
> failover attempts. Trying to failover immediately.
> {code}
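
Until a logging change lands, one client-side workaround is to lower the 
logger level in the client's log4j configuration. This is a hedged sketch: it 
assumes the noisy logger is `org.apache.hadoop.io.retry.RetryInvocationHandler` 
(the Hadoop retry proxy class) and that the CLI reads a log4j 1.x properties 
file; neither is confirmed by this issue.

{code}
# Hypothetical client-side log4j override: suppress the INFO failover
# messages (including the stack trace) from the retry proxy, while
# keeping WARN and ERROR output visible.
log4j.logger.org.apache.hadoop.io.retry.RetryInvocationHandler=WARN
{code}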



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3406) Remove RetryInvocation INFO logging from ozone CLI output

2020-04-16 Thread Hanisha Koneru (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-3406:
-
Description: 
In OM HA failover proxy provider, RetryInvocationHandler logs error stack trace 
when client tries contacting non-leader OM. Instead we can just log a message 
that the failover will happen and not include the stack trace.
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
 OM:om2 is not the leader. Suggested leader is OM:om3. at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:186)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:174)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:110)
 at 
org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:98)
 at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682), while invoking 
$Proxy16.submitRequest over nodeId=om2,nodeAddress=om2:9862 after 1 failover 
attempts. Trying to failover immediately.
{code}

  was:
In OM HA failover proxy provider, RetryInvocationHandler logs error stack trace 
when client tries contacting non-leader OM. This error message can be 
suppressed as the failover would happen to leader OM.
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
 OM:om2 is not the leader. Suggested leader is OM:om3. at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:186)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:174)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:110)
 at 
org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:98)
 at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682), while invoking 
$Proxy16.submitRequest over nodeId=om2,nodeAddress=om2:9862 after 1 failover 
attempts. Trying to failover immediately.
{code}


> Remove RetryInvocation INFO logging from ozone CLI output
> -
>
> Key: HDDS-3406
> URL: https://issues.apache.org/jira/browse/HDDS-3406
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Nilotpal Nandi
>Assignee: Hanisha Koneru
>Priority: Major
>
> In OM HA failover proxy provider, RetryInvocationHandler logs error stack 
> trace when client tries contacting non-leader OM. Instead we can just log a 
> message that the failover will happen and not include the stack trace.
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
>  OM:om2 is not the leader. Suggested leader is OM:om3. at 
> 

[jira] [Updated] (HDDS-3406) Remove RetryInvocation INFO logging from ozone CLI output

2020-04-16 Thread Hanisha Koneru (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-3406:
-
Component/s: Ozone Client
 om

> Remove RetryInvocation INFO logging from ozone CLI output
> -
>
> Key: HDDS-3406
> URL: https://issues.apache.org/jira/browse/HDDS-3406
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: om, Ozone Client
>Reporter: Nilotpal Nandi
>Assignee: Hanisha Koneru
>Priority: Major
>
> In OM HA failover proxy provider, RetryInvocationHandler logs error stack 
> trace when client tries contacting non-leader OM. Instead we can just log a 
> message that the failover will happen and not include the stack trace.
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
>  OM:om2 is not the leader. Suggested leader is OM:om3. at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:186)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:174)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:110)
>  at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:98)
>  at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682), while 
> invoking $Proxy16.submitRequest over nodeId=om2,nodeAddress=om2:9862 after 1 
> failover attempts. Trying to failover immediately.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3406) Remove RetryInvocation INFO logging from ozone CLI output

2020-04-16 Thread Hanisha Koneru (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HDDS-3406:
-
Description: 
In OM HA failover proxy provider, RetryInvocationHandler logs error stack trace 
when client tries contacting non-leader OM. This error message can be 
suppressed as the failover would happen to leader OM.
{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
 OM:om2 is not the leader. Suggested leader is OM:om3. at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:186)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:174)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:110)
 at 
org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:98)
 at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682), while invoking 
$Proxy16.submitRequest over nodeId=om2,nodeAddress=om2:9862 after 1 failover 
attempts. Trying to failover immediately.
{code}

  was:
In OM HA failover proxy provider, RetryInvocationHandler logs error message 
when client tries contacting non-leader OM. This error message can be 
suppressed as the failover would happen to leader OM.

{code:java}
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
 OM:om2 is not the leader. Suggested leader is OM:om3. at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:186)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:174)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:110)
 at 
org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
 at 
org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:98)
 at 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
 at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
 at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at 
java.security.AccessController.doPrivileged(Native Method) at 
javax.security.auth.Subject.doAs(Subject.java:422) at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
 at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682), while invoking 
$Proxy16.submitRequest over nodeId=om2,nodeAddress=om2:9862 after 1 failover 
attempts. Trying to failover immediately.
{code}



> Remove RetryInvocation INFO logging from ozone CLI output
> -
>
> Key: HDDS-3406
> URL: https://issues.apache.org/jira/browse/HDDS-3406
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Nilotpal Nandi
>Assignee: Hanisha Koneru
>Priority: Major
>
> In OM HA failover proxy provider, RetryInvocationHandler logs error stack 
> trace when client tries contacting non-leader OM. This error message can be 
> suppressed as the failover would happen to leader OM.
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
>  OM:om2 is not the leader. Suggested leader is OM:om3. at 
> 

[jira] [Commented] (HDDS-3406) Remove RetryInvocation INFO logging from ozone CLI output

2020-04-16 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085179#comment-17085179
 ] 

Ayush Saxena commented on HDDS-3406:


Seems not related to Common; I have moved it from Common to HDDS.

> Remove RetryInvocation INFO logging from ozone CLI output
> -
>
> Key: HDDS-3406
> URL: https://issues.apache.org/jira/browse/HDDS-3406
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Nilotpal Nandi
>Assignee: Hanisha Koneru
>Priority: Major
>
> In OM HA failover proxy provider, RetryInvocationHandler logs error message 
> when client tries contacting non-leader OM. This error message can be 
> suppressed as the failover would happen to leader OM.
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
>  OM:om2 is not the leader. Suggested leader is OM:om3. at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:186)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:174)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:110)
>  at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:98)
>  at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682), while 
> invoking $Proxy16.submitRequest over nodeId=om2,nodeAddress=om2:9862 after 1 
> failover attempts. Trying to failover immediately.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Moved] (HDDS-3406) Remove RetryInvocation INFO logging from ozone CLI output

2020-04-16 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena moved HADOOP-16991 to HDDS-3406:
-

 Key: HDDS-3406  (was: HADOOP-16991)
Target Version/s: 0.6.0  (was: 0.6.0)
Workflow: patch-available, re-open possible  (was: 
no-reopen-closed, patch-avail)
 Project: Hadoop Distributed Data Store  (was: Hadoop Common)

> Remove RetryInvocation INFO logging from ozone CLI output
> -
>
> Key: HDDS-3406
> URL: https://issues.apache.org/jira/browse/HDDS-3406
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Nilotpal Nandi
>Assignee: Hanisha Koneru
>Priority: Major
>
> In OM HA failover proxy provider, RetryInvocationHandler logs error message 
> when client tries contacting non-leader OM. This error message can be 
> suppressed as the failover would happen to leader OM.
> {code:java}
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
>  OM:om2 is not the leader. Suggested leader is OM:om3. at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.createNotLeaderException(OzoneManagerProtocolServerSideTranslatorPB.java:186)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitReadRequestToOM(OzoneManagerProtocolServerSideTranslatorPB.java:174)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.processRequest(OzoneManagerProtocolServerSideTranslatorPB.java:110)
>  at 
> org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
>  at 
> org.apache.hadoop.ozone.protocolPB.OzoneManagerProtocolServerSideTranslatorPB.submitRequest(OzoneManagerProtocolServerSideTranslatorPB.java:98)
>  at 
> org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos$OzoneManagerService$2.callBlockingMethod(OzoneManagerProtocolProtos.java)
>  at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
>  at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876) at 
> org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822) at 
> java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
>  at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682), while 
> invoking $Proxy16.submitRequest over nodeId=om2,nodeAddress=om2:9862 after 1 
> failover attempts. Trying to failover immediately.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #830: HDDS-3392.OM create key/file should not generate different data encryption key during validateAndUpdateCache.

2020-04-16 Thread GitBox
xiaoyuyao commented on a change in pull request #830: HDDS-3392.OM create 
key/file should not generate different data encryption key during 
validateAndUpdateCache.
URL: https://github.com/apache/hadoop-ozone/pull/830#discussion_r409734021
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyRequest.java
 ##
 @@ -451,4 +446,62 @@ protected void checkKeyAclsInOpenKeyTable(OzoneManager 
ozoneManager,
 checkKeyAcls(ozoneManager, volume, bucket, keyNameForAclCheck,
   aclType, OzoneObj.ResourceType.KEY);
   }
+
+  /**
+   * Generate EncryptionInfo and set it into newKeyArgs.
+   * @param keyArgs
+   * @param newKeyArgs
+   * @param ozoneManager
+   */
+  protected void generateRequiredEncryptionInfo(KeyArgs keyArgs,
+  KeyArgs.Builder newKeyArgs, OzoneManager ozoneManager)
+  throws IOException {
+
+String volumeName = keyArgs.getVolumeName();
+String bucketName = keyArgs.getBucketName();
+
+boolean acquireLock = false;
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+
+// When TDE is enabled, we are doing a DB read in pre-execute. As for
+// most of the operations we don't read from DB because of our isLeader
+// semantics. This issue will be solved with implementation of leader
+// leases which provide strong leader semantics in the system.
+
+// If KMS is not enabled, follow the normal approach of not reading the
+// DB in pre-execute.
+if (ozoneManager.getKmsProvider() != null) {
+  try {
+acquireLock = omMetadataManager.getLock().acquireReadLock(
+BUCKET_LOCK, volumeName, bucketName);
+
+
+OmBucketInfo bucketInfo = omMetadataManager.getBucketTable().get(
+omMetadataManager.getBucketKey(volumeName, bucketName));
+
+
+// Don't throw a bucket-not-found exception here when bucketInfo is null.
+// If bucketInfo is null, later when the request is submitted, and the
+// bucket really does not exist, it will fail in the applyTransaction
+// step. We do this because if this OM thinks it is the leader but is
+// not, we don't want to fail the request here; anyway, when it submits
+// the request to ratis it will fail with
 
 Review comment:
   Sounds good to me. Do we have a JIRA to track the leader lease?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #830: HDDS-3392.OM create key/file should not generate different data encryption key during validateAndUpdateCache.

2020-04-16 Thread GitBox
xiaoyuyao commented on a change in pull request #830: HDDS-3392.OM create 
key/file should not generate different data encryption key during 
validateAndUpdateCache.
URL: https://github.com/apache/hadoop-ozone/pull/830#discussion_r409733651
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/key/OMKeyCreateRequest.java
 ##
 @@ -213,10 +213,9 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 
   OmBucketInfo bucketInfo = omMetadataManager.getBucketTable().get(
   omMetadataManager.getBucketKey(volumeName, bucketName));
-  encryptionInfo = getFileEncryptionInfo(ozoneManager, bucketInfo);
 
   omKeyInfo = prepareKeyInfo(omMetadataManager, keyArgs, dbKeyInfo,
-  keyArgs.getDataSize(), locations, encryptionInfo.orNull(),
+  keyArgs.getDataSize(), locations,  getFileEncryptionInfo(keyArgs),
 
 Review comment:
   NIT: Given we already pass keyArgs, we can reduce the number of arguments 
here and avoid @SuppressWarnings("parameternumber") for prepareKeyInfo().


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-3291) Write operation when both OM followers are shutdown

2020-04-16 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-3291.
--
Resolution: Fixed

> Write operation when both OM followers are shutdown
> ---
>
> Key: HDDS-3291
> URL: https://issues.apache.org/jira/browse/HDDS-3291
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Nilotpal Nandi
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> steps taken :
> --
> 1. In OM HA environment, shutdown both OM followers.
> 2. Start PUT key operation.
> PUT key operation is hung.
> Cluster details : 
> https://quasar-vwryte-1.quasar-vwryte.root.hwx.site:7183/cmf/home
> Snippet of OM log on LEADER:
> {code:java}
> 2020-03-24 04:16:46,249 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:46,249 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:46,250 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:46,250 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om3: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:46,750 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:46,750 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:46,750 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om3: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:46,750 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:47,250 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:47,251 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:47,251 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om3: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:47,251 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:47,751 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:47,751 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:47,752 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:47,752 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om3: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:48,252 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om2-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:48,252 WARN org.apache.ratis.grpc.server.GrpcLogAppender: 
> om1@group-9F198C4C3682->om3-AppendLogResponseHandler: Failed appendEntries: 
> org.apache.ratis.thirdparty.io.grpc.StatusRuntimeException: UNAVAILABLE: io 
> exception
> 2020-03-24 04:16:48,252 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om2: nextIndex: updateUnconditionally 360 -> 359
> 2020-03-24 04:16:48,252 INFO org.apache.ratis.server.impl.FollowerInfo: 
> om1@group-9F198C4C3682->om3: nextIndex: 
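
A hedged aside, not part of the original report: the hang matches expected Raft behavior. With both followers down, the leader cannot assemble a majority, so appendEntries keeps failing (as in the log above) and no write can commit. The quorum arithmetic, sketched minimally:

```java
// Illustrative only: a Raft log entry commits only once a strict
// majority of the cluster acknowledges it.
public class QuorumSketch {

  static boolean canCommit(int clusterSize, int reachableNodes) {
    return reachableNodes >= clusterSize / 2 + 1; // strict majority
  }

  public static void main(String[] args) {
    System.out.println(canCommit(3, 3)); // true: full 3-node OM HA quorum
    System.out.println(canCommit(3, 2)); // true: one follower down is tolerated
    System.out.println(canCommit(3, 1)); // false: leader alone, writes hang
  }
}
```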

[GitHub] [hadoop-ozone] bharatviswa504 commented on issue #815: HDDS-3291. Write operation when both OM followers are shutdown.

2020-04-16 Thread GitBox
bharatviswa504 commented on issue #815: HDDS-3291. Write operation when both OM 
followers are shutdown.
URL: https://github.com/apache/hadoop-ozone/pull/815#issuecomment-614785352
 
 
   Thank You @arp7 for the review.





[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #815: HDDS-3291. Write operation when both OM followers are shutdown.

2020-04-16 Thread GitBox
bharatviswa504 merged pull request #815: HDDS-3291. Write operation when both 
OM followers are shutdown.
URL: https://github.com/apache/hadoop-ozone/pull/815
 
 
   





[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #700: HDDS-3172. Use DBStore instead of MetadataStore in SCM

2020-04-16 Thread GitBox
nandakumar131 commented on a change in pull request #700: HDDS-3172. Use 
DBStore instead of MetadataStore in SCM
URL: https://github.com/apache/hadoop-ozone/pull/700#discussion_r409710625
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/metadata/SCMDBDefinition.java
 ##
 @@ -0,0 +1,97 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdds.scm.metadata;
+
+import java.math.BigInteger;
+import java.security.cert.X509Certificate;
+
+import 
org.apache.hadoop.hdds.protocol.proto.StorageContainerDatanodeProtocolProtos.DeletedBlocksTransaction;
+import org.apache.hadoop.hdds.scm.ScmConfigKeys;
+import org.apache.hadoop.hdds.scm.container.ContainerInfo;
+import org.apache.hadoop.hdds.scm.pipeline.Pipeline;
+import org.apache.hadoop.hdds.scm.pipeline.PipelineID;
+import org.apache.hadoop.hdds.utils.db.DBColumnFamilyDefinition;
+import org.apache.hadoop.hdds.utils.db.DBDefinition;
+import org.apache.hadoop.hdds.utils.db.LongCodec;
+
+/**
+ * Class defines the structure and types of the scm.db.
+ */
+public class SCMDBDefinition implements DBDefinition {
+
+  public static final DBColumnFamilyDefinition
+  DELETED_BLOCKS =
+  new DBColumnFamilyDefinition<>(
+  "deletedBlocks",
+  Long.class,
+  new LongCodec(),
+  DeletedBlocksTransaction.class,
+  new DeletedBlocksTransactionCodec());
+
+  public static final DBColumnFamilyDefinition
+  VALID_CERTS =
+  new DBColumnFamilyDefinition<>(
+  "validCerts",
+  BigInteger.class,
+  new BigIntegerCodec(),
+  X509Certificate.class,
+  new X509CertificateCodec());
+
+  public static final DBColumnFamilyDefinition
+  REVOKED_CERTS =
+  new DBColumnFamilyDefinition<>(
+  "revokedCerts",
+  BigInteger.class,
+  new BigIntegerCodec(),
+  X509Certificate.class,
+  new X509CertificateCodec());
+
+  public static final DBColumnFamilyDefinition
+  PIPELINES =
+  new DBColumnFamilyDefinition<>(
+  "pipelines",
+  PipelineID.class,
+  new PipelineIDCodec(),
+  Pipeline.class,
+  new PipelineCodec());
+
+  public static final DBColumnFamilyDefinition
+  CONTAINERS =
+  new DBColumnFamilyDefinition<>(
+  "containers",
+  Long.class,
 
 Review comment:
   Can we use `ContainerID` instead of `Long` here?
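
   A self-contained sketch of why a typed key beats a bare `Long` here (the `Codec` interface and class bodies below are simplified stand-ins for the `org.apache.hadoop.hdds.utils.db` types, not the real API): wrapping the raw long in `ContainerID` gives the table key a domain type, so e.g. a pipeline id can no longer be passed where a container id is expected, while the persisted bytes stay the same.

   ```java
   import java.nio.ByteBuffer;

   // Illustrative stand-ins only; the real DBColumnFamilyDefinition/Codec
   // types live in org.apache.hadoop.hdds.utils.db.
   public class TypedKeySketch {

     interface Codec<T> {
       byte[] toPersistedFormat(T value);
       T fromPersistedFormat(byte[] raw);
     }

     // Domain type wrapping the raw long container id.
     static final class ContainerID {
       private final long id;
       ContainerID(long id) { this.id = id; }
       long getId() { return id; }
     }

     // Serializes to the same 8 bytes a LongCodec would produce.
     static final class ContainerIDCodec implements Codec<ContainerID> {
       @Override public byte[] toPersistedFormat(ContainerID v) {
         return ByteBuffer.allocate(Long.BYTES).putLong(v.getId()).array();
       }
       @Override public ContainerID fromPersistedFormat(byte[] raw) {
         return new ContainerID(ByteBuffer.wrap(raw).getLong());
       }
     }

     public static void main(String[] args) {
       ContainerIDCodec codec = new ContainerIDCodec();
       ContainerID roundTrip =
           codec.fromPersistedFormat(codec.toPersistedFormat(new ContainerID(42L)));
       System.out.println(roundTrip.getId()); // prints 42
     }
   }
   ```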





[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #700: HDDS-3172. Use DBStore instead of MetadataStore in SCM

2020-04-16 Thread GitBox
nandakumar131 commented on a change in pull request #700: HDDS-3172. Use 
DBStore instead of MetadataStore in SCM
URL: https://github.com/apache/hadoop-ozone/pull/700#discussion_r409716835
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
 ##
 @@ -105,16 +107,25 @@ public void setUp() throws Exception {
 // Override the default Node Manager in SCM with this Mock Node Manager.
 nodeManager = new MockNodeManager(true, 10);
 eventQueue = new EventQueue();
+
+scmMetadataStore = new SCMMetadataStoreRDBImpl(conf);
+scmMetadataStore.start(conf);
 pipelineManager =
-new SCMPipelineManager(conf, nodeManager, eventQueue);
+new SCMPipelineManager(conf, nodeManager,
+SCMDBDefinition.PIPELINES.getTable(scmMetadataStore.getStore()),
+eventQueue);
 pipelineManager.allowPipelineCreation();
+
 PipelineProvider mockRatisProvider =
 new MockRatisPipelineProvider(nodeManager,
 pipelineManager.getStateManager(), conf, eventQueue);
 pipelineManager.setPipelineProvider(HddsProtos.ReplicationType.RATIS,
 mockRatisProvider);
 SCMContainerManager containerManager =
-new SCMContainerManager(conf, pipelineManager);
+new SCMContainerManager(conf,
+SCMDBDefinition.CONTAINERS.getTable(scmMetadataStore.getStore()),
 
 Review comment:
   ```suggestion
   scmMetadataStore.getContainerTable(),
   ```





[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #700: HDDS-3172. Use DBStore instead of MetadataStore in SCM

2020-04-16 Thread GitBox
nandakumar131 commented on a change in pull request #700: HDDS-3172. Use 
DBStore instead of MetadataStore in SCM
URL: https://github.com/apache/hadoop-ozone/pull/700#discussion_r409716530
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/block/TestBlockManager.java
 ##
 @@ -105,16 +107,25 @@ public void setUp() throws Exception {
 // Override the default Node Manager in SCM with this Mock Node Manager.
 nodeManager = new MockNodeManager(true, 10);
 eventQueue = new EventQueue();
+
+scmMetadataStore = new SCMMetadataStoreRDBImpl(conf);
+scmMetadataStore.start(conf);
 pipelineManager =
-new SCMPipelineManager(conf, nodeManager, eventQueue);
+new SCMPipelineManager(conf, nodeManager,
+SCMDBDefinition.PIPELINES.getTable(scmMetadataStore.getStore()),
 
 Review comment:
   ```suggestion
   scmMetadataStore.getPipelineTable(),
   ```





[GitHub] [hadoop-ozone] nandakumar131 commented on a change in pull request #700: HDDS-3172. Use DBStore instead of MetadataStore in SCM

2020-04-16 Thread GitBox
nandakumar131 commented on a change in pull request #700: HDDS-3172. Use 
DBStore instead of MetadataStore in SCM
URL: https://github.com/apache/hadoop-ozone/pull/700#discussion_r409718432
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/test/java/org/apache/hadoop/hdds/scm/pipeline/TestSCMPipelineManager.java
 ##
 @@ -78,17 +80,26 @@ public void setUp() throws Exception {
   throw new IOException("Unable to create test directory path");
 }
 nodeManager = new MockNodeManager(true, 20);
+
+store = new SCMDBDefinition().createDBStore(conf);
+
+
   }
 
   @After
-  public void cleanup() {
+  public void cleanup() throws Exception {
+store.close();
 FileUtil.fullyDelete(testDir);
   }
 
   @Test
   public void testPipelineReload() throws IOException {
 SCMPipelineManager pipelineManager =
-new SCMPipelineManager(conf, nodeManager, new EventQueue());
+
 
 Review comment:
   empty line can be removed.





[jira] [Updated] (HDDS-3172) Use DBStore instead of MetadataStore in SCM

2020-04-16 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-3172:
--
Status: Patch Available  (was: Open)

> Use DBStore instead of  MetadataStore in SCM
> 
>
> Key: HDDS-3172
> URL: https://issues.apache.org/jira/browse/HDDS-3172
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Critical
>  Labels: backward-incompatible, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The MetadataStore interface provides a generic view to any key / value store 
> with a LevelDB and RocksDB implementation.
> Since the early versions of MetadataStore we also got the DBStore interface, 
> which is more advanced (it supports DB profiles and ColumnFamilies).
> To simplify the introduction of new features (like versioning or rocksdb 
> tuning) we should use the new interface everywhere instead of the old 
> interface.
> We should update SCM and Datanode to use the DBStore instead of 
> MetadataStore. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-3172) Use DBStore instead of MetadataStore in SCM

2020-04-16 Thread Nanda kumar (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nanda kumar updated HDDS-3172:
--
Labels: backward-incompatible pull-request-available  (was: 
pull-request-available)

> Use DBStore instead of  MetadataStore in SCM
> 
>
> Key: HDDS-3172
> URL: https://issues.apache.org/jira/browse/HDDS-3172
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Critical
>  Labels: backward-incompatible, pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The MetadataStore interface provides a generic view to any key / value store 
> with a LevelDB and RocksDB implementation.
> Since the early versions of MetadataStore we also got the DBStore interface, 
> which is more advanced (it supports DB profiles and ColumnFamilies).
> To simplify the introduction of new features (like versioning or rocksdb 
> tuning) we should use the new interface everywhere instead of the old 
> interface.
> We should update SCM and Datanode to use the DBStore instead of 
> MetadataStore. 






[GitHub] [hadoop-ozone] hanishakoneru commented on issue #749: HDDS-3322. StandAlone Pipelines are created in an infinite loop

2020-04-16 Thread GitBox
hanishakoneru commented on issue #749: HDDS-3322. StandAlone Pipelines are 
created in an infinite loop
URL: https://github.com/apache/hadoop-ozone/pull/749#issuecomment-614744200
 
 
   Thanks @bharatviswa504 and @vivekratnavel for the reviews.
   
   > One question, so you have tested the scenario by changing Replication Type 
(ozone.replication.type) to STAND_ALONE.
   
   Yup.
   
   
   > And also now we have support for Ratis with 1 and 3 factor. I think now we 
can remove SimplePipelineProvider, Any thoughts?
   
   Agree that we can remove it eventually.





[GitHub] [hadoop-ozone] hanishakoneru commented on a change in pull request #749: HDDS-3322. StandAlone Pipelines are created in an infinite loop

2020-04-16 Thread GitBox
hanishakoneru commented on a change in pull request #749: HDDS-3322. StandAlone 
Pipelines are created in an infinite loop
URL: https://github.com/apache/hadoop-ozone/pull/749#discussion_r409672521
 
 

 ##
 File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SimplePipelineProvider.java
 ##
 @@ -32,18 +31,18 @@
 /**
  * Implements Api for creating stand alone pipelines.
  */
-public class SimplePipelineProvider implements PipelineProvider {
+public class SimplePipelineProvider extends PipelineProvider {
 
-  private final NodeManager nodeManager;
-
-  public SimplePipelineProvider(NodeManager nodeManager) {
-this.nodeManager = nodeManager;
+  public SimplePipelineProvider(NodeManager nodeManager,
+  PipelineStateManager stateManager) {
+super(nodeManager, stateManager);
   }
 
   @Override
   public Pipeline create(ReplicationFactor factor) throws IOException {
-List dns =
-nodeManager.getNodes(NodeState.HEALTHY);
+List dns = pickNodesNeverUsed(ReplicationType.STAND_ALONE,
 
 Review comment:
   Updated to create only factor ONE pipelines for StandAlone type.





[jira] [Resolved] (HDDS-3364) Increase test timeout for ozonesecure-security robot tests

2020-04-16 Thread Hanisha Koneru (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru resolved HDDS-3364.
--
Resolution: Not A Problem

> Increase test timeout for ozonesecure-security robot tests
> --
>
> Key: HDDS-3364
> URL: https://issues.apache.org/jira/browse/HDDS-3364
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>  Labels: pull-request-available
> Attachments: docker-ozonesecure-ozonesecure-security-scm.log, 
> robot-ozonesecure-ozonesecure-security-scm.xml
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In a few CI runs, ozonesecure-security robot tests are timing out at 5 minutes. 
> [https://github.com/apache/hadoop-ozone/pull/784/checks?check_run_id=569548444]
> We should increase the timeout to avoid this.






[GitHub] [hadoop-ozone] hanishakoneru closed pull request #791: HDDS-3364. Increase test timeout for ozonesecure-security robot tests

2020-04-16 Thread GitBox
hanishakoneru closed pull request #791: HDDS-3364. Increase test timeout for 
ozonesecure-security robot tests
URL: https://github.com/apache/hadoop-ozone/pull/791
 
 
   





[GitHub] [hadoop-ozone] hanishakoneru commented on issue #791: HDDS-3364. Increase test timeout for ozonesecure-security robot tests

2020-04-16 Thread GitBox
hanishakoneru commented on issue #791: HDDS-3364. Increase test timeout for 
ozonesecure-security robot tests
URL: https://github.com/apache/hadoop-ozone/pull/791#issuecomment-614736236
 
 
   Thank you @adoroszlai and @arp7 for the discussion. Closing this PR.





[jira] [Updated] (HDDS-3405) Tool for Listing keys in OpenKeyTable

2020-04-16 Thread Sadanand Shenoy (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sadanand Shenoy updated HDDS-3405:
--
Labels:   (was: ozone)

> Tool for Listing keys in OpenKeyTable
> -
>
> Key: HDDS-3405
> URL: https://issues.apache.org/jira/browse/HDDS-3405
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Affects Versions: 0.6.0
>Reporter: Sadanand Shenoy
>Assignee: Sadanand Shenoy
>Priority: Minor
>
> This tool lists keys present in the OpenKeyTable. The tool can be used to 
> debug when keys don't show up on OzoneManager after writing them. There is a 
> chance that the key has not been committed yet and will show up in the 
> OpenKeyTable through the tool.






[jira] [Updated] (HDDS-3403) Generate ozone specific version from type in FSProto.proto

2020-04-16 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek updated HDDS-3403:
--
Description: 
FSProtos.proto is copied from Hadoop and used during the proto file 
generation **BUT** the types defined in FSProtos.proto are generated by the 
Hadoop subproject.

This makes it hard to use Ozone with older Hadoop versions as the types of 
FSProtos are available only from Hadoop 3.x. 

An easy fix is to generate our own version based on the existing FSProtos.proto

  was:
FSProtos.proto is copied from the Hadoop and used during the proto file 
generation **BUT** the types defined in FSProtos.proto are generated by Hadoop 
subproject.

This makes it hard to use Ozone with older Hadoop versions as the types of 
FSProtos are available only from Hadoop 3.x. 

An easy fix is to generate our old version based on the existing FSProtos.proto


> Generate ozone specific version from type in FSProto.proto
> --
>
> Key: HDDS-3403
> URL: https://issues.apache.org/jira/browse/HDDS-3403
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>
> FSProtos.proto is copied from Hadoop and used during the proto file 
> generation **BUT** the types defined in FSProtos.proto are generated by the 
> Hadoop subproject.
> This makes it hard to use Ozone with older Hadoop versions as the types of 
> FSProtos are available only from Hadoop 3.x. 
> An easy fix is to generate our own version based on the existing 
> FSProtos.proto






[GitHub] [hadoop-ozone] elek opened a new pull request #836: Hdds 3403

2020-04-16 Thread GitBox
elek opened a new pull request #836: Hdds 3403
URL: https://github.com/apache/hadoop-ozone/pull/836
 
 
   ## What changes were proposed in this pull request?
   
   FSProtos.proto is copied from Hadoop and used during the proto file 
generation *BUT* the types defined in FSProtos.proto are generated by the 
Hadoop subproject.
   
   This makes it hard to use Ozone with older Hadoop versions as the types of 
FSProtos are available only from Hadoop 3.x.
   
   An easy fix is to generate our own version based on the existing 
FSProtos.proto
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3403
   
   ## How was this patch tested?
   
   CI tests.





[jira] [Created] (HDDS-3405) Tool for Listing keys in OpenKeyTable

2020-04-16 Thread Sadanand Shenoy (Jira)
Sadanand Shenoy created HDDS-3405:
-

 Summary: Tool for Listing keys in OpenKeyTable
 Key: HDDS-3405
 URL: https://issues.apache.org/jira/browse/HDDS-3405
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
Affects Versions: 0.6.0
Reporter: Sadanand Shenoy


This tool lists keys present in the OpenKeyTable. The tool can be used to debug 
when keys don't show up on OzoneManager after writing them. There is a chance 
that the key has not been committed yet and will show up in the OpenKeyTable 
through the tool.






[jira] [Assigned] (HDDS-3405) Tool for Listing keys in OpenKeyTable

2020-04-16 Thread Sadanand Shenoy (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sadanand Shenoy reassigned HDDS-3405:
-

Assignee: Sadanand Shenoy

> Tool for Listing keys in OpenKeyTable
> -
>
> Key: HDDS-3405
> URL: https://issues.apache.org/jira/browse/HDDS-3405
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Affects Versions: 0.6.0
>Reporter: Sadanand Shenoy
>Assignee: Sadanand Shenoy
>Priority: Minor
>  Labels: ozone
>
> This tool lists keys present in the OpenKeyTable. The tool can be used to 
> debug when keys don't show up on OzoneManager after writing them. There is a 
> chance that the key has not been committed yet and will show up in the 
> OpenKeyTable through the tool.






[jira] [Updated] (HDDS-3399) Update JaegerTracing

2020-04-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3399:
-
Labels: pull-request-available  (was: )

> Update JaegerTracing
> 
>
> Key: HDDS-3399
> URL: https://issues.apache.org/jira/browse/HDDS-3399
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Marton Elek
>Priority: Blocker
>  Labels: pull-request-available
>
> We currently use JaegerTracing 0.34.0. The latest is 1.2.0. We are several 
> versions behind and should update. Note this update requires the latest 
> version of OpenTracing and has several breaking changes.






[GitHub] [hadoop-ozone] elek opened a new pull request #835: HDDS-3399. Update JaegerTracing

2020-04-16 Thread GitBox
elek opened a new pull request #835: HDDS-3399. Update JaegerTracing
URL: https://github.com/apache/hadoop-ozone/pull/835
 
 
   ## What changes were proposed in this pull request?
   
   We currently use JaegerTracing 0.34.0. The latest is 1.2.0. We are several 
versions behind and should update. Note this update requires the latest version 
of OpenTracing and has several breaking changes.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3399
   
   ## How was this patch tested?
   
   ```
   cd hadoop-ozone/dist/target/ozone-/compose/ozone
   export COMPOSE_FILE=docker-compose.yaml:profiling.yaml:monitoring.yaml
   export OZONE_REPLICATION_FACTOR=3
   ./run -d
   ```
   Executed freon + s3 create bucket commands
   
   Traces were displayed in jaeger.





[GitHub] [hadoop-ozone] adoroszlai commented on issue #818: HDDS-3386. Remove unnecessary transitive hadoop-common dependencies on server side (addendum).

2020-04-16 Thread GitBox
adoroszlai commented on issue #818: HDDS-3386. Remove unnecessary transitive 
hadoop-common dependencies on server side (addendum).
URL: https://github.com/apache/hadoop-ozone/pull/818#issuecomment-614661190
 
 
   Sorry, I should have triggered CI here, since unfortunately my PR was merged 
first, introducing a duplicate import (checkstyle violation) without merge 
conflict...  Fixed in follow-up commit 17456bc75bc56cea4f02a7570c0a2a2f535aada0.





[jira] [Updated] (HDDS-3404) Fix TestDnRatisLogParser

2020-04-16 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDDS-3404:
--
Attachment: it-ozone.zip

> Fix TestDnRatisLogParser
> 
>
> Key: HDDS-3404
> URL: https://issues.apache.org/jira/browse/HDDS-3404
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Shashikant Banerjee
>Assignee: Sadanand Shenoy
>Priority: Major
> Fix For: 0.6.0
>
> Attachments: it-ozone.zip
>
>
> {code:java}
> [ERROR] testRatisLogParsing(org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser)  Time elapsed: 18.565 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser.testRatisLogParsing(TestDnRatisLogParser.java:75)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
> {code}






[jira] [Created] (HDDS-3404) Fix TestDnRatisLogParser

2020-04-16 Thread Shashikant Banerjee (Jira)
Shashikant Banerjee created HDDS-3404:
-

 Summary: Fix TestDnRatisLogParser
 Key: HDDS-3404
 URL: https://issues.apache.org/jira/browse/HDDS-3404
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Shashikant Banerjee
Assignee: Sadanand Shenoy
 Fix For: 0.6.0


{code:java}
[ERROR] testRatisLogParsing(org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser)  Time elapsed: 18.565 s  <<< FAILURE!
java.lang.AssertionError
  at org.junit.Assert.fail(Assert.java:86)
  at org.junit.Assert.assertTrue(Assert.java:41)
  at org.junit.Assert.assertTrue(Assert.java:52)
  at org.apache.hadoop.ozone.dn.ratis.TestDnRatisLogParser.testRatisLogParsing(TestDnRatisLogParser.java:75)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:498)
  at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
  at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
  at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
  at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
  at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
  at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
  at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
  at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
{code}






[GitHub] [hadoop-ozone] bshashikant commented on a change in pull request #716: HDDS-3155. Improved ozone client flush implementation to make it faster.

2020-04-16 Thread GitBox
bshashikant commented on a change in pull request #716: HDDS-3155. Improved 
ozone client flush implementation to make it faster.
URL: https://github.com/apache/hadoop-ozone/pull/716#discussion_r409558243
 
 

 ##
 File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/ozone/OzoneConfigKeys.java
 ##
 @@ -149,6 +149,17 @@
   public static final TimeDuration OZONE_CLIENT_RETRY_INTERVAL_DEFAULT =
   TimeDuration.valueOf(0, TimeUnit.MILLISECONDS);
 
+  /**
+   * If this value is true, when the client calls the flush() method,
+   * we will checks whether the data in the buffer is greater than
 
 Review comment:
   let's change "we will checks" to "it checks"
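   The buffer-threshold flush behavior this comment refers to can be sketched as follows. All class, field, and parameter names here are illustrative assumptions for the sake of the example, not the actual Ozone client implementation:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch of a flush that only writes through once enough data is buffered.
// Hypothetical names; not the real Ozone KeyOutputStream code.
public class LazyFlushStream extends OutputStream {
    private final OutputStream sink;
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private final int chunkSize;      // write-through threshold in bytes
    private final boolean lazyFlush;  // stands in for the config flag under review

    public LazyFlushStream(OutputStream sink, int chunkSize, boolean lazyFlush) {
        this.sink = sink;
        this.chunkSize = chunkSize;
        this.lazyFlush = lazyFlush;
    }

    @Override
    public void write(int b) {
        buffer.write(b);
    }

    @Override
    public void flush() throws IOException {
        // With lazy flushing enabled, flush() is a no-op until the buffered
        // data reaches the chunk size, avoiding many small network writes.
        if (lazyFlush && buffer.size() < chunkSize) {
            return;
        }
        buffer.writeTo(sink);
        buffer.reset();
        sink.flush();
    }
}
```

   With `lazyFlush` enabled, a flush() after a single buffered byte is deferred, while a flush() once `chunkSize` bytes have accumulated writes everything through to the sink.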





[jira] [Resolved] (HDDS-3386) Remove unnecessary transitive hadoop-common dependencies on server side (addendum)

2020-04-16 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-3386.

Fix Version/s: 0.6.0
   Resolution: Fixed

> Remove unnecessary transitive hadoop-common dependencies on server side 
> (addendum)
> --
>
> Key: HDDS-3386
> URL: https://issues.apache.org/jira/browse/HDDS-3386
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
> Fix For: 0.6.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In HDDS-3353 we created two new modules, hadoop-hdds-dependency-test and 
> hadoop-hdds-dependency-server, to manage the hadoop-common dependency and the 
> excludes related to it in a common place, similar to the existing 
> hadoop-hdds-dependency-client.
> The following modules still depend on hadoop-common for tests instead of the 
> new test dependency:
> hadoop-hdds/client
> hadoop-hdds/container-service
> hadoop-ozone/common
> hadoop-ozone/fault-injection-test/mini-chaos-tests
> hadoop-ozone/insight
> hadoop-ozone/integration-test
> hadoop-ozone/tools
> In hadoop-dependency-client we exclude individually named curator packages; as 
> in the new modules, we should instead exclude all curator packages.
> In TestVolumeSetDiskChecks.java we still have an accidental shaded import 
> from curator:  import 
> org.apache.curator.shaded.com.google.common.collect.ImmutableSet;






[jira] [Updated] (HDDS-3386) Remove unnecessary transitive hadoop-common dependencies on server side (addendum)

2020-04-16 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-3386:
---
Labels:   (was: pull-request-available)

> Remove unnecessary transitive hadoop-common dependencies on server side 
> (addendum)
> --
>
> Key: HDDS-3386
> URL: https://issues.apache.org/jira/browse/HDDS-3386
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Istvan Fajth
>Assignee: Istvan Fajth
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> In HDDS-3353 we created two new modules, hadoop-hdds-dependency-test and 
> hadoop-hdds-dependency-server, to manage the hadoop-common dependency and the 
> excludes related to it in a common place, similar to the existing 
> hadoop-hdds-dependency-client.
> The following modules still depend on hadoop-common for tests instead of the 
> new test dependency:
> hadoop-hdds/client
> hadoop-hdds/container-service
> hadoop-ozone/common
> hadoop-ozone/fault-injection-test/mini-chaos-tests
> hadoop-ozone/insight
> hadoop-ozone/integration-test
> hadoop-ozone/tools
> In hadoop-dependency-client we exclude individually named curator packages; as 
> in the new modules, we should instead exclude all curator packages.
> In TestVolumeSetDiskChecks.java we still have an accidental shaded import 
> from curator:  import 
> org.apache.curator.shaded.com.google.common.collect.ImmutableSet;






[GitHub] [hadoop-ozone] adoroszlai commented on issue #818: HDDS-3386. Remove unnecessary transitive hadoop-common dependencies on server side (addendum).

2020-04-16 Thread GitBox
adoroszlai commented on issue #818: HDDS-3386. Remove unnecessary transitive 
hadoop-common dependencies on server side (addendum).
URL: https://github.com/apache/hadoop-ozone/pull/818#issuecomment-614650436
 
 
   Thanks @fapifta for the contribution.





[GitHub] [hadoop-ozone] adoroszlai merged pull request #818: HDDS-3386. Remove unnecessary transitive hadoop-common dependencies on server side (addendum).

2020-04-16 Thread GitBox
adoroszlai merged pull request #818: HDDS-3386. Remove unnecessary transitive 
hadoop-common dependencies on server side (addendum).
URL: https://github.com/apache/hadoop-ozone/pull/818
 
 
   




