[jira] [Updated] (HDDS-4239) Ozone support truncate operation

2020-09-17 Thread runzhiwang (Jira)


 [ https://issues.apache.org/jira/browse/HDDS-4239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

runzhiwang updated HDDS-4239:
-
Attachment: Ozone Truncate Design-v2.pdf

> Ozone support truncate operation
> 
>
> Key: HDDS-4239
> URL: https://issues.apache.org/jira/browse/HDDS-4239
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: runzhiwang
>Assignee: runzhiwang
>Priority: Major
> Attachments: Ozone Truncate Design-v1.pdf, Ozone Truncate 
> Design-v2.pdf
>
>
> Design: 
> https://docs.google.com/document/d/1Ju9WeuFuf_D8gElRCJH1-as0OyC6TOtHPHErycL43XQ/edit#



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] runzhiwang commented on a change in pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-17 Thread GitBox


runzhiwang commented on a change in pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#discussion_r490675374



##
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
##
@@ -98,8 +105,65 @@ private boolean exceedPipelineNumberLimit(ReplicationFactor factor) {
     return false;
   }

+  private Map<DatanodeDetails, Integer> getSuggestedLeaderCount(
+      List<DatanodeDetails> dns) {
+    Map<DatanodeDetails, Integer> suggestedLeaderCount = new HashMap<>();
+    for (DatanodeDetails dn : dns) {
+      suggestedLeaderCount.put(dn, 0);
+
+      Set<PipelineID> pipelineIDSet = getNodeManager().getPipelines(dn);
+      for (PipelineID pipelineID : pipelineIDSet) {
+        try {
+          Pipeline pipeline =
+              getPipelineStateManager().getPipeline(pipelineID);
+          if (!pipeline.isClosed()
+              && dn.getUuid().equals(pipeline.getSuggestedLeaderId())) {

Review comment:
   @xiaoyuyao Good point, I have thought about this as well.
   
   > Any performance impact on the pipeline of forcing leader to be the original one.
   
   If there is a performance problem, I can make the forced leader change complete within 1 second. I already know how to do it, but have not implemented it yet.
   
   > Another situation I'm thinking of is writers on pipeline with slow leader(e.g., hardware slowness) may not be able to recover by leader change.
   
   We can detect a slow leader via metrics, decrease its priority, then select a faster datanode and increase its priority, so the faster datanode will take leadership from the slow one.
   
   > In the case of S1 temporarily down, why don't we keep P1 leader on S3 and create P3 with leader on S1, this gives more flexibility for higher level to choose leader?
   
   I want the cluster leader distribution to follow the plan; if the plan is not appropriate, we can adjust it by changing priorities.
   
   If the leader distribution depends entirely on hardware rather than on the plan, we may lose control of it, because the leaderId in SCM is reported by the datanodes and may be stale. For example, the datanodes report:
   
   S1   S2   S3
   P1   P2
   
   Then P1's leader transfers to S3, but before SCM receives that report it allocates P3's leader to S3:
   
   S1   S2   S3
        P2   P1
             P3
   
   The distribution is no longer balanced.
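
   The min-count selection idea behind `getSuggestedLeaderCount` can be sketched as follows. This is a simplified illustration, not Ozone's actual API: the `String` node names and the `pickLeader` helper are stand-ins for `DatanodeDetails` and the SCM placement logic.

```java
import java.util.*;

// Sketch of leader balancing: count how many open pipelines already
// suggest each datanode as leader, then pick the least-loaded datanode
// as the suggested leader for a new pipeline.
public class LeaderBalanceSketch {
  static String pickLeader(Map<String, Integer> suggestedLeaderCount) {
    // Choose the datanode with the minimum suggested-leader count.
    return Collections.min(suggestedLeaderCount.entrySet(),
        Map.Entry.comparingByValue()).getKey();
  }

  public static void main(String[] args) {
    Map<String, Integer> counts = new HashMap<>();
    counts.put("S1", 1); // S1 already leads one pipeline
    counts.put("S2", 1); // S2 already leads one pipeline
    counts.put("S3", 0); // S3 leads none yet
    System.out.println(pickLeader(counts)); // prints S3
  }
}
```

   If SCM's counts are stale (as in the delayed-report example above), this selection picks the wrong node, which is why the plan-driven priorities matter.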





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] captainzmc commented on pull request #1434: HDDS-3727. Volume space: check quotaUsageInBytes when write key.

2020-09-17 Thread GitBox


captainzmc commented on pull request #1434:
URL: https://github.com/apache/hadoop-ozone/pull/1434#issuecomment-694618870


   hi @ChenSammi @xiaoyuyao, this PR adds a check of whether a write is allowed when the space quota is enabled. Could you help review it?
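
   The kind of check this PR describes can be sketched as below. This is a hypothetical illustration under assumed semantics (quota ≤ 0 means unlimited); the method and exception names are not Ozone's actual API.

```java
// Sketch of a volume space-quota check before committing a key write:
// reject the write if used bytes plus the new key's size would exceed
// the configured quota.
public class QuotaCheckSketch {
  static void checkSpaceQuota(long quotaInBytes, long usedBytes, long keySize) {
    // quotaInBytes <= 0 is treated as "no quota configured" here.
    if (quotaInBytes > 0 && usedBytes + keySize > quotaInBytes) {
      throw new IllegalStateException("volume space quota exceeded");
    }
  }

  public static void main(String[] args) {
    checkSpaceQuota(100, 40, 50);   // within quota: passes
    try {
      checkSpaceQuota(100, 40, 70); // would exceed quota: rejected
    } catch (IllegalStateException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```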



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on a change in pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-17 Thread GitBox


xiaoyuyao commented on a change in pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#discussion_r490666872



##
File path: hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
##
@@ -98,8 +105,65 @@ private boolean exceedPipelineNumberLimit(ReplicationFactor factor) {
     return false;
   }

+  private Map<DatanodeDetails, Integer> getSuggestedLeaderCount(
+      List<DatanodeDetails> dns) {
+    Map<DatanodeDetails, Integer> suggestedLeaderCount = new HashMap<>();
+    for (DatanodeDetails dn : dns) {
+      suggestedLeaderCount.put(dn, 0);
+
+      Set<PipelineID> pipelineIDSet = getNodeManager().getPipelines(dn);
+      for (PipelineID pipelineID : pipelineIDSet) {
+        try {
+          Pipeline pipeline =
+              getPipelineStateManager().getPipeline(pipelineID);
+          if (!pipeline.isClosed()
+              && dn.getUuid().equals(pipeline.getSuggestedLeaderId())) {

Review comment:
   bq.  then s1 grab the leadership of the first pipeline by RATIS-967,
   
   Does RATIS-967 always give up its leader when S1 is back online, even when the current leader works fine? I think this is more specific to RATIS-967 w.r.t. how the priority is enforced. Any performance impact on the pipeline of forcing the leader to be the original one?
   
   I'm thinking of, instead of forcing the leaders of pipelines P1, P2, P3 like:
   
   S1   S2   S3
   P1   P2   P3
   
   in the case of S1 temporarily down, why don't we keep P1's leader on S3 and create P3 with its leader on S1? This gives more flexibility for higher levels to choose a leader:
   
   S1   S2   S3
   P3   P2   P1
   
   Another situation I'm thinking of is that writers on a pipeline with a slow leader (e.g., hardware slowness) may not be able to recover by leader change.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #1368: HDDS-4156. add hierarchical layout to Chinese doc

2020-09-17 Thread GitBox


xiaoyuyao commented on pull request #1368:
URL: https://github.com/apache/hadoop-ozone/pull/1368#issuecomment-694532324


   LGTM overall. Only one question: should we move GDPR under Security rather than Features? This may apply to the original EN document as well.






[jira] [Resolved] (HDDS-3981) Add more debug level log to XceiverClientGrpc for debug purpose

2020-09-17 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-3981.
--
Fix Version/s: 1.1.0
   Resolution: Fixed

> Add more debug level log to XceiverClientGrpc for debug purpose
> ---
>
> Key: HDDS-3981
> URL: https://issues.apache.org/jira/browse/HDDS-3981
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Affects Versions: 0.5.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #1214: HDDS-3981. Add more debug level log to XceiverClientGrpc for debug purpose

2020-09-17 Thread GitBox


xiaoyuyao merged pull request #1214:
URL: https://github.com/apache/hadoop-ozone/pull/1214


   






[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #1214: HDDS-3981. Add more debug level log to XceiverClientGrpc for debug purpose

2020-09-17 Thread GitBox


xiaoyuyao commented on pull request #1214:
URL: https://github.com/apache/hadoop-ozone/pull/1214#issuecomment-694530571


   Thanks @maobaolong  for the update. LGTM, +1. Will merge it shortly. 






[jira] [Resolved] (HDDS-4255) Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-4255.

Fix Version/s: 1.1.0
   Resolution: Implemented

> Remove unused Ant and Jdiff dependency versions
> ---
>
> Key: HDDS-4255
> URL: https://issues.apache.org/jira/browse/HDDS-4255
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Versions of Ant and JDiff are not used in the Ozone project, but we still 
> have version declarations for them (inherited from the Hadoop parent pom, 
> which was used as a base for the main pom.xml).
> As the (unused) Ant version has security issues, I would remove them to avoid 
> any confusion.






[jira] [Updated] (HDDS-4255) Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4255:
---
Labels:   (was: pull-request-available)

> Remove unused Ant and Jdiff dependency versions
> ---
>
> Key: HDDS-4255
> URL: https://issues.apache.org/jira/browse/HDDS-4255
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
> Fix For: 1.1.0
>
>
> Versions of Ant and JDiff are not used in the Ozone project, but we still 
> have version declarations for them (inherited from the Hadoop parent pom, 
> which was used as a base for the main pom.xml).
> As the (unused) Ant version has security issues, I would remove them to avoid 
> any confusion.






[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1433: HDDS-4255. Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread GitBox


adoroszlai commented on pull request #1433:
URL: https://github.com/apache/hadoop-ozone/pull/1433#issuecomment-694463117


   Thanks @elek for the cleanup.






[GitHub] [hadoop-ozone] adoroszlai merged pull request #1433: HDDS-4255. Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread GitBox


adoroszlai merged pull request #1433:
URL: https://github.com/apache/hadoop-ozone/pull/1433


   






[GitHub] [hadoop-ozone] codecov-commenter commented on pull request #1433: HDDS-4255. Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread GitBox


codecov-commenter commented on pull request #1433:
URL: https://github.com/apache/hadoop-ozone/pull/1433#issuecomment-694459630


   # 
[Codecov](https://codecov.io/gh/apache/hadoop-ozone/pull/1433?src=pr=h1) 
Report
   > Merging 
[#1433](https://codecov.io/gh/apache/hadoop-ozone/pull/1433?src=pr=desc) 
into 
[master](https://codecov.io/gh/apache/hadoop-ozone/commit/9a4cb9e385c9fc95331ff7a0d2dd731e0a74a21c?el=desc)
 will **increase** coverage by `0.10%`.
   > The diff coverage is `n/a`.
   
   [![Impacted file tree 
graph](https://codecov.io/gh/apache/hadoop-ozone/pull/1433/graphs/tree.svg?width=650=150=pr=5YeeptJMby)](https://codecov.io/gh/apache/hadoop-ozone/pull/1433?src=pr=tree)
   
   ```diff
   @@             Coverage Diff              @@
   ##             master    #1433      +/-   ##
   ============================================
   + Coverage     75.11%   75.21%   +0.10%     
   - Complexity    10488    10499      +11     
   ============================================
     Files           990      990              
     Lines         50885    50885              
     Branches       4960     4960              
   ============================================
   + Hits          38221    38275      +54     
   + Misses        10280    10225      -55     
   - Partials       2384     2385       +1     
   ```
   
   
   | [Impacted 
Files](https://codecov.io/gh/apache/hadoop-ozone/pull/1433?src=pr=tree) | 
Coverage Δ | Complexity Δ | |
   |---|---|---|---|
   | 
[...otocol/commands/RetriableDatanodeEventWatcher.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1433/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL296b25lL3Byb3RvY29sL2NvbW1hbmRzL1JldHJpYWJsZURhdGFub2RlRXZlbnRXYXRjaGVyLmphdmE=)
 | `55.55% <0.00%> (-44.45%)` | `3.00% <0.00%> (-1.00%)` | |
   | 
[...apache/hadoop/hdds/server/events/EventWatcher.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1433/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvZnJhbWV3b3JrL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3AvaGRkcy9zZXJ2ZXIvZXZlbnRzL0V2ZW50V2F0Y2hlci5qYXZh)
 | `77.77% <0.00%> (-4.17%)` | `14.00% <0.00%> (ø%)` | |
   | 
[...doop/hdds/scm/pipeline/SimplePipelineProvider.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1433/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvc2VydmVyLXNjbS9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL2hkZHMvc2NtL3BpcGVsaW5lL1NpbXBsZVBpcGVsaW5lUHJvdmlkZXIuamF2YQ==)
 | `76.00% <0.00%> (-4.00%)` | `4.00% <0.00%> (-1.00%)` | |
   | 
[...va/org/apache/hadoop/ozone/lease/LeaseManager.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1433/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvY29tbW9uL3NyYy9tYWluL2phdmEvb3JnL2FwYWNoZS9oYWRvb3Avb3pvbmUvbGVhc2UvTGVhc2VNYW5hZ2VyLmphdmE=)
 | `90.80% <0.00%> (-2.30%)` | `15.00% <0.00%> (-1.00%)` | |
   | 
[...apache/hadoop/ozone/client/io/KeyOutputStream.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1433/diff?src=pr=tree#diff-aGFkb29wLW96b25lL2NsaWVudC9zcmMvbWFpbi9qYXZhL29yZy9hcGFjaGUvaGFkb29wL296b25lL2NsaWVudC9pby9LZXlPdXRwdXRTdHJlYW0uamF2YQ==)
 | `79.16% <0.00%> (-1.67%)` | `45.00% <0.00%> (-3.00%)` | |
   | 
[...hadoop/ozone/om/ratis/OzoneManagerRatisServer.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1433/diff?src=pr=tree#diff-aGFkb29wLW96b25lL296b25lLW1hbmFnZXIvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9vbS9yYXRpcy9Pem9uZU1hbmFnZXJSYXRpc1NlcnZlci5qYXZh)
 | `79.37% <0.00%> (-0.78%)` | `35.00% <0.00%> (-1.00%)` | |
   | 
[.../ozone/container/common/volume/AbstractFuture.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1433/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIvY29tbW9uL3ZvbHVtZS9BYnN0cmFjdEZ1dHVyZS5qYXZh)
 | `29.87% <0.00%> (-0.52%)` | `19.00% <0.00%> (-1.00%)` | |
   | 
[.../apache/hadoop/ozone/om/OmMetadataManagerImpl.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1433/diff?src=pr=tree#diff-aGFkb29wLW96b25lL296b25lLW1hbmFnZXIvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9vbS9PbU1ldGFkYXRhTWFuYWdlckltcGwuamF2YQ==)
 | `82.87% <0.00%> (ø)` | `100.00% <0.00%> (ø%)` | |
   | 
[...mon/transport/server/ratis/XceiverServerRatis.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1433/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIvY29tbW9uL3RyYW5zcG9ydC9zZXJ2ZXIvcmF0aXMvWGNlaXZlclNlcnZlclJhdGlzLmphdmE=)
 | `88.14% <0.00%> (+0.26%)` | `63.00% <0.00%> (+1.00%)` | |
   | 
[...doop/ozone/container/keyvalue/KeyValueHandler.java](https://codecov.io/gh/apache/hadoop-ozone/pull/1433/diff?src=pr=tree#diff-aGFkb29wLWhkZHMvY29udGFpbmVyLXNlcnZpY2Uvc3JjL21haW4vamF2YS9vcmcvYXBhY2hlL2hhZG9vcC9vem9uZS9jb250YWluZXIva2V5dmFsdWUvS2V5VmFsdWVIYW5kbGVyLmphdmE=)
 | `67.25% <0.00%> (+0.44%)` | `68.00% <0.00%> (-1.00%)` | :arrow_up: |
   | ... and [15 
more](https://codecov.io/gh/apache/hadoop-ozone/pull/1433/diff?src=pr=tree-more)
 | |
   
   --
   
   [Continue to review full 

[GitHub] [hadoop-ozone] sonarcloud[bot] removed a comment on pull request #1433: HDDS-4255. Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread GitBox


sonarcloud[bot] removed a comment on pull request #1433:
URL: https://github.com/apache/hadoop-ozone/pull/1433#issuecomment-694155477


   Kudos, SonarCloud Quality Gate passed!
   
   [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=BUG)
 [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=BUG)
 [0 
Bugs](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=BUG)
  
   [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=VULNERABILITY)
 [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=VULNERABILITY)
 [0 
Vulnerabilities](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=VULNERABILITY)
 (and [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=SECURITY_HOTSPOT)
 [0 Security 
Hotspots](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433
 esolved=false=SECURITY_HOTSPOT) to review)  
   [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=CODE_SMELL)
 [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=CODE_SMELL)
 [22 Code 
Smells](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=CODE_SMELL)
   
   [](https://sonarcloud.io/component_measures?id=hadoop-ozone=1433=new_coverage=list)
 [83.5% 
Coverage](https://sonarcloud.io/component_measures?id=hadoop-ozone=1433=new_coverage=list)
  
   [](https://sonarcloud.io/component_measures?id=hadoop-ozone=1433=new_duplicated_lines_density=list)
 [0.0% 
Duplication](https://sonarcloud.io/component_measures?id=hadoop-ozone=1433=new_duplicated_lines_density=list)
   
The version of Java (1.8.0_232) you 
have used to run this analysis is deprecated and we will stop accepting it from 
October 2020. Please update to at least Java 11.
   Read more [here](https://sonarcloud.io/documentation/upcoming/)
   
   
   






[GitHub] [hadoop-ozone] sonarcloud[bot] commented on pull request #1433: HDDS-4255. Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread GitBox


sonarcloud[bot] commented on pull request #1433:
URL: https://github.com/apache/hadoop-ozone/pull/1433#issuecomment-694459521


   Kudos, SonarCloud Quality Gate passed!
   
   [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=BUG)
 [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=BUG)
 [0 
Bugs](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=BUG)
  
   [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=VULNERABILITY)
 [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=VULNERABILITY)
 [0 
Vulnerabilities](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=VULNERABILITY)
 (and [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=SECURITY_HOTSPOT)
 [0 Security 
Hotspots](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433
 esolved=false=SECURITY_HOTSPOT) to review)  
   [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=CODE_SMELL)
 [](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=CODE_SMELL)
 [22 Code 
Smells](https://sonarcloud.io/project/issues?id=hadoop-ozone=1433=false=CODE_SMELL)
   
   [](https://sonarcloud.io/component_measures?id=hadoop-ozone=1433=new_coverage=list)
 [84.1% 
Coverage](https://sonarcloud.io/component_measures?id=hadoop-ozone=1433=new_coverage=list)
  
   [](https://sonarcloud.io/component_measures?id=hadoop-ozone=1433=new_duplicated_lines_density=list)
 [0.0% 
Duplication](https://sonarcloud.io/component_measures?id=hadoop-ozone=1433=new_duplicated_lines_density=list)
   
The version of Java (1.8.0_232) you 
have used to run this analysis is deprecated and we will stop accepting it from 
October 2020. Please update to at least Java 11.
   Read more [here](https://sonarcloud.io/documentation/upcoming/)
   
   
   






[jira] [Updated] (HDDS-4247) Fixed log4j usage in some places

2020-09-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4247:
---
Labels:   (was: pull-request-available)

> Fixed log4j usage in some places
> 
>
> Key: HDDS-4247
> URL: https://issues.apache.org/jira/browse/HDDS-4247
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 1.0.0
>Reporter: Xie Lei
>Assignee: Xie Lei
>Priority: Minor
> Fix For: 1.1.0
>
>
> Fixed log4j usage in some places.
> For example, the trailing {} is redundant when the last argument is an 
> exception:
> {code:java}
> LOG.error("Error while scrubbing pipelines {}", e);
> {code}
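The redundant placeholder matters because SLF4J treats a trailing Throwable specially: with a matching {} the exception is consumed as an ordinary format argument and its stack trace is lost. The sketch below is a naive stand-in for the placeholder substitution (not the actual SLF4J implementation) that illustrates the difference:

```java
public class PlaceholderDemo {
    // Naive stand-in for SLF4J-style "{}" substitution: each "{}" consumes
    // the next argument by calling its toString().
    static String format(String msg, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIdx = 0;
        int i = 0;
        while (i < msg.length()) {
            if (argIdx < args.length && msg.startsWith("{}", i)) {
                sb.append(args[argIdx++]);
                i += 2;
            } else {
                sb.append(msg.charAt(i++));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Exception e = new RuntimeException("disk full");
        // With "{}": the exception is flattened into the message text and
        // its stack trace would never reach the log.
        System.out.println(format("Error while scrubbing pipelines {}", e));
        // Without "{}": the real SLF4J keeps the trailing Throwable as a
        // Throwable and appends the full stack trace to the log entry.
        System.out.println(format("Error while scrubbing pipelines"));
    }
}
```

So the fixed call shape is `LOG.error("Error while scrubbing pipelines", e);` with no placeholder for the exception.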






[jira] [Resolved] (HDDS-4247) Fixed log4j usage in some places

2020-09-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-4247.

Resolution: Fixed

> Fixed log4j usage in some places
> 
>
> Key: HDDS-4247
> URL: https://issues.apache.org/jira/browse/HDDS-4247
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 1.0.0
>Reporter: Xie Lei
>Assignee: Xie Lei
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Fixed log4j usage in some places.
> For example, the trailing {} is redundant when the last argument is an 
> exception:
> {code:java}
> LOG.error("Error while scrubbing pipelines {}", e);
> {code}






[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1426: HDDS-4247. Fixed log4j usage in some places

2020-09-17 Thread GitBox


adoroszlai commented on pull request #1426:
URL: https://github.com/apache/hadoop-ozone/pull/1426#issuecomment-694447182


   Thanks @lamber-ken for the contribution.






[GitHub] [hadoop-ozone] adoroszlai merged pull request #1426: HDDS-4247. Fixed log4j usage in some places

2020-09-17 Thread GitBox


adoroszlai merged pull request #1426:
URL: https://github.com/apache/hadoop-ozone/pull/1426


   






[jira] [Updated] (HDDS-4122) Implement OM Delete Expired Open Key Request and Response

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4122:
-
Labels: pull-request-available  (was: )

> Implement OM Delete Expired Open Key Request and Response
> -
>
> Key: HDDS-4122
> URL: https://issues.apache.org/jira/browse/HDDS-4122
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Minor
>  Labels: pull-request-available
>
> Create an OM request and response that allows moving open keys from the open 
> key table to the deleted table in OM HA. The request portion of this 
> operation, which updates the open key table cache, will use a bucket lock.
>  






[GitHub] [hadoop-ozone] errose28 opened a new pull request #1435: HDDS-4122. Implement OM Delete Expired Open Key Request and Response

2020-09-17 Thread GitBox


errose28 opened a new pull request #1435:
URL: https://github.com/apache/hadoop-ozone/pull/1435


   ## What changes were proposed in this pull request?
   
   Implement OM request and response for moving keys from the open key table to 
the deleted table. These will be used as part of parent jira HDDS-4120 to 
implement the open key cleanup service.
   
   ## What is the link to the Apache JIRA
   
   HDDS-4122
   
   ## How was this patch tested?
   
   Unit tests were added for the new OMRequest and OMResponse classes.
   
   ## Notes
   
   Leaving as draft while I incorporate HDDS-4053 into the OM request and 
response.
   






[jira] [Updated] (HDDS-4122) Implement OM Delete Expired Open Key Request and Response

2020-09-17 Thread Ethan Rose (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Rose updated HDDS-4122:
-
Description: 
Create an OM request and response that allows moving open keys from the open 
key table to the deleted table in OM HA. The request portion of this operation, 
which updates the open key table cache, will use a bucket lock.

 

  was:Implement the deleteExpiredOpenKey method in KeyManagerImpl to atomically 
move the key passed as a parameter from the open key table to the deleted keys 
table. This operation will be done using a Ratis transaction for OM HA, and 
will be done with a bucket lock.


> Implement OM Delete Expired Open Key Request and Response
> -
>
> Key: HDDS-4122
> URL: https://issues.apache.org/jira/browse/HDDS-4122
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Minor
>
> Create an OM request and response that allows moving open keys from the open 
> key table to the deleted table in OM HA. The request portion of this 
> operation, which updates the open key table cache, will use a bucket lock.
>  






[jira] [Updated] (HDDS-4122) Implement OM Delete Expired Open Key Request and Response

2020-09-17 Thread Ethan Rose (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Rose updated HDDS-4122:
-
Summary: Implement OM Delete Expired Open Key Request and Response  (was: 
Implement KeyManagerImpl#deleteExpiredOpenKey)

> Implement OM Delete Expired Open Key Request and Response
> -
>
> Key: HDDS-4122
> URL: https://issues.apache.org/jira/browse/HDDS-4122
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: OM HA
>Reporter: Ethan Rose
>Assignee: Ethan Rose
>Priority: Minor
>
> Implement the deleteExpiredOpenKey method in KeyManagerImpl to atomically 
> move the key passed as a parameter from the open key table to the deleted 
> keys table. This operation will be done using a Ratis transaction for OM HA, 
> and will be done with a bucket lock.






[GitHub] [hadoop-ozone] lamber-ken commented on pull request #1433: HDDS-4255. Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread GitBox


lamber-ken commented on pull request #1433:
URL: https://github.com/apache/hadoop-ozone/pull/1433#issuecomment-694359242


   retest






[jira] [Resolved] (HDDS-4241) Support HADOOP_TOKEN_FILE_LOCATION for Ozone token CLI

2020-09-17 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao resolved HDDS-4241.
--
Fix Version/s: 1.1.0
   Resolution: Fixed

> Support HADOOP_TOKEN_FILE_LOCATION for Ozone token CLI
> --
>
> Key: HDDS-4241
> URL: https://issues.apache.org/jira/browse/HDDS-4241
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Currently, the Ozone token CLI produces tokens in base64 encoded format. This 
> is not compatible with HADOOP_TOKEN_FILE_LOCATION and can't be used directly 
> by the Ozone/Hadoop CLI to authenticate. This ticket is opened to persist the 
> Ozone token in a format that is compatible with HADOOP_TOKEN_FILE_LOCATION, 
> along with tests. 






[jira] [Updated] (HDDS-4241) Support HADOOP_TOKEN_FILE_LOCATION for Ozone token CLI

2020-09-17 Thread Xiaoyu Yao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HDDS-4241:
-
Fix Version/s: 1.0.1

> Support HADOOP_TOKEN_FILE_LOCATION for Ozone token CLI
> --
>
> Key: HDDS-4241
> URL: https://issues.apache.org/jira/browse/HDDS-4241
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Security
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0, 1.0.1
>
>
> Currently, the Ozone token CLI produces tokens in base64 encoded format. This 
> is not compatible with HADOOP_TOKEN_FILE_LOCATION and can't be used directly 
> by the Ozone/Hadoop CLI to authenticate. This ticket is opened to persist the 
> Ozone token in a format that is compatible with HADOOP_TOKEN_FILE_LOCATION, 
> along with tests. 






[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #1422: HDDS-4241. Support HADOOP_TOKEN_FILE_LOCATION for Ozone token CLI.

2020-09-17 Thread GitBox


xiaoyuyao commented on pull request #1422:
URL: https://github.com/apache/hadoop-ozone/pull/1422#issuecomment-694324512


   Thanks @adoroszlai for the review. Patch has been merged. 






[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #1422: HDDS-4241. Support HADOOP_TOKEN_FILE_LOCATION for Ozone token CLI.

2020-09-17 Thread GitBox


xiaoyuyao merged pull request #1422:
URL: https://github.com/apache/hadoop-ozone/pull/1422


   






[jira] [Updated] (HDDS-4256) Add unit tests for Proto [de]serialization-OM

2020-09-17 Thread Peter Orova (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Orova updated HDDS-4256:
--
Description: 
Create tests for serialization / deserialization methods of the below classes
 * OmBucketArgs
 * OmKeyLocationInfo
 * OmKeyLocationInfoGroup
 * OmMultipartKeyInfo
 * OmOzoneAclMap
 * OzoneAclUtil
 * OzoneFileStatus
 * RepeatedOmKeyInfo
 * S3SecretValue
 * ServiceInfo

  was:Umbrella Jira for adding unit tests for all proto conversions in 
OmClientProtocol. 


> Add unit tests for Proto [de]serialization-OM
> -
>
> Key: HDDS-4256
> URL: https://issues.apache.org/jira/browse/HDDS-4256
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Peter Orova
>Assignee: Peter Orova
>Priority: Major
>
> Create tests for serialization / deserialization methods of the below classes
>  * OmBucketArgs
>  * OmKeyLocationInfo
>  * OmKeyLocationInfoGroup
>  * OmMultipartKeyInfo
>  * OmOzoneAclMap
>  * OzoneAclUtil
>  * OzoneFileStatus
>  * RepeatedOmKeyInfo
>  * S3SecretValue
>  * ServiceInfo
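Tests for the classes listed above typically follow a round-trip pattern: serialize an object to its proto message and back, then assert field-by-field equality. The sketch below illustrates that pattern only; the `Secret` class and its `toProto`/`fromProto` methods are illustrative stand-ins, not the actual Ozone helper classes or protobuf API:

```java
import java.util.Objects;

public class RoundTripSketch {
    // Stand-in for an OM helper class such as S3SecretValue.
    static final class Secret {
        final String kerberosId;
        final String awsSecret;
        Secret(String kerberosId, String awsSecret) {
            this.kerberosId = kerberosId;
            this.awsSecret = awsSecret;
        }
        // Stand-in for getProtobuf(): encode to a simple wire string.
        String toProto() { return kerberosId + "|" + awsSecret; }
        // Stand-in for fromProtobuf(): decode back into the helper class.
        static Secret fromProto(String proto) {
            String[] parts = proto.split("\\|", 2);
            return new Secret(parts[0], parts[1]);
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Secret)) { return false; }
            Secret s = (Secret) o;
            return kerberosId.equals(s.kerberosId)
                && awsSecret.equals(s.awsSecret);
        }
        @Override public int hashCode() {
            return Objects.hash(kerberosId, awsSecret);
        }
    }

    public static void main(String[] args) {
        Secret original = new Secret("user@EXAMPLE.COM", "secret123");
        // The round-trip must preserve every field.
        Secret roundTripped = Secret.fromProto(original.toProto());
        System.out.println(original.equals(roundTripped)); // true
    }
}
```

One such test per listed class catches fields that are silently dropped or reordered during proto conversion.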






[GitHub] [hadoop-ozone] captainzmc commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-17 Thread GitBox


captainzmc commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r490270995



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -598,6 +628,20 @@ public void setBucketStorageType(
 ozoneManagerClient.setBucketProperty(builder.build());
   }
 
+  @Override
+  public void setBucketQuota(String volumeName, String bucketName,
+  long quotaInCounts, long quotaInBytes) throws IOException {
+HddsClientUtils.verifyResourceName(bucketName);
+verifyQuota(quotaInCounts, quotaInBytes);
+OmBucketArgs.Builder builder = OmBucketArgs.newBuilder();
+builder.setVolumeName(volumeName)
+.setBucketName(bucketName)
+.setQuotaInBytes(quotaInBytes)
+.setQuotaInCounts(quotaInCounts);
+ozoneManagerClient.setBucketProperty(builder.build());

Review comment:
   Bucket provides the setBucketProperty method and the corresponding request 
to modify bucket information. This method already exists; it is not something I 
added in this patch. So I'm reusing it here.








[GitHub] [hadoop-ozone] captainzmc commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-17 Thread GitBox


captainzmc commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r490257305



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -439,6 +434,15 @@ public void createBucket(
 verifyVolumeName(volumeName);
 verifyBucketName(bucketName);
 Preconditions.checkNotNull(bucketArgs);
+verifyCountsQuota(bucketArgs.getQuotaInCounts());
+verifySpaceQuota(bucketArgs.getQuotaInBytes());
+
+// When creating buckets using the API, if the user does not specify quota,
+// 0 is passed in by default, which should be set to -1.

Review comment:
   In proto, an unsigned field can't have a negative default value. The logic here is the same in both places, so I can encapsulate it as a method, which should be better.
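
   To make the default handling concrete, here is a minimal sketch of the mapping described above (assuming, per the discussion, that `OzoneConsts.QUOTA_RESET` is -1 and that an unset proto field arrives as 0; both are assumptions, not verified against the Ozone source):

   ```java
   // Sketch only: maps proto's default 0 (field unset) to the assumed
   // "no quota" sentinel QUOTA_RESET = -1.
   class QuotaDefaults {
       static final long QUOTA_RESET = -1; // assumed OzoneConsts.QUOTA_RESET

       /** Map an unset (0) quota from the wire to the reset sentinel. */
       static long normalize(long rawQuota) {
           return rawQuota == 0 ? QUOTA_RESET : rawQuota;
       }

       public static void main(String[] args) {
           System.out.println(normalize(0));    // unset -> -1
           System.out.println(normalize(1024)); // explicit value kept -> 1024
       }
   }
   ```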





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-17 Thread GitBox


adoroszlai commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r490213142



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/CreateBucketHandler.java
##
@@ -46,6 +47,14 @@
   "false/unspecified indicates otherwise")
   private Boolean isGdprEnforced;
 
+  @Option(names = {"--spaceQuota", "-sq"},

Review comment:
   Short options should be only one character to support option grouping (i.e. `-sq` should be two separate options, same as `-s -q`).
   
   Long options should not be camel-case, rather lower-case using dash as 
separator.
   
   ```suggestion
 @Option(names = {"--space-quota", "-s"},
   ```

##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/UpdateBucketHandler.java
##
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.apache.hadoop.ozone.shell.bucket;
+
+import org.apache.hadoop.hdds.client.OzoneQuota;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.shell.OzoneAddress;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+import java.io.IOException;
+
+/**
+ * create bucket handler.
+ */
+@Command(name = "update",
+description = "Updates parameter of the buckets")
+public class UpdateBucketHandler extends BucketHandler {
+
+  @Option(names = {"--spaceQuota", "-sq"},
+  description = "Quota in bytes of the newly created volume (eg. 1GB)")

Review comment:
   Please consider creating a [mixin](https://picocli.info/#_mixins) with 
the two quota options to ensure consistency and reduce duplication.  See 
[`ListOptions`](https://github.com/apache/hadoop-ozone/blob/079ee7fc2a223e1251b16b9c42004aa2a27bf0f4/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/ListOptions.java)
 for example, and its usages in 
   
   
https://github.com/apache/hadoop-ozone/blob/079ee7fc2a223e1251b16b9c42004aa2a27bf0f4/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/volume/ListVolumeHandler.java#L51-L52
   
   and
   
   
https://github.com/apache/hadoop-ozone/blob/079ee7fc2a223e1251b16b9c42004aa2a27bf0f4/hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/ListBucketHandler.java#L42-L43
   
   Option descriptions can be generic, without mentioning "newly created 
volume" etc., so they can be applied to create|update volume|bucket.

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
##
@@ -192,6 +192,10 @@ public OMClientResponse 
validateAndUpdateCache(OzoneManager ozoneManager,
 throw new OMException("Bucket already exist", BUCKET_ALREADY_EXISTS);
   }
 
+  //Check quotaInBytes and quotaInCounts to update
+  checkQuotaBytesValid(omVolumeArgs, omBucketInfo);
+  checkQuotaCountsValid(omVolumeArgs, omBucketInfo);

Review comment:
   Argument validity should be checked before acquiring lock, preferably in 
`preExecute`.
   
   Also, please verify that the bucket is not a link, if quota is set in the 
request.  Links cannot have actual content, so they should not have any quota 
defined, similar to encryption:
   
   
https://github.com/apache/hadoop-ozone/blob/079ee7fc2a223e1251b16b9c42004aa2a27bf0f4/hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java#L127-L130
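
   A minimal sketch of the kind of link-bucket guard suggested here (a hypothetical helper, not the actual OMBucketCreateRequest code; the `isLink` flag, the -1 `QUOTA_RESET` sentinel, and the message text are all assumptions):

   ```java
   // Hypothetical sketch: reject quota arguments on link buckets, which
   // cannot hold data. Assumes QUOTA_RESET (-1) means "no quota set".
   class LinkBucketQuotaCheck {
       static final long QUOTA_RESET = -1;

       static void verifyNoQuotaOnLink(boolean isLink,
               long quotaInBytes, long quotaInCounts) {
           if (isLink
                   && (quotaInBytes != QUOTA_RESET
                       || quotaInCounts != QUOTA_RESET)) {
               throw new IllegalArgumentException(
                   "Quota cannot be set on a link bucket");
           }
       }

       public static void main(String[] args) {
           verifyNoQuotaOnLink(false, 1024, 100);               // normal bucket
           verifyNoQuotaOnLink(true, QUOTA_RESET, QUOTA_RESET); // link, no quota
           try {
               verifyNoQuotaOnLink(true, 1024, QUOTA_RESET);    // link + quota
           } catch (IllegalArgumentException expected) {
               System.out.println("rejected: " + expected.getMessage());
           }
       }
   }
   ```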

##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/CreateBucketHandler.java
##
@@ -46,6 +47,14 @@
   "false/unspecified indicates otherwise")
   private Boolean isGdprEnforced;
 
+  @Option(names = {"--spaceQuota", "-sq"},
+  description = "Quota in bytes of the newly created bucket (eg. 1GB)")
+  private String quotaInBytes;
+
+  @Option(names = {"--quota", "-q"},

Review comment:
   I would suggest dropping the short option.
   
   ```suggestion
 @Option(names = {"--key-quota"},
   ```

##
File path: 

[GitHub] [hadoop-ozone] captainzmc commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-17 Thread GitBox


captainzmc commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r490231852



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/BucketArgs.java
##
@@ -57,6 +57,9 @@
   private final String sourceVolume;
   private final String sourceBucket;
 
+  private long quotaInBytes;

Review comment:
   I will change the String in VolumeArgs to long
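
   For illustration, a sketch of the size-string-to-long conversion this unification implies (a hypothetical `parseBytes` helper using binary units; Ozone's actual parser is `OzoneQuota.parseQuota`, whose behavior may differ):

   ```java
   // Hypothetical sketch (not Ozone's OzoneQuota.parseQuota): convert a
   // human-readable size string such as "1GB" into a long byte count.
   class QuotaParse {
       static long parseBytes(String s) {
           String v = s.trim().toUpperCase();
           long mult = 1L;
           if (v.endsWith("GB")) {
               mult = 1024L * 1024 * 1024;
               v = v.substring(0, v.length() - 2);
           } else if (v.endsWith("MB")) {
               mult = 1024L * 1024;
               v = v.substring(0, v.length() - 2);
           } else if (v.endsWith("KB")) {
               mult = 1024L;
               v = v.substring(0, v.length() - 2);
           }
           return Long.parseLong(v.trim()) * mult;
       }

       public static void main(String[] args) {
           System.out.println(parseBytes("1GB"));  // 1073741824
           System.out.println(parseBytes("10MB")); // 10485760
       }
   }
   ```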





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4256) Add unit tests for Proto [de]serialization-OM

2020-09-17 Thread Peter Orova (Jira)
Peter Orova created HDDS-4256:
-

 Summary: Add unit tests for Proto [de]serialization-OM
 Key: HDDS-4256
 URL: https://issues.apache.org/jira/browse/HDDS-4256
 Project: Hadoop Distributed Data Store
  Issue Type: Test
Reporter: Peter Orova


Umbrella Jira for adding unit tests for all proto conversions in 
OmClientProtocol. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-4256) Add unit tests for Proto [de]serialization-OM

2020-09-17 Thread Peter Orova (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Orova reassigned HDDS-4256:
-

Assignee: Peter Orova

> Add unit tests for Proto [de]serialization-OM
> -
>
> Key: HDDS-4256
> URL: https://issues.apache.org/jira/browse/HDDS-4256
> Project: Hadoop Distributed Data Store
>  Issue Type: Test
>Reporter: Peter Orova
>Assignee: Peter Orova
>Priority: Major
>
> Umbrella Jira for adding unit tests for all proto conversions in 
> OmClientProtocol. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-17 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r490192669



##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/UpdateBucketHandler.java
##
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+package org.apache.hadoop.ozone.shell.bucket;
+
+import org.apache.hadoop.hdds.client.OzoneQuota;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.shell.OzoneAddress;
+import picocli.CommandLine.Command;
+import picocli.CommandLine.Option;
+
+import java.io.IOException;
+
+/**
+ * create bucket handler.
+ */
+@Command(name = "update",

Review comment:
   update -> setquota. Could you please also change the volume quota update command to setquota?
   
   We also need a remove quota CLI for both volume and bucket.  





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org





[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-17 Thread GitBox


ChenSammi commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r490174497



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -494,6 +500,30 @@ private static void verifyBucketName(String bucketName) 
throws OMException {
 }
   }
 
+  private static void verifyCountsQuota(long quota) throws OMException {
+if ((quota < OzoneConsts.QUOTA_RESET)) {
+  throw new IllegalArgumentException("Invalid values for quota : " +
+  "counts quota is :" + quota + ".");
+}
+  }
+
+  private static void verifySpaceQuota(long quota) throws OMException {
+if ((quota < OzoneConsts.QUOTA_RESET)) {
+  throw new IllegalArgumentException("Invalid values for quota : " +
+  "space quota is :" + quota + ".");
+}
+  }
+
+  private static void verifyQuota(long quotaInCounts, long quotaInBytes)

Review comment:
   This verifyQuota has the same function as the two methods above. I suggest choosing one and removing the other.
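
   A sketch of the suggested consolidation (a hypothetical helper, not the actual RpcClient code; assumes `OzoneConsts.QUOTA_RESET` is -1):

   ```java
   // Sketch: one parameterized check instead of three near-identical
   // verify methods. -1 (QUOTA_RESET) means "quota unset" and is valid.
   class QuotaVerifier {
       static final long QUOTA_RESET = -1;

       static void verifyQuota(String name, long quota) {
           if (quota < QUOTA_RESET) {
               throw new IllegalArgumentException(
                   "Invalid value for " + name + " quota: " + quota);
           }
       }

       public static void main(String[] args) {
           verifyQuota("space", 10L * 1024 * 1024 * 1024); // valid
           verifyQuota("counts", QUOTA_RESET);             // -1 = unset, valid
           try {
               verifyQuota("counts", -100);
           } catch (IllegalArgumentException expected) {
               System.out.println(expected.getMessage());
           }
       }
   }
   ```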

##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -439,6 +434,15 @@ public void createBucket(
 verifyVolumeName(volumeName);
 verifyBucketName(bucketName);
 Preconditions.checkNotNull(bucketArgs);
+verifyCountsQuota(bucketArgs.getQuotaInCounts());
+verifySpaceQuota(bucketArgs.getQuotaInBytes());
+
+// When creating buckets using the API, if the user does not specify quota,
+// 0 is passed in by default, which should be set to -1.

Review comment:
   Can we add a default value of -1 to the newly added fields in the proto file? 

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/bucket/OMBucketCreateRequest.java
##
@@ -297,4 +301,28 @@ private BucketEncryptionInfoProto getBeinfo(
 CipherSuite.convert(metadata.getCipher(;
 return bekb.build();
   }
+
+  public void checkQuotaBytesValid(OmVolumeArgs omVolumeArgs,
+  OmBucketInfo omBucketInfo) {

Review comment:
   indent

##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/BucketArgs.java
##
@@ -57,6 +57,9 @@
   private final String sourceVolume;
   private final String sourceBucket;
 
+  private long quotaInBytes;

Review comment:
   quotaInBytes in VolumeArgs has String as its type. Can we unify them? 

##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -598,6 +628,20 @@ public void setBucketStorageType(
 ozoneManagerClient.setBucketProperty(builder.build());
   }
 
+  @Override
+  public void setBucketQuota(String volumeName, String bucketName,
+  long quotaInCounts, long quotaInBytes) throws IOException {
+HddsClientUtils.verifyResourceName(bucketName);

Review comment:
   verify volume 

##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -598,6 +628,20 @@ public void setBucketStorageType(
 ozoneManagerClient.setBucketProperty(builder.build());
   }
 
+  @Override
+  public void setBucketQuota(String volumeName, String bucketName,
+  long quotaInCounts, long quotaInBytes) throws IOException {
+HddsClientUtils.verifyResourceName(bucketName);
+verifyQuota(quotaInCounts, quotaInBytes);
+OmBucketArgs.Builder builder = OmBucketArgs.newBuilder();
+builder.setVolumeName(volumeName)
+.setBucketName(bucketName)
+.setQuotaInBytes(quotaInBytes)
+.setQuotaInCounts(quotaInCounts);
+ozoneManagerClient.setBucketProperty(builder.build());

Review comment:
   setBucketQuota?  setBucketProperty?

##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/CreateBucketHandler.java
##
@@ -46,6 +47,14 @@
   "false/unspecified indicates otherwise")
   private Boolean isGdprEnforced;
 
+  @Option(names = {"--spaceQuota", "-sq"},
+  description = "Quota in bytes of the newly created bucket (eg. 1GB)")
+  private String quotaInBytes;
+
+  @Option(names = {"--quota", "-q"},

Review comment:
   --quota -> --keyQuota   -q -> -kq

##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/UpdateBucketHandler.java
##
@@ -0,0 +1,70 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, 

[GitHub] [hadoop-ozone] maobaolong commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-17 Thread GitBox


maobaolong commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r490169874



##
File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/TestOMRequestUtils.java
##
@@ -283,7 +285,8 @@ public static void addVolumeToDB(String volumeName, String 
ownerName,
 OmVolumeArgs omVolumeArgs =
 OmVolumeArgs.newBuilder().setCreationTime(Time.now())
 .setVolume(volumeName).setAdminName(ownerName)
-.setOwnerName(ownerName).build();
+.setOwnerName(ownerName).setQuotaInBytes(1024 * GB)

Review comment:
   If you want to keep `QuotaInBytes` and `QuotaInCounts` big enough, you can set them to Long.MAX_VALUE.

##
File path: 
hadoop-ozone/integration-test/src/test/java/org/apache/hadoop/ozone/client/rpc/TestOzoneRpcClientAbstract.java
##
@@ -269,6 +269,68 @@ public void testVolumeSetOwner() throws IOException {
 proxy.setVolumeOwner(volumeName, ownerName);
   }
 
+  @Test
+  public void testSetBucketQuota() throws IOException {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+store.createVolume(volumeName);
+store.getVolume(volumeName).setQuota(OzoneQuota.parseQuota(
+"10GB", 1L));
+store.getVolume(volumeName).createBucket(bucketName);
+OzoneBucket bucket = store.getVolume(volumeName).getBucket(bucketName);
+
+Assert.assertEquals(OzoneConsts.QUOTA_RESET, bucket.getQuotaInBytes());
+Assert.assertEquals(OzoneConsts.QUOTA_RESET, bucket.getQuotaInCounts());
+store.getVolume(volumeName).getBucket(bucketName).setQuota(
+OzoneQuota.parseQuota("1GB", 1000L));
+OzoneBucket ozoneBucket = 
store.getVolume(volumeName).getBucket(bucketName);
+Assert.assertEquals(1024 * 1024 * 1024,
+ozoneBucket.getQuotaInBytes());
+Assert.assertEquals(1000L, ozoneBucket.getQuotaInCounts());
+  }
+
+  @Test
+  public void testSetBucketQuotaIllegal() throws IOException {
+String volumeName = UUID.randomUUID().toString();
+String bucketName = UUID.randomUUID().toString();
+store.createVolume(volumeName);
+store.getVolume(volumeName).createBucket(bucketName);
+
+try {
+  store.getVolume(volumeName).getBucket(bucketName).setQuota(
+  OzoneQuota.parseQuota("1GB", -100L));
+} catch (IllegalArgumentException ex) {
+  GenericTestUtils.assertExceptionContains(
+  "Invalid values for quota", ex);
+}
+// The unit should be legal.
+try {
+  store.getVolume(volumeName).getBucket(bucketName).setQuota(
+  OzoneQuota.parseQuota("1TEST", 100L));
+} catch (IllegalArgumentException ex) {
+  GenericTestUtils.assertExceptionContains(
+  "Invalid values for quota", ex);
+}
+
+// The setting value cannot be greater than LONG.MAX_VALUE BYTES.
+try {
+  store.getVolume(volumeName).getBucket(bucketName).setQuota(
+  OzoneQuota.parseQuota("9GB", 100L));

Review comment:
   You can use "9223372036854775808" here; it stands for Long.MAX_VALUE + 1.
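
   To see why that string is a useful test input: `Long.MAX_VALUE` is 9223372036854775807, so "9223372036854775808" cannot be represented as a long and `Long.parseLong` rejects it. A quick stdlib check:

   ```java
   // 9223372036854775808 is Long.MAX_VALUE + 1, so Long.parseLong (and any
   // quota parser built on it) throws NumberFormatException for it -- a
   // handy input for the "greater than Long.MAX_VALUE bytes" test case.
   class QuotaOverflowDemo {
       public static void main(String[] args) {
           System.out.println(Long.MAX_VALUE); // 9223372036854775807
           try {
               Long.parseLong("9223372036854775808");
           } catch (NumberFormatException expected) {
               System.out.println("overflow rejected");
           }
       }
   }
   ```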





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] maobaolong commented on pull request #1369: HDDS-4104. Provide a way to get the default value and key of java-based-configuration easily

2020-09-17 Thread GitBox


maobaolong commented on pull request #1369:
URL: https://github.com/apache/hadoop-ozone/pull/1369#issuecomment-694162379


   @adoroszlai Thank you for merging this PR.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] llemec commented on pull request #1425: HDDS-2981 Add unit tests for Proto [de]serialization

2020-09-17 Thread GitBox


llemec commented on pull request #1425:
URL: https://github.com/apache/hadoop-ozone/pull/1425#issuecomment-694161427


   Hello @fapifta,
   
   Thank you for reviewing the diff. Addressing your points:
   
   The testgetFromProtobufOneMetadataOneAcl() test indeed only checks the presence 
of metadata and acl: looking at OmPrefixInfo#getFromProtobuf(), those parts of 
the message are included by calling the KeyValueUtil#getFromProtobuf() and 
OzoneAclUtil#fromProtobuf() methods respectively. Testing the contents of the 
Acls and Metadata would in fact mean testing those methods implicitly; I 
propose we test them explicitly, in a separate Jira.
   By the same token, for testGetProtobuf() I propose we test the getProtobuf 
methods of KeyValueUtil and OzoneAclUtil separately and explicitly. 
   In those explicit tests, I agree we should check the corner cases for both 
metadata and acls.
   Following this train of thought, Protobuf objects will clearly be necessary 
for testing the different protos in OM. That is the proposed role of 
TestInstanceHelper; true, at the moment it only contains the helper methods 
for OmPrefixInfo testing.
   
   Please let me know your thoughts on this.
   
   Thank you,
   
   ps: empty lines removal: sure.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sonarcloud[bot] commented on pull request #1433: HDDS-4255. Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread GitBox


sonarcloud[bot] commented on pull request #1433:
URL: https://github.com/apache/hadoop-ozone/pull/1433#issuecomment-694155477


   Kudos, SonarCloud Quality Gate passed!
   
   - Bugs: 0
   - Vulnerabilities: 0 (and 0 Security Hotspots to review)
   - Code Smells: 22
   - Coverage: 83.5%
   - Duplication: 0.0%
   
   The version of Java (1.8.0_232) you have used to run this analysis is 
deprecated and we will stop accepting it from October 2020. Please update to 
at least Java 11.
   Read more here: https://sonarcloud.io/documentation/upcoming/
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-4250) Fix wrong logger name

2020-09-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-4250:
--

Assignee: Xie Lei

> Fix wrong logger name
> -
>
> Key: HDDS-4250
> URL: https://issues.apache.org/jira/browse/HDDS-4250
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 1.1.0
>Reporter: Xie Lei
>Assignee: Xie Lei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Fix wrong logger names: the logger name doesn't match the class name.
> example
> {code:java}
> public class OMBucketSetAclRequest extends OMBucketAclRequest {
>   private static final Logger LOG =
>   LoggerFactory.getLogger(OMBucketAddAclRequest.class);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-4247) Fixed log4j usage in some places

2020-09-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai reassigned HDDS-4247:
--

Assignee: Xie Lei

> Fixed log4j usage in some places
> 
>
> Key: HDDS-4247
> URL: https://issues.apache.org/jira/browse/HDDS-4247
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 1.0.0
>Reporter: Xie Lei
>Assignee: Xie Lei
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Fixed log4j usage in some places.
> Examples: the {} is redundant.
> {code:java}
> LOG.error("Error while scrubbing pipelines {}", e);
> {code}
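Why the `{}` is redundant: under slf4j 1.6+ semantics (an assumption stated here, not quoted from the report), a trailing `Throwable` argument is reserved for the stack trace and is not used for placeholder substitution, so the `{}` stays literal in the output. A minimal self-contained simulation of that rule:

```java
import java.util.Arrays;

public class Slf4jPlaceholderDemo {

    // Toy re-implementation of the slf4j formatting rule: a trailing Throwable
    // is set aside for the stack trace and never substituted into "{}".
    static String format(String msg, Object... args) {
        Object[] subs = args;
        if (args.length > 0 && args[args.length - 1] instanceof Throwable) {
            subs = Arrays.copyOf(args, args.length - 1);
        }
        StringBuilder out = new StringBuilder();
        int argIdx = 0, i = 0;
        while (i < msg.length()) {
            if (i + 1 < msg.length() && msg.charAt(i) == '{'
                    && msg.charAt(i + 1) == '}' && argIdx < subs.length) {
                out.append(subs[argIdx++]);
                i += 2;
            } else {
                out.append(msg.charAt(i++));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Exception e = new RuntimeException("boom");
        // Redundant "{}": the throwable is not substituted, so "{}" stays literal.
        System.out.println(format("Error while scrubbing pipelines {}", e));
        // Preferred form: no placeholder at all.
        System.out.println(format("Error while scrubbing pipelines", e));
    }
}
```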






[GitHub] [hadoop-ozone] elek commented on pull request #1363: HDDS-3805. [OFS] Remove usage of OzoneClientAdapter interface

2020-09-17 Thread GitBox


elek commented on pull request #1363:
URL: https://github.com/apache/hadoop-ozone/pull/1363#issuecomment-694144300


   > I'm thinking of renaming BasicRootedOzoneClientAdapterImpl to 
BasicRootedOzoneFileSystemHelper. This way the class name should make more 
sense?
   
   Definitely better, IMHO. It's not clear why we need to move out some 
functions to a helper class (`"Can you please explain what are the differences 
between the two classes and the responsibilities?"`), but I can live with it, 
just to merge the patch earlier.
   
   > The class is dealing with OzoneFSStorageStatistics, but not used anywhere. 
The o3fs counterpart is OzoneClientAdapterImpl, which is used in 
OzoneFileSystem.
   
   This is only about the statistics. It is independent of what type of classes 
you have: 
   
You need an `OzoneFSStorageStatistics storageStatistics` which is updated 
for each of the operations, but only for Hadoop3 (!!!).
   
   For the old-school `o3fs` it's done by subclasses:
   
* Default implementation is hadoop2 compatibility, as `incrementCounter` 
does nothing
* non-Basic implementation increments a real counter.
   
   As far as I see we already have this logic for `ofs`, as the implementations of 
`RootedOzoneFileSystem` in the `hadoop2` and `hadoop3` projects are different. 
(The latter one updates the statistics.)
   
   If your helper class can be used from both projects without problem, you 
don't need to create two helper classes for the two use-cases. 
   






[GitHub] [hadoop-ozone] elek edited a comment on pull request #1363: HDDS-3805. [OFS] Remove usage of OzoneClientAdapter interface

2020-09-17 Thread GitBox


elek edited a comment on pull request #1363:
URL: https://github.com/apache/hadoop-ozone/pull/1363#issuecomment-694144300


   > I'm thinking of renaming BasicRootedOzoneClientAdapterImpl to 
BasicRootedOzoneFileSystemHelper. This way the class name should make more 
sense?
   
   Definitely better, IMHO. It's not clear why we need to move out some 
functions to a helper class (`"Can you please explain what are the differences 
between the two classes and the responsibilities?"`), but I can live with it, 
just to merge the patch earlier (not a big deal).
   
   > The class is dealing with OzoneFSStorageStatistics, but not used anywhere. 
The o3fs counterpart is OzoneClientAdapterImpl, which is used in 
OzoneFileSystem.
   
   This is only about the statistics. It is independent of what type of classes 
you have: 
   
You need an `OzoneFSStorageStatistics storageStatistics` which is updated 
for each of the operations, but only for Hadoop3 (!!!).
   
   For the old-school `o3fs` it's done by subclasses:
   
* Default implementation is hadoop2 compatibility, as `incrementCounter` 
does nothing
* non-Basic implementation increments a real counter.
   
   As far as I see we already have this logic for `ofs`, as the implementations of 
`RootedOzoneFileSystem` in the `hadoop2` and `hadoop3` projects are different. 
(The latter one updates the statistics.)
   
   If your helper class can be used from both projects without problem, you 
don't need to create two helper classes for the two use-cases. 
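The default no-op vs. overriding subclass pattern described above can be sketched as follows (a hedged illustration; the class and method names are stand-ins, not the actual Ozone adapter classes):

```java
import java.util.HashMap;
import java.util.Map;

public class StatisticsPattern {

    // "Basic" flavour: the Hadoop 2 path, where incrementCounter is a no-op.
    static class BasicAdapter {
        void incrementCounter(String op) {
            // no statistics on Hadoop 2
        }

        String read() {
            incrementCounter("read");
            return "data";
        }
    }

    // non-Basic flavour: the Hadoop 3 path overrides incrementCounter
    // to record a real counter per operation.
    static class Adapter extends BasicAdapter {
        final Map<String, Integer> stats = new HashMap<>();

        @Override
        void incrementCounter(String op) {
            stats.merge(op, 1, Integer::sum);
        }
    }

    public static void main(String[] args) {
        Adapter adapter = new Adapter();
        adapter.read();
        adapter.read();
        System.out.println(adapter.stats.get("read")); // 2
    }
}
```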
   






[GitHub] [hadoop-ozone] elek commented on pull request #1418: HDDS-4209. S3A Filesystem does not work with Ozone S3.

2020-09-17 Thread GitBox


elek commented on pull request #1418:
URL: https://github.com/apache/hadoop-ozone/pull/1418#issuecomment-694135650


   > My reasoning to take this approach is once HDDS-2939 comes in Ozone 
directory and key are not distinguished with trailing "/". So, using putObject 
when length is zero might not be a correct solution in OM, as the entries will 
be still created in keyTable. For this, if we want to go this route, then might 
be if ending with "/" and size is zero, in putObject we should create an entry 
in the directory table.
   
   Thanks for explaining it @bharatviswa504. I re-read the HDDS-2939 spec, and it's 
not clear how the 100% compatibility (in case 
`OZONE_OM_ENABLE_FILESYSTEM_PATHS=false`) can be achieved with prefixes if we 
don't create key entries for all the keys.
   
   Can I create both `/a/b/c/d` and `/a/b/c/d/` keys with HDDS-2939 
(`OZONE_OM_ENABLE_FILESYSTEM_PATHS=false`)?
   
(cc @rakeshadr, @linyiqun)
   
   > In this specific case, intermediate directories will be created even if 
OZONE_OM_ENABLE_FILESYSTEM_PATHS is not enabled. I created HDDS-4238 to make it 
more visible.
   
   This is still a problem. I think it's easier to fix by adjusting the 
normalization to handle this case. HDDS-2939 seems to have bigger problems 
which should be solved before the merge; until then we can quickly fix this 
problem.
   
   But if you have any other proposed solution for the mentioned problem 
(`intermediate directories will be created even if 
OZONE_OM_ENABLE_FILESYSTEM_PATHS is not enabled`), please let me know.
   






[jira] [Updated] (HDDS-3727) Volume space: check quotaUsageInBytes when write key

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3727:
-
Labels: pull-request-available  (was: )

> Volume space: check quotaUsageInBytes when write key
> 
>
> Key: HDDS-3727
> URL: https://issues.apache.org/jira/browse/HDDS-3727
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
>







[GitHub] [hadoop-ozone] captainzmc opened a new pull request #1434: HDDS-3727. Volume space: check quotaUsageInBytes when write key.

2020-09-17 Thread GitBox


captainzmc opened a new pull request #1434:
URL: https://github.com/apache/hadoop-ozone/pull/1434


   ## What changes were proposed in this pull request?
   
   Currently, the Quota setting does not take effect. HDDS-541 tracks 
all the work needed to complete Quota support.
   This PR is a subtask of HDDS-541.
   
   Volume already increments usedBytes on write, based on 
HDDS-4053.
   In this PR we check whether the Volume can be written to when we write a key, 
if the volume space quota is enabled.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3727
   
   ## How was this patch tested?
   
   UT added.
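The check this PR describes can be sketched roughly as follows (an illustrative sketch only, not the actual OM code path; the convention that a non-positive quota means "quota not enabled" is an assumption of this sketch):

```java
public class VolumeQuotaCheck {

    // Reject a key write when the volume's usedBytes plus the new key size
    // would exceed its quotaInBytes; skip the check when no quota is set.
    static boolean canWriteKey(long quotaInBytes, long usedBytes, long keySizeBytes) {
        if (quotaInBytes <= 0) {
            return true; // volume space quota not enabled (assumed convention)
        }
        return usedBytes + keySizeBytes <= quotaInBytes;
    }

    public static void main(String[] args) {
        System.out.println(canWriteKey(-1, 0, 100));      // true: no quota set
        System.out.println(canWriteKey(1024, 900, 100));  // true: 1000 <= 1024
        System.out.println(canWriteKey(1024, 1000, 100)); // false: 1100 > 1024
    }
}
```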
   






[jira] [Updated] (HDDS-4104) Provide a way to get the default value and key of java-based-configuration easily

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4104:
-
Labels: pull-request-available  (was: )

> Provide a way to get the default value and key of java-based-configuration 
> easily
> -
>
> Key: HDDS-4104
> URL: https://issues.apache.org/jira/browse/HDDS-4104
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Affects Versions: 1.0.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> - getDefaultValue
> - getKeyName






[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1369: HDDS-4104. Provide a way to get the default value and key of java-based-configuration easily

2020-09-17 Thread GitBox


adoroszlai commented on pull request #1369:
URL: https://github.com/apache/hadoop-ozone/pull/1369#issuecomment-694112238


   Thanks @maobaolong for the contribution.  Merged to master.






[jira] [Updated] (HDDS-4104) Provide a way to get the default value and key of java-based-configuration easily

2020-09-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4104:
---
Labels:   (was: pull-request-available)

> Provide a way to get the default value and key of java-based-configuration 
> easily
> -
>
> Key: HDDS-4104
> URL: https://issues.apache.org/jira/browse/HDDS-4104
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Affects Versions: 1.0.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
> Fix For: 1.1.0
>
>
> - getDefaultValue
> - getKeyName






[jira] [Resolved] (HDDS-4104) Provide a way to get the default value and key of java-based-configuration easily

2020-09-17 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-4104.

Fix Version/s: 1.1.0
   Resolution: Implemented

> Provide a way to get the default value and key of java-based-configuration 
> easily
> -
>
> Key: HDDS-4104
> URL: https://issues.apache.org/jira/browse/HDDS-4104
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Affects Versions: 1.0.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> - getDefaultValue
> - getKeyName






[GitHub] [hadoop-ozone] adoroszlai merged pull request #1369: HDDS-4104. Provide a way to get the default value and key of java-based-configuration easily

2020-09-17 Thread GitBox


adoroszlai merged pull request #1369:
URL: https://github.com/apache/hadoop-ozone/pull/1369


   






[GitHub] [hadoop-ozone] elek opened a new pull request #1433: HDDS-4255. Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread GitBox


elek opened a new pull request #1433:
URL: https://github.com/apache/hadoop-ozone/pull/1433


   ## What changes were proposed in this pull request?
   
   Versions of Ant and JDiff are not used in the ozone project, but we have some 
version declarations (inherited from the Hadoop parent pom, which was used as a 
base for the main pom.xml).
   
   As the (unused) Ant version has security issues, I would remove them to 
avoid any confusion.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4255
   
   ## How was this patch tested?
   
   Clean build + `rg org.apache.ant pom.xml`






[jira] [Updated] (HDDS-4255) Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4255:
-
Labels: pull-request-available  (was: )

> Remove unused Ant and Jdiff dependency versions
> ---
>
> Key: HDDS-4255
> URL: https://issues.apache.org/jira/browse/HDDS-4255
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>
> Versions of Ant and JDiff are not used in the ozone project, but we have some 
> version declarations (inherited from the Hadoop parent pom, which was used as a 
> base for the main pom.xml).
> As the (unused) Ant version has security issues, I would remove them to avoid 
> any confusion.






[jira] [Created] (HDDS-4255) Remove unused Ant and Jdiff dependency versions

2020-09-17 Thread Marton Elek (Jira)
Marton Elek created HDDS-4255:
-

 Summary: Remove unused Ant and Jdiff dependency versions
 Key: HDDS-4255
 URL: https://issues.apache.org/jira/browse/HDDS-4255
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Marton Elek
Assignee: Marton Elek


Versions of Ant and JDiff are not used in the ozone project, but we have some 
version declarations (inherited from the Hadoop parent pom, which was used as a 
base for the main pom.xml).

As the (unused) Ant version has security issues, I would remove them to avoid 
any confusion.






[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-17 Thread GitBox


linyiqun commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r490064398



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerStarter.java
##
@@ -98,6 +98,28 @@ public void initOm()
 }
   }
 
+
+  /**
+   * This function implements a sub-command to allow the OM to be
+   * "prepared for upgrade".
+   */
+  @CommandLine.Command(name = "--prepareForUpgrade",
+  aliases = {"--prepareForDowngrade", "--flushTransactions"},

Review comment:
   Will the prepareForUpgrade command be sent to multiple OMs simultaneously 
here? Or should we trigger the prepareForUpgrade command for each OM service? 








[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-17 Thread GitBox


linyiqun commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r490048885



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1179,15 +1229,22 @@ public void start() throws IOException {
   // Allow OM to start as Http Server failure is not fatal.
   LOG.error("OM HttpServer failed to start.", ex);
 }
-omRpcServer.start();
-isOmRpcServerRunning = true;
 
+if (!prepareForUpgrade) {
+  omRpcServer.start();
+  isOmRpcServerRunning = true;
+}

Review comment:
   During prepareForUpgrade, the RPC server is not started. So we should 
also have a corresponding command to trigger a restart of the RPC server. 
Otherwise, after all txns are applied, new requests still cannot get in.








[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-17 Thread GitBox


linyiqun commented on a change in pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#discussion_r490045948



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -994,6 +1005,45 @@ public static boolean omInit(OzoneConfiguration conf) 
throws IOException,
 }
   }
 
+  public boolean applyAllPendingTransactions()
+  throws InterruptedException, IOException {
+
+if (!isRatisEnabled) {
+  LOG.info("Ratis not enabled. Nothing to do.");
+  return true;
+}
+
+String purgeConfig = omRatisServer.getServer()
+.getProperties().get(PURGE_UPTO_SNAPSHOT_INDEX_KEY);
+if (!Boolean.parseBoolean(purgeConfig)) {
+  throw new IllegalStateException("Cannot prepare OM for Upgrade since  " +
+  "raft.server.log.purge.upto.snapshot.index is not true");
+}
+
+waitForAllTxnsApplied(omRatisServer.getOmStateMachine(),
+omRatisServer.getRaftGroup(),
+(RaftServerProxy) omRatisServer.getServer(),
+TimeUnit.MINUTES.toSeconds(5));

Review comment:
   Can we make maxTimeToWaitSeconds configurable?
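   A possible shape for that suggestion (the property key and default below are 
hypothetical, not an existing Ozone configuration entry; plain `Properties` is 
used instead of Hadoop's `Configuration` to keep the sketch self-contained):

```java
import java.util.Properties;

public class ConfigurableTimeout {

    // Hypothetical key name for the wait bound.
    static final String KEY = "ozone.om.prepare.upgrade.wait.seconds";
    // Mirrors the hard-coded TimeUnit.MINUTES.toSeconds(5) in the diff.
    static final long DEFAULT_WAIT_SECONDS = 300;

    // Read the timeout from configuration, falling back to the default.
    static long maxTimeToWaitSeconds(Properties conf) {
        return Long.parseLong(
            conf.getProperty(KEY, Long.toString(DEFAULT_WAIT_SECONDS)));
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(maxTimeToWaitSeconds(conf)); // 300 (default)
        conf.setProperty(KEY, "120");
        System.out.println(maxTimeToWaitSeconds(conf)); // 120 (overridden)
    }
}
```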

##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
##
@@ -1179,15 +1229,22 @@ public void start() throws IOException {
   // Allow OM to start as Http Server failure is not fatal.
   LOG.error("OM HttpServer failed to start.", ex);
 }
-omRpcServer.start();
-isOmRpcServerRunning = true;
 
+if (!prepareForUpgrade) {
+  omRpcServer.start();
+  isOmRpcServerRunning = true;
+}

Review comment:
   During prepareForUpgrade, the RPC server is not started. So we should 
also have a corresponding command to trigger a restart of the RPC server.








[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1338: HDDS-4023. Delete closed container after all blocks have been deleted.

2020-09-17 Thread GitBox


ChenSammi commented on pull request #1338:
URL: https://github.com/apache/hadoop-ozone/pull/1338#issuecomment-694037951


   After a second thought, deleting the container record in the SCM DB immediately 
while keeping it in memory may be a better and cleaner choice. So if there is a stale 
container replica, it can be deleted based on the in-memory information. And next 
time when SCM starts, SCM doesn't need to handle DELETED containers anymore. 






[GitHub] [hadoop-ozone] captainzmc commented on pull request #1431: HDDS-4254. Bucket space: add usedBytes and update it when create and delete key.

2020-09-17 Thread GitBox


captainzmc commented on pull request #1431:
URL: https://github.com/apache/hadoop-ozone/pull/1431#issuecomment-693957270


   hi @ChenSammi @xiaoyuyao, this PR is based on #1296. Could you help review 
this?


