[jira] [Created] (HDDS-2455) Implement MiniOzoneHAClusterImpl#getOMLeader

2019-11-09 Thread Siyao Meng (Jira)
Siyao Meng created HDDS-2455:


 Summary: Implement MiniOzoneHAClusterImpl#getOMLeader
 Key: HDDS-2455
 URL: https://issues.apache.org/jira/browse/HDDS-2455
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Siyao Meng
Assignee: Siyao Meng






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2455) Implement MiniOzoneHAClusterImpl#getOMLeader

2019-11-09 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-2455:
-
Description: Implement MiniOzoneHAClusterImpl#getOMLeader and use it.
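
A minimal sketch of what such a helper might look like (assuming MiniOzoneHAClusterImpl keeps its OzoneManager instances in a list and that an OzoneManager can report whether it currently leads; both are assumptions for illustration, not the committed patch):
{code:java}
// Hypothetical sketch only, not the committed patch.
public OzoneManager getOMLeader() {
  OzoneManager leader = null;
  for (OzoneManager om : this.ozoneManagers) {  // assumed List<OzoneManager>
    if (om.isLeaderReady()) {                   // assumed leader check
      if (leader != null) {
        // Two OMs claim leadership; election has not settled yet.
        return null;
      }
      leader = om;
    }
  }
  return leader;
}
{code}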

> Implement MiniOzoneHAClusterImpl#getOMLeader
> 
>
> Key: HDDS-2455
> URL: https://issues.apache.org/jira/browse/HDDS-2455
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> Implement MiniOzoneHAClusterImpl#getOMLeader and use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2455) Implement MiniOzoneHAClusterImpl#getOMLeader

2019-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2455?focusedWorklogId=340914&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-340914
 ]

ASF GitHub Bot logged work on HDDS-2455:


Author: ASF GitHub Bot
Created on: 09/Nov/19 12:45
Start Date: 09/Nov/19 12:45
Worklog Time Spent: 10m 
  Work Description: smengcl commented on pull request #137: HDDS-2455. 
Implement MiniOzoneHAClusterImpl#getOMLeader
URL: https://github.com/apache/hadoop-ozone/pull/137
 
 
   ## What changes were proposed in this pull request?
   
   Implement MiniOzoneHAClusterImpl#getOMLeader
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2455
   
   ## How was this patch tested?
   
   1. Added a new test class.
   2. Ran the two previous unit tests that I changed to use the new 
method; both passed: `TestOzoneFsHAURLs#testWithQualifiedDefaultFS` and 
`TestOzoneShellHA#testOzoneShCmdURIs`.
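
   For illustration, such a test would typically wait for leader election to settle before asserting; a hedged usage sketch of the new helper (the timeout values are arbitrary):
{code:java}
// Hypothetical usage sketch in a test, assuming getOMLeader() exists.
GenericTestUtils.waitFor(() -> cluster.getOMLeader() != null, 200, 60000);
OzoneManager leaderOM = cluster.getOMLeader();
{code}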
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 340914)
Remaining Estimate: 0h
Time Spent: 10m

> Implement MiniOzoneHAClusterImpl#getOMLeader
> 
>
> Key: HDDS-2455
> URL: https://issues.apache.org/jira/browse/HDDS-2455
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Implement MiniOzoneHAClusterImpl#getOMLeader and use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2455) Implement MiniOzoneHAClusterImpl#getOMLeader

2019-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2455:
-
Labels: pull-request-available  (was: )

> Implement MiniOzoneHAClusterImpl#getOMLeader
> 
>
> Key: HDDS-2455
> URL: https://issues.apache.org/jira/browse/HDDS-2455
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>
> Implement MiniOzoneHAClusterImpl#getOMLeader and use it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14975) Add CR for SetECPolicyCommand usage

2019-11-09 Thread Fei Hui (Jira)
Fei Hui created HDFS-14975:
--

 Summary: Add CR for SetECPolicyCommand usage
 Key: HDFS-14975
 URL: https://issues.apache.org/jira/browse/HDFS-14975
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.3.0
Reporter: Fei Hui


*bin/hdfs ec -help* outputs the following message
{quote}
[-listPolicies]

Get the list of all erasure coding policies.

[-addPolicies -policyFile <file>]

Add a list of user defined erasure coding policies.
<file>  The path of the xml file which defines the EC policies to add

[-getPolicy -path <path>]

Get the erasure coding policy of a file/directory.

<path>  The path of the file/directory for getting the erasure coding policy

[-removePolicy -policy <policy>]

Remove an user defined erasure coding policy.
<policy>  The name of the erasure coding policy

[-setPolicy -path <path> [-policy <policy>] [-replicate]]

Set the erasure coding policy for a file/directory.

<path>  The path of the file/directory to set the erasure coding policy
<policy>  The name of the erasure coding policy
-replicate  force 3x replication scheme on the directory

-replicate and -policy are optional arguments. They cannot been used at the
same time
[-unsetPolicy -path <path>]

Unset the erasure coding policy for a directory.

<path>  The path of the directory from which the erasure coding policy will be
unset.

[-listCodecs]

Get the list of supported erasure coding codecs and coders.
A coder is an implementation of a codec. A codec can have different
implementations, thus different coders.
The coders for a codec are listed in a fall back order.

[-enablePolicy -policy <policy>]

Enable the erasure coding policy.

<policy>  The name of the erasure coding policy

[-disablePolicy -policy <policy>]

Disable the erasure coding policy.

<policy>  The name of the erasure coding policy

[-verifyClusterSetup [-policy <policy>...]]

Verify if the cluster setup can support all enabled erasure coding policies. If
optional parameter -policy is specified, verify if the cluster setup can
support the given policy.

{quote}
The output format is not user-friendly: the SetECPolicyCommand usage runs
straight into the UnsetECPolicyCommand usage.
We should add a CR between SetECPolicyCommand and UnsetECPolicyCommand, as the
other commands have.
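
A hedged sketch of the direction such a fix could take (method and helper names follow the existing ECAdmin.java pattern but are assumptions for illustration, not necessarily the eventual patch):
{code:java}
// Hypothetical sketch: append the missing newline at the end of
// SetECPolicyCommand's long usage so [-unsetPolicy ...] starts on a
// fresh line, as the other subcommands already do.
@Override
public String getLongUsage() {
  TableListing listing = AdminHelper.getOptionDescriptionListing();
  // ... option rows elided ...
  return getShortUsage() + "\n"
      + "Set the erasure coding policy for a file/directory.\n\n"
      + listing.toString() + "\n";  // the added CR
}
{code}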




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14975) Add CR for SetECPolicyCommand usage

2019-11-09 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui reassigned HDFS-14975:
--

Assignee: Fei Hui

> Add CR for SetECPolicyCommand usage
> ---
>
> Key: HDFS-14975
> URL: https://issues.apache.org/jira/browse/HDFS-14975
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
>
> *bin/hdfs ec -help* outputs the following message
> {quote}
> [-listPolicies]
> Get the list of all erasure coding policies.
> [-addPolicies -policyFile <file>]
> Add a list of user defined erasure coding policies.
> <file>  The path of the xml file which defines the EC policies to add
> [-getPolicy -path <path>]
> Get the erasure coding policy of a file/directory.
> <path>  The path of the file/directory for getting the erasure coding policy
> [-removePolicy -policy <policy>]
> Remove an user defined erasure coding policy.
> <policy>  The name of the erasure coding policy
> [-setPolicy -path <path> [-policy <policy>] [-replicate]]
> Set the erasure coding policy for a file/directory.
> <path>  The path of the file/directory to set the erasure coding policy
> <policy>  The name of the erasure coding policy
> -replicate  force 3x replication scheme on the directory
> -replicate and -policy are optional arguments. They cannot been used at the
> same time
> [-unsetPolicy -path <path>]
> Unset the erasure coding policy for a directory.
> <path>  The path of the directory from which the erasure coding policy will
> be unset.
> [-listCodecs]
> Get the list of supported erasure coding codecs and coders.
> A coder is an implementation of a codec. A codec can have different
> implementations, thus different coders.
> The coders for a codec are listed in a fall back order.
> [-enablePolicy -policy <policy>]
> Enable the erasure coding policy.
> <policy>  The name of the erasure coding policy
> [-disablePolicy -policy <policy>]
> Disable the erasure coding policy.
> <policy>  The name of the erasure coding policy
> [-verifyClusterSetup [-policy <policy>...]]
> Verify if the cluster setup can support all enabled erasure coding policies.
> If optional parameter -policy is specified, verify if the cluster setup can
> support the given policy.
> {quote}
> The output format is not user-friendly: the SetECPolicyCommand usage runs
> straight into the UnsetECPolicyCommand usage.
> We should add a CR between SetECPolicyCommand and UnsetECPolicyCommand, as
> the other commands have.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14975) Add CR for SetECPolicyCommand usage

2019-11-09 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970832#comment-16970832
 ] 

Fei Hui commented on HDFS-14975:


Uploaded a simple patch.

> Add CR for SetECPolicyCommand usage
> ---
>
> Key: HDFS-14975
> URL: https://issues.apache.org/jira/browse/HDFS-14975
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
> Attachments: HDFS-14975.001.patch
>
>
> *bin/hdfs ec -help* outputs the following message
> {quote}
> [-listPolicies]
> Get the list of all erasure coding policies.
> [-addPolicies -policyFile <file>]
> Add a list of user defined erasure coding policies.
> <file>  The path of the xml file which defines the EC policies to add
> [-getPolicy -path <path>]
> Get the erasure coding policy of a file/directory.
> <path>  The path of the file/directory for getting the erasure coding policy
> [-removePolicy -policy <policy>]
> Remove an user defined erasure coding policy.
> <policy>  The name of the erasure coding policy
> [-setPolicy -path <path> [-policy <policy>] [-replicate]]
> Set the erasure coding policy for a file/directory.
> <path>  The path of the file/directory to set the erasure coding policy
> <policy>  The name of the erasure coding policy
> -replicate  force 3x replication scheme on the directory
> -replicate and -policy are optional arguments. They cannot been used at the
> same time
> [-unsetPolicy -path <path>]
> Unset the erasure coding policy for a directory.
> <path>  The path of the directory from which the erasure coding policy will
> be unset.
> [-listCodecs]
> Get the list of supported erasure coding codecs and coders.
> A coder is an implementation of a codec. A codec can have different
> implementations, thus different coders.
> The coders for a codec are listed in a fall back order.
> [-enablePolicy -policy <policy>]
> Enable the erasure coding policy.
> <policy>  The name of the erasure coding policy
> [-disablePolicy -policy <policy>]
> Disable the erasure coding policy.
> <policy>  The name of the erasure coding policy
> [-verifyClusterSetup [-policy <policy>...]]
> Verify if the cluster setup can support all enabled erasure coding policies.
> If optional parameter -policy is specified, verify if the cluster setup can
> support the given policy.
> {quote}
> The output format is not user-friendly: the SetECPolicyCommand usage runs
> straight into the UnsetECPolicyCommand usage.
> We should add a CR between SetECPolicyCommand and UnsetECPolicyCommand, as
> the other commands have.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14975) Add CR for SetECPolicyCommand usage

2019-11-09 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14975:
---
Attachment: HDFS-14975.001.patch

> Add CR for SetECPolicyCommand usage
> ---
>
> Key: HDFS-14975
> URL: https://issues.apache.org/jira/browse/HDFS-14975
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
> Attachments: HDFS-14975.001.patch
>
>
> *bin/hdfs ec -help* outputs the following message
> {quote}
> [-listPolicies]
> Get the list of all erasure coding policies.
> [-addPolicies -policyFile <file>]
> Add a list of user defined erasure coding policies.
> <file>  The path of the xml file which defines the EC policies to add
> [-getPolicy -path <path>]
> Get the erasure coding policy of a file/directory.
> <path>  The path of the file/directory for getting the erasure coding policy
> [-removePolicy -policy <policy>]
> Remove an user defined erasure coding policy.
> <policy>  The name of the erasure coding policy
> [-setPolicy -path <path> [-policy <policy>] [-replicate]]
> Set the erasure coding policy for a file/directory.
> <path>  The path of the file/directory to set the erasure coding policy
> <policy>  The name of the erasure coding policy
> -replicate  force 3x replication scheme on the directory
> -replicate and -policy are optional arguments. They cannot been used at the
> same time
> [-unsetPolicy -path <path>]
> Unset the erasure coding policy for a directory.
> <path>  The path of the directory from which the erasure coding policy will
> be unset.
> [-listCodecs]
> Get the list of supported erasure coding codecs and coders.
> A coder is an implementation of a codec. A codec can have different
> implementations, thus different coders.
> The coders for a codec are listed in a fall back order.
> [-enablePolicy -policy <policy>]
> Enable the erasure coding policy.
> <policy>  The name of the erasure coding policy
> [-disablePolicy -policy <policy>]
> Disable the erasure coding policy.
> <policy>  The name of the erasure coding policy
> [-verifyClusterSetup [-policy <policy>...]]
> Verify if the cluster setup can support all enabled erasure coding policies.
> If optional parameter -policy is specified, verify if the cluster setup can
> support the given policy.
> {quote}
> The output format is not user-friendly: the SetECPolicyCommand usage runs
> straight into the UnsetECPolicyCommand usage.
> We should add a CR between SetECPolicyCommand and UnsetECPolicyCommand, as
> the other commands have.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14975) Add CR for SetECPolicyCommand usage

2019-11-09 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14975:
---
Status: Patch Available  (was: Open)

> Add CR for SetECPolicyCommand usage
> ---
>
> Key: HDFS-14975
> URL: https://issues.apache.org/jira/browse/HDFS-14975
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Trivial
> Attachments: HDFS-14975.001.patch
>
>
> *bin/hdfs ec -help* outputs the following message
> {quote}
> [-listPolicies]
> Get the list of all erasure coding policies.
> [-addPolicies -policyFile <file>]
> Add a list of user defined erasure coding policies.
> <file>  The path of the xml file which defines the EC policies to add
> [-getPolicy -path <path>]
> Get the erasure coding policy of a file/directory.
> <path>  The path of the file/directory for getting the erasure coding policy
> [-removePolicy -policy <policy>]
> Remove an user defined erasure coding policy.
> <policy>  The name of the erasure coding policy
> [-setPolicy -path <path> [-policy <policy>] [-replicate]]
> Set the erasure coding policy for a file/directory.
> <path>  The path of the file/directory to set the erasure coding policy
> <policy>  The name of the erasure coding policy
> -replicate  force 3x replication scheme on the directory
> -replicate and -policy are optional arguments. They cannot been used at the
> same time
> [-unsetPolicy -path <path>]
> Unset the erasure coding policy for a directory.
> <path>  The path of the directory from which the erasure coding policy will
> be unset.
> [-listCodecs]
> Get the list of supported erasure coding codecs and coders.
> A coder is an implementation of a codec. A codec can have different
> implementations, thus different coders.
> The coders for a codec are listed in a fall back order.
> [-enablePolicy -policy <policy>]
> Enable the erasure coding policy.
> <policy>  The name of the erasure coding policy
> [-disablePolicy -policy <policy>]
> Disable the erasure coding policy.
> <policy>  The name of the erasure coding policy
> [-verifyClusterSetup [-policy <policy>...]]
> Verify if the cluster setup can support all enabled erasure coding policies.
> If optional parameter -policy is specified, verify if the cluster setup can
> support the given policy.
> {quote}
> The output format is not user-friendly: the SetECPolicyCommand usage runs
> straight into the UnsetECPolicyCommand usage.
> We should add a CR between SetECPolicyCommand and UnsetECPolicyCommand, as
> the other commands have.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14975) Add CR for SetECPolicyCommand usage

2019-11-09 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14975:
---
Description: 
*bin/hdfs ec -help* outputs the following message
{quote}
[-listPolicies]

Get the list of all erasure coding policies.

[-addPolicies -policyFile <file>]

Add a list of user defined erasure coding policies.
<file>  The path of the xml file which defines the EC policies to add

[-getPolicy -path <path>]

Get the erasure coding policy of a file/directory.

<path>  The path of the file/directory for getting the erasure coding policy

[-removePolicy -policy <policy>]

Remove an user defined erasure coding policy.
<policy>  The name of the erasure coding policy

[-setPolicy -path <path> [-policy <policy>] [-replicate]]

Set the erasure coding policy for a file/directory.

<path>  The path of the file/directory to set the erasure coding policy
<policy>  The name of the erasure coding policy
-replicate  force 3x replication scheme on the directory

-replicate and -policy are optional arguments. They cannot been used at the
same time
[-unsetPolicy -path <path>]

Unset the erasure coding policy for a directory.

<path>  The path of the directory from which the erasure coding policy will be
unset.

[-listCodecs]

Get the list of supported erasure coding codecs and coders.
A coder is an implementation of a codec. A codec can have different
implementations, thus different coders.
The coders for a codec are listed in a fall back order.

[-enablePolicy -policy <policy>]

Enable the erasure coding policy.

<policy>  The name of the erasure coding policy

[-disablePolicy -policy <policy>]

Disable the erasure coding policy.

<policy>  The name of the erasure coding policy

[-verifyClusterSetup [-policy <policy>...]]

Verify if the cluster setup can support all enabled erasure coding policies. If
optional parameter -policy is specified, verify if the cluster setup can
support the given policy.

{quote}
The output format is not user-friendly: the SetECPolicyCommand usage runs
straight into the UnsetECPolicyCommand usage.
We should add a CR between SetECPolicyCommand and UnsetECPolicyCommand, as the
other commands have, at the position marked -here- below:
{quote}
[-setPolicy -path <path> [-policy <policy>] [-replicate]]

Set the erasure coding policy for a file/directory.

<path>  The path of the file/directory to set the erasure coding policy
<policy>  The name of the erasure coding policy
-replicate  force 3x replication scheme on the directory

-replicate and -policy are optional arguments. They cannot been used at the
same time
-here-
[-unsetPolicy -path <path>]

Unset the erasure coding policy for a directory.

<path>  The path of the directory from which the erasure coding policy will be
unset.

{quote}


  was:
*bin/hdfs ec -help* outputs the following message
{quote}
[-listPolicies]

Get the list of all erasure coding policies.

[-addPolicies -policyFile <file>]

Add a list of user defined erasure coding policies.
<file>  The path of the xml file which defines the EC policies to add

[-getPolicy -path <path>]

Get the erasure coding policy of a file/directory.

<path>  The path of the file/directory for getting the erasure coding policy

[-removePolicy -policy <policy>]

Remove an user defined erasure coding policy.
<policy>  The name of the erasure coding policy

[-setPolicy -path <path> [-policy <policy>] [-replicate]]

Set the erasure coding policy for a file/directory.

<path>  The path of the file/directory to set the erasure coding policy
<policy>  The name of the erasure coding policy
-replicate  force 3x replication scheme on the directory

-replicate and -policy are optional arguments. They cannot been used at the
same time
[-unsetPolicy -path <path>]

Unset the erasure coding policy for a directory.

<path>  The path of the directory from which the erasure coding policy will be
unset.

[-listCodecs]

Get the list of supported erasure coding codecs and coders.
A coder is an implementation of a codec. A codec can have different
implementations, thus different coders.
The coders for a codec are listed in a fall back order.

[-enablePolicy -policy <policy>]

Enable the erasure coding policy.

<policy>  The name of the erasure coding policy

[-disablePolicy -policy <policy>]

Disable the erasure coding policy.

<policy>  The name of the erasure coding policy

[-verifyClusterSetup [-policy <policy>...]]

Verify if the cluster setup can support all enabled erasure coding policies. If
optional parameter -policy is specified, verify if the cluster setup can
support the given policy.

{quote}
The output format is not user-friendly: the SetECPolicyCommand usage runs
straight into the UnsetECPolicyCommand usage.
We should add a CR between SetECPolicyCommand and UnsetECPolicyCommand, as the
other commands have.



> Add CR for SetECPolicyCommand usage
> ---
>
> Key: HDFS-14975
> URL: https://issues.apache.org/jira/browse/HDFS-14975
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Report

[jira] [Commented] (HDFS-14852) Remove of LowRedundancyBlocks do NOT remove the block from all queues

2019-11-09 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970838#comment-16970838
 ] 

Fei Hui commented on HDFS-14852:


Ping [~kihwal] [~weichiu]

> Remove of LowRedundancyBlocks do NOT remove the block from all queues
> -
>
> Key: HDFS-14852
> URL: https://issues.apache.org/jira/browse/HDFS-14852
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ec
>Affects Versions: 3.2.0, 3.0.3, 3.1.2, 3.3.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: CorruptBlocksMismatch.png, HDFS-14852.001.patch, 
> HDFS-14852.002.patch, HDFS-14852.003.patch, HDFS-14852.004.patch
>
>
> LowRedundancyBlocks.java
> {code:java}
> // Some comments here
> if(priLevel >= 0 && priLevel < LEVEL
> && priorityQueues.get(priLevel).remove(block)) {
>   NameNode.blockStateChangeLog.debug(
>   "BLOCK* NameSystem.LowRedundancyBlock.remove: Removing block {}"
>   + " from priority queue {}",
>   block, priLevel);
>   decrementBlockStat(block, priLevel, oldExpectedReplicas);
>   return true;
> } else {
>   // Try to remove the block from all queues if the block was
>   // not found in the queue for the given priority level.
>   for (int i = 0; i < LEVEL; i++) {
> if (i != priLevel && priorityQueues.get(i).remove(block)) {
>   NameNode.blockStateChangeLog.debug(
>   "BLOCK* NameSystem.LowRedundancyBlock.remove: Removing block" +
>   " {} from priority queue {}", block, i);
>   decrementBlockStat(block, i, oldExpectedReplicas);
>   return true;
> }
>   }
> }
> return false;
>   }
> {code}
> The source code is above; the comment reads:
> {quote}
>   // Try to remove the block from all queues if the block was
>   // not found in the queue for the given priority level.
> {quote}
> The function "remove" does NOT remove the block from all queues.
> The add function in LowRedundancyBlocks.java is called in several places, so 
> one block may end up in two or more queues.
> We found that the corrupt-block count does not match the corrupt-file count 
> on the NN web UI; this may be related.
> Uploading an initial patch.
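
A hedged sketch of the direction such a fix could take (this is an illustration, not the attached patch): scan every priority queue instead of returning after the first successful removal, so a block that was added at more than one level is fully removed.
{code:java}
// Hypothetical sketch only.
boolean removed = false;
for (int i = 0; i < LEVEL; i++) {
  if (priorityQueues.get(i).remove(block)) {
    decrementBlockStat(block, i, oldExpectedReplicas);
    removed = true;  // keep scanning the remaining queues
  }
}
return removed;
{code}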



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14802) The feature of protect directories should be used in RenameOp

2019-11-09 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970849#comment-16970849
 ] 

Fei Hui commented on HDFS-14802:


Ping [~ayushtkn] [~ste...@apache.org]
If you have time, please take a look so we can move forward. Many thanks!

> The feature of protect directories should be used in RenameOp
> -
>
> Key: HDFS-14802
> URL: https://issues.apache.org/jira/browse/HDFS-14802
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: hdfs
>Affects Versions: 3.0.4, 3.3.0, 3.2.1, 3.1.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HDFS-14802.001.patch, HDFS-14802.002.patch, 
> HDFS-14802.003.patch, HDFS-14802.004.patch
>
>
> Now we can set fs.protected.directories to prevent users from deleting 
> important directories, but users can still delete those directories by 
> working around the limitation:
> 1. Rename the directories, then delete them.
> 2. Move the directories to the trash, and the namenode will delete them.
> So I think we should apply the protected-directories feature in RenameOp too.
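
A hedged sketch of the idea (the names follow the existing delete-path check in Hadoop, but this is an illustration, not the attached patch; the config switch is an assumption):
{code:java}
// Hypothetical sketch: run the same protected-directory check that the
// delete path uses before applying a rename in FSDirRenameOp.
if (fsd.isProtectedSubDirectoriesEnable()) {  // assumed config switch
  DFSUtil.checkProtectedDescendants(fsd, srcIIP);
}
{code}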



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14972) HDFS: fsck "-blockId" option not giving expected output

2019-11-09 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970854#comment-16970854
 ] 

Fei Hui commented on HDFS-14972:


[~SouryakantaDwivedy] 
Thanks for reporting this.
Could you please run the test below and share the output:
{code}
hdfs fsck /test3/file3 -files -blocks -locations
{code}
From your output alone, I cannot be sure that /test3/file3 (the file your test 
block belongs to) is corrupt.
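
(For reference, the block-level query the report concerns looks like the following; the block ID below is a placeholder, not one from the report:)
{code}
hdfs fsck -blockId blk_1073741825
{code}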

> HDFS: fsck "-blockId" option not giving expected output
> ---
>
> Key: HDFS-14972
> URL: https://issues.apache.org/jira/browse/HDFS-14972
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.1.2
> Environment: HA Cluster
>Reporter: Souryakanta Dwivedy
>Priority: Major
> Attachments: image-2019-11-08-19-10-18-057.png, 
> image-2019-11-08-19-12-21-307.png
>
>
> HDFS: fsck "-blockId" option not giving expected output
> HDFS fsck displays the correct output for corrupted files and blocks: 
> !image-2019-11-08-19-10-18-057.png!
>  
> The HDFS fsck -blockId command does not give the expected output for a 
> corrupted replica: 
> !image-2019-11-08-19-12-21-307.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14720) DataNode shouldn't report block as bad block if the block length is Long.MAX_VALUE.

2019-11-09 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970878#comment-16970878
 ] 

Hudson commented on HDFS-14720:
---

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17622 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17622/])
HDFS-14720. DataNode shouldn't report block as bad block if the block 
(surendralilhore: rev 320008bb7cc558b1300398178bd2f48cbf0b6c80)
* (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/ReplicationWork.java


> DataNode shouldn't report block as bad block if the block length is 
> Long.MAX_VALUE.
> ---
>
> Key: HDFS-14720
> URL: https://issues.apache.org/jira/browse/HDFS-14720
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Attachments: HDFS-14720.001.patch, HDFS-14720.002.patch, 
> HDFS-14720.003.patch
>
>
> {noformat}
> 2019-08-11 09:15:58,092 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Can't replicate block 
> BP-725378529-10.0.0.8-1410027444173:blk_13276745777_1112363330268 because 
> on-disk length 175085 is shorter than NameNode recorded length 
> 9223372036854775807.{noformat}
> If the block length is Long.MAX_VALUE, it means the file this block belongs 
> to was deleted from the namenode and the DN received the command after the 
> file's deletion. In this case the command should be ignored.
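
A hedged sketch of the idea (the committed change landed in ReplicationWork.java per the commit above; this illustration is not the exact patch):
{code:java}
// Hypothetical sketch: treat Long.MAX_VALUE as the "file already
// deleted" sentinel and skip the work instead of reporting a bad block.
if (block.getNumBytes() == Long.MAX_VALUE) {
  LOG.debug("Ignoring block {}: NameNode-recorded length is Long.MAX_VALUE,"
      + " the file was likely deleted.", block);
  return;
}
{code}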



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14720) DataNode shouldn't report block as bad block if the block length is Long.MAX_VALUE.

2019-11-09 Thread Surendra Singh Lilhore (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Surendra Singh Lilhore updated HDFS-14720:
--
Fix Version/s: 3.2.2
   3.1.4
   3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks [~weichiu] and [~hexiaoqiao] for the review.

Thanks [~hemanthboyina] for the contribution.

Committed to trunk, branch-3.2, branch-3.1.

> DataNode shouldn't report block as bad block if the block length is 
> Long.MAX_VALUE.
> ---
>
> Key: HDFS-14720
> URL: https://issues.apache.org/jira/browse/HDFS-14720
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Affects Versions: 3.1.1
>Reporter: Surendra Singh Lilhore
>Assignee: hemanthboyina
>Priority: Major
> Fix For: 3.3.0, 3.1.4, 3.2.2
>
> Attachments: HDFS-14720.001.patch, HDFS-14720.002.patch, 
> HDFS-14720.003.patch
>
>
> {noformat}
> 2019-08-11 09:15:58,092 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Can't replicate block 
> BP-725378529-10.0.0.8-1410027444173:blk_13276745777_1112363330268 because 
> on-disk length 175085 is shorter than NameNode recorded length 
> 9223372036854775807.{noformat}
> If the block length is Long.MAX_VALUE, it means the file this block belongs 
> to was deleted from the namenode and the DN received the command after the 
> file's deletion. In this case the command should be ignored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14975) Add CR for SetECPolicyCommand usage

2019-11-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14975?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970900#comment-16970900
 ] 

Hadoop QA commented on HDFS-14975:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 28s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 37s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 99m 
28s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}163m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14975 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12985425/HDFS-14975.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7128cd6bc9ad 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3d24930 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28284/testReport/ |
| Max. process+thread count | 2686 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28284/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Add CR for SetECPolicyCommand usage
> ---

[jira] [Commented] (HDFS-14974) RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port

2019-11-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970904#comment-16970904
 ] 

Íñigo Goiri commented on HDFS-14974:


Yes, this is only related to tests.
It also has issues if two tests using the default ports run in parallel.

In a regular environment, the cluster admin would just set the proper address 
and port.
BTW, now that I'm seeing this:
{code}
conf.setInt(DFS_ROUTER_HANDLER_COUNT_KEY, 10);
{code}
This is a little overkill for a unit test; something like 2 or 4 threads should 
be more than enough.
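
A hedged sketch of what the test configuration could look like (the key names are from RBFConfigKeys; treat the exact set as an assumption, not the attached patch):
{code:java}
// Hypothetical sketch: bind every Router endpoint to port 0 so the OS
// picks a free port, and keep the handler count small for a unit test.
Configuration conf = new Configuration();
conf.set(RBFConfigKeys.DFS_ROUTER_RPC_ADDRESS_KEY, "0.0.0.0:0");
conf.set(RBFConfigKeys.DFS_ROUTER_ADMIN_ADDRESS_KEY, "0.0.0.0:0");
conf.set(RBFConfigKeys.DFS_ROUTER_HTTP_ADDRESS_KEY, "0.0.0.0:0");
conf.setInt(RBFConfigKeys.DFS_ROUTER_HANDLER_COUNT_KEY, 2);
{code}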

> RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port
> ---
>
> Key: HDFS-14974
> URL: https://issues.apache.org/jira/browse/HDFS-14974
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Priority: Major
>
> Currently, {{TestRouterSecurityManager#testCreateCredentials}} creates a 
> Router with the default ports. However, these ports might already be in use. 
> We should set them to :0 so they are assigned dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14974) RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port

2019-11-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970911#comment-16970911
 ] 

Íñigo Goiri commented on HDFS-14974:


I attached [^HDFS-14974.000.patch] making it a little more general than just 
{{TestRouterSecurityManager}}.

> RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port
> ---
>
> Key: HDFS-14974
> URL: https://issues.apache.org/jira/browse/HDFS-14974
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14974.000.patch
>
>
> Currently, {{TestRouterSecurityManager#testCreateCredentials}} creates a 
> Router with the default ports. However, these ports might already be in use. 
> We should set them to :0 so they are assigned dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14974) RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port

2019-11-09 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14974:
---
Attachment: HDFS-14974.000.patch

> RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port
> ---
>
> Key: HDFS-14974
> URL: https://issues.apache.org/jira/browse/HDFS-14974
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14974.000.patch
>
>
> Currently, {{TestRouterSecurityManager#testCreateCredentials}} creates a 
> Router with the default ports. However, these ports might already be in use. 
> We should set them to :0 so they are assigned dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14974) RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port

2019-11-09 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14974:
---
Status: Patch Available  (was: Open)

> RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port
> ---
>
> Key: HDFS-14974
> URL: https://issues.apache.org/jira/browse/HDFS-14974
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14974.000.patch
>
>
> Currently, {{TestRouterSecurityManager#testCreateCredentials}} creates a 
> Router with the default ports. However, these ports might already be in use. 
> We should set them to :0 so they are assigned dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-14974) RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port

2019-11-09 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri reassigned HDFS-14974:
--

Assignee: Íñigo Goiri

> RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port
> ---
>
> Key: HDFS-14974
> URL: https://issues.apache.org/jira/browse/HDFS-14974
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14974.000.patch
>
>
> Currently, {{TestRouterSecurityManager#testCreateCredentials}} creates a 
> Router with the default ports. However, these ports might already be in use. 
> We should set them to :0 so they are assigned dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14974) RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port

2019-11-09 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970921#comment-16970921
 ] 

Hadoop QA commented on HDFS-14974:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 39s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 33s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
13s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 27s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14974 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12985436/HDFS-14974.000.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 8208ed42258c 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 320008b |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28285/testReport/ |
| Max. process+thread count | 2705 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | 
https://builds.apache.org/job/PreCommit-HDFS-Build/28285/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port
> ---
>

[jira] [Commented] (HDFS-14974) RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port

2019-11-09 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970924#comment-16970924
 ] 

Ayush Saxena commented on HDFS-14974:
-

Thanx [~elgoiri]  for the fix. I too prefer being little general rather being 
fixing for one.
Fix LGTM +1

> RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port
> ---
>
> Key: HDFS-14974
> URL: https://issues.apache.org/jira/browse/HDFS-14974
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14974.000.patch
>
>
> Currently, {{TestRouterSecurityManager#testCreateCredentials}} creates a 
> Router with the default ports. However, these ports might already be in use. 
> We should set them to :0 so they are assigned dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HDFS-14974) RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port

2019-11-09 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970924#comment-16970924
 ] 

Ayush Saxena edited comment on HDFS-14974 at 11/9/19 7:43 PM:
--

Thanx [~elgoiri] for the fix. I too prefer being a little more general rather 
than just fixing for one case.
 Fix LGTM +1


was (Author: ayushtkn):
Thanx [~elgoiri]  for the fix. I too prefer being little general rather being 
fixing for one.
Fix LGTM +1

> RBF: TestRouterSecurityManager#testCreateCredentials should use :0 for port
> ---
>
> Key: HDFS-14974
> URL: https://issues.apache.org/jira/browse/HDFS-14974
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14974.000.patch
>
>
> Currently, {{TestRouterSecurityManager#testCreateCredentials}} creates a 
> Router with the default ports. However, these ports might already be in use. 
> We should set them to :0 so they are assigned dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14974) RBF: Make tests use free ports

2019-11-09 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HDFS-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri updated HDFS-14974:
---
Summary: RBF: Make tests use free ports  (was: RBF: 
TestRouterSecurityManager#testCreateCredentials should use :0 for port)

> RBF: Make tests use free ports
> --
>
> Key: HDFS-14974
> URL: https://issues.apache.org/jira/browse/HDFS-14974
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14974.000.patch
>
>
> Currently, {{TestRouterSecurityManager#testCreateCredentials}} creates a 
> Router with the default ports. However, these ports might already be in use. 
> We should set them to :0 so they are assigned dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14974) RBF: Make tests use free ports

2019-11-09 Thread Jira


[ 
https://issues.apache.org/jira/browse/HDFS-14974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970930#comment-16970930
 ] 

Íñigo Goiri commented on HDFS-14974:


I changed the title of the JIRA to make it more general. 

> RBF: Make tests use free ports
> --
>
> Key: HDFS-14974
> URL: https://issues.apache.org/jira/browse/HDFS-14974
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Íñigo Goiri
>Assignee: Íñigo Goiri
>Priority: Major
> Attachments: HDFS-14974.000.patch
>
>
> Currently, {{TestRouterSecurityManager#testCreateCredentials}} creates a 
> Router with the default ports. However, these ports might already be in use. 
> We should set them to :0 so they are assigned dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2417) Add the list trash command to the client side

2019-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2417:
-
Labels: pull-request-available  (was: )

> Add the list trash command to the client side
> -
>
> Key: HDDS-2417
> URL: https://issues.apache.org/jira/browse/HDDS-2417
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Matthew Sharp
>Priority: Major
>  Labels: pull-request-available
>
> Add the list-trash command to the protobuf files and to the client side 
> translator.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2417) Add the list trash command to the client side

2019-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2417?focusedWorklogId=340981&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-340981
 ]

ASF GitHub Bot logged work on HDDS-2417:


Author: ASF GitHub Bot
Created on: 09/Nov/19 22:09
Start Date: 09/Nov/19 22:09
Worklog Time Spent: 10m 
  Work Description: mbsharp commented on pull request #138: HDDS-2417 Add 
the list trash command to the client side
URL: https://github.com/apache/hadoop-ozone/pull/138
 
 
   ## What changes were proposed in this pull request?
   This is the first commit for a new trash feature in Ozone. This PR adds 
client-side changes to support a list trash command, which will show deleted 
keys from the deleted keys table.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2417
   
   ## How was this patch tested?
   
   New tests will be added as this is built out further with the core logic. 
For the initial changes in this PR, checkstyle and a local mvn build passed.
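
   As a rough illustration of the shape of such a change (every name below is an assumption for illustration, not the actual protobuf or translator code in this PR):
{code:java}
// Hypothetical sketch of a client-side translator method for a new
// ListTrash RPC, assuming a ListTrashRequest/ListTrashResponse pair is
// added to the OzoneManager protobuf definition.
public List<RepeatedOmKeyInfo> listTrash(String volumeName,
    String bucketName, String startKeyName, String keyPrefix,
    int maxKeys) throws IOException {
  ListTrashRequest request = ListTrashRequest.newBuilder()
      .setVolumeName(volumeName)
      .setBucketName(bucketName)
      .setStartKeyName(startKeyName)
      .setKeyPrefix(keyPrefix)
      .setMaxKeys(maxKeys)
      .build();
  // Submit through the existing OM request pipeline (hypothetical helper)
  // and map the protobuf reply back to client-side key objects.
  ListTrashResponse response = submitListTrashRequest(request);
  List<RepeatedOmKeyInfo> deletedKeys = new ArrayList<>();
  for (RepeatedKeyInfo info : response.getDeletedKeysList()) {
    deletedKeys.add(RepeatedOmKeyInfo.getFromProto(info));
  }
  return deletedKeys;
}
{code}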
   
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 340981)
Remaining Estimate: 0h
Time Spent: 10m

> Add the list trash command to the client side
> -
>
> Key: HDDS-2417
> URL: https://issues.apache.org/jira/browse/HDDS-2417
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: Matthew Sharp
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Add the list-trash command to the protobuf files and to the client side 
> translator.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14648) DeadNodeDetector basic model

2019-11-09 Thread Yiqun Lin (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16971007#comment-16971007
 ] 

Yiqun Lin commented on HDFS-14648:
--

The patch overall looks good, but some places are not very readable. I'd like 
to give some minor comments to improve this first, and will post further 
review comments as I am still reviewing.

*ClientContext.java*
 1. 'Detect the dead datanodes n advance' should be 'Detect the dead datanodes 
in advance,'
 2. Can we add comments for these two methods, since they are frequently 
called in other places?

*DFSClient.java*
 1. Simplify the comment for the method ConcurrentHashMap<DatanodeInfo, 
DatanodeInfo> getDeadNodes(DFSInputStream dfsInputStream)
{noformat}
+  /**
+   * If sharedDeadNodesEnabled is true, return the dead nodes are detected by
+   * all the DFSInputStreams in the same client. Otherwise return the dead
+   * nodes are detected by this DFSInputStream.
+   */
{noformat}
to
{noformat}
+  /**
+   * If sharedDeadNodesEnabled is true, return the dead nodes that are
+   * detected by all the DFSInputStreams in the same client. Otherwise return
+   * the dead nodes that are detected by the given DFSInputStream.
+   */
{noformat}
2. Simplify the comment for the method isDeadNode(DFSInputStream 
dfsInputStream, DatanodeInfo datanodeInfo)
{noformat}
+  /**
+   * If sharedDeadNodesEnabled is true, judgement based on whether this
+   * datanode is included or not in DeadNodeDetector#deadnodes. Otherwise
+   * judgment based on whether it is included or not in
+   * DFSInputStream#deadnodes.
+   */
{noformat}
to
{noformat}
+  /**
+   * If sharedDeadNodesEnabled is true, the judgement is based on whether this
+   * datanode is included in DeadNodeDetector. Otherwise it is based on the
+   * given DFSInputStream.
+   */
{noformat}
3. It would be better to add one additional log here and update the method name.
{code:java}
+  /**
+   * Add given datanode in DeadNodeDetector.
+   */
+  public void addNodeToDeadNodeDetector(DFSInputStream dfsInputStream,
+      DatanodeInfo datanodeInfo) {
+    if (!isSharedDeadNodesEnabled()) {
+      LOG.debug("DeadNode detection is not enabled, skip to add node {}.",
+          datanodeInfo);
+      return;
+    }
+    clientContext.getDeadNodeDetector().addNodeToDetect(dfsInputStream,
+        datanodeInfo);
+  }
{code}
4. Make the similar change for method removeNodeFromDetectByDFSInputStream:
{code:java}
+  /**
+   * Remove given datanode from DeadNodeDetector.
+   */
+  public void removeNodeFromDeadNodeDetector(
+      DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo) {
+    if (!isSharedDeadNodesEnabled()) {
+      LOG.debug("DeadNode detection is not enabled, skip to remove node {}.",
+          datanodeInfo);
+      return;
+    }
+    clientContext.getDeadNodeDetector()
+        .removeNodeFromDetectByDFSInputStream(dfsInputStream, datanodeInfo);
+  }
{code}
5. Update for removeNodeFromDeadNodeDetector:
{code:java}
+  /**
+   * Remove the datanodes that the given blocks are placed on from
+   * DeadNodeDetector.
+   */
+  public void removeNodeFromDetectByDFSInputStream(
+      DFSInputStream dfsInputStream, LocatedBlocks locatedBlocks) {
+    if (!isSharedDeadNodesEnabled() || locatedBlocks == null) {
+      LOG.debug("DeadNode detection is not enabled or given blocks are null, "
+          + "skip to remove nodes.");
+      return;
+    }
+    for (LocatedBlock locatedBlock : locatedBlocks.getLocatedBlocks()) {
+      for (DatanodeInfo datanodeInfo : locatedBlock.getLocations()) {
+        removeNodeFromDetectByDFSInputStream(dfsInputStream, datanodeInfo);
+      }
+    }
+  }
{code}
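For reference, a minimal sketch of the dispatch that items 1 and 2 describe, reusing 
the accessors quoted elsewhere in this patch; DeadNodeDetector#isDeadNode is an 
assumed method name for illustration, not something the patch confirms:
{code:java}
// Sketch only: how the shared/local dispatch could look inside DFSClient.
public boolean isDeadNode(DFSInputStream dfsInputStream,
    DatanodeInfo datanodeInfo) {
  if (isSharedDeadNodesEnabled()) {
    // Shared mode: consult the client-wide DeadNodeDetector.
    return clientContext.getDeadNodeDetector().isDeadNode(datanodeInfo);
  }
  // Local mode: consult only this stream's own dead node set.
  return dfsInputStream.getLocalDeadNodes().containsKey(datanodeInfo);
}
{code}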
*DFSInputStream.java*
 1. Update the method name getDfsClient to getDFSClient:
{code:java}
+  public DFSClient getDFSClient() {
+    return dfsClient;
+  }
{code}
2. Can we reduce the visibility of these methods? I don't think all of them 
need to be public; protected or private should be enough.
{code:java}
+  public void removeFromLocalDeadNodes(DatanodeInfo dnInfo) {
+    deadNodes.remove(dnInfo);
+  }
+
+  public ConcurrentHashMap getLocalDeadNodes() {
+    return deadNodes;
+  }
+
+  public void clearLocalDeadNodes() {
+    deadNodes.clear();
+  }
+
+  public DFSClient getDfsClient() {
+    return dfsClient;
+  }
{code}

*DeadNodeDetector.java*
 1. Can we add comments for the sleep time constants and the name field? (A 
commented sketch follows after item 3 below.)
{code:java}
+  private static final long ERROR_SLEEP_MS = 5000;
+  private static final long IDLE_SLEEP_MS = 1;
+
+  private String name;
{code}
2. Update the log to use parameterized logging, and 'start' should be 'Start'.
{code:java}
LOG.info("start dead node detector for DFSClient " + this.name);
{code}
to
{code:java}
LOG.info("Start dead node detector for DFSClient {}.", name);
{code}
3. Update the state log:
{code:java}
LOG.debug("state " + state);
{code}
to
{code:java}
LOG.debug("Current detector state {}.", state);
{code}
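As one possible way to address item 1, a sketch only: the comment wording is 
assumed, the values are as quoted from the patch, and the meaning of name is 
inferred from the "dead node detector for DFSClient" log message above:
{code:java}
/** Sleep time before the next run after the detector thread hits an error. */
private static final long ERROR_SLEEP_MS = 5000;

/** Sleep time between runs when the detector has nothing to probe. */
private static final long IDLE_SLEEP_MS = 1;

/** The name of this detector, i.e. the name of the owning DFSClient. */
private String name;
{code}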

*StripeReader.java*
 This class doesn't need any change, since we don't make a name change for the 
method getDFSClient.


[jira] [Commented] (HDFS-14967) TestWebHDFS - Many test cases are failing in Windows

2019-11-09 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16971013#comment-16971013
 ] 

Ayush Saxena commented on HDFS-14967:
-

Can you handle the checkstyle warnings?

> TestWebHDFS - Many test cases are failing in Windows 
> -
>
> Key: HDFS-14967
> URL: https://issues.apache.org/jira/browse/HDFS-14967
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
> Attachments: HDFS-14967.001.patch
>
>
> In the TestWebHDFS test class, a few test cases are not closing the 
> MiniDFSCluster, which causes the remaining tests to fail on Windows. While 
> the cluster is still open, all subsequent test cases fail to acquire the 
> lock on the data dir, which makes them fail.
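A minimal sketch of the usual fix pattern, assuming JUnit 4 as used in 
TestWebHDFS; the class name and test body here are placeholders:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.Test;

public class TestWebHdfsClusterLifecycle {
  @Test
  public void testWithCluster() throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
      // ... exercise WebHDFS against cluster.getFileSystem() ...
    } finally {
      // Always shut down, so the next test can lock the data dir on Windows.
      cluster.shutdown();
    }
  }
}
{code}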



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2423) Add the recover-trash command client side code

2019-11-09 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-2423:
--

Assignee: YiSheng Lien

> Add the recover-trash command client side code
> --
>
> Key: HDDS-2423
> URL: https://issues.apache.org/jira/browse/HDDS-2423
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: YiSheng Lien
>Priority: Major
>
> Add protobuf, RpcClient and ClientSideTranslator code for the recover-trash 
> command.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2431) Add recover-trash command to the ozone shell.

2019-11-09 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-2431:
--

Assignee: YiSheng Lien

> Add recover-trash command to the ozone shell.
> -
>
> Key: HDDS-2431
> URL: https://issues.apache.org/jira/browse/HDDS-2431
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone CLI
>Reporter: Anu Engineer
>Assignee: YiSheng Lien
>Priority: Major
>
> Add recover-trash command to the Ozone CLI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2425) Support the ability to recover-trash to a new bucket.

2019-11-09 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-2425:
--

Assignee: YiSheng Lien

> Support the ability to recover-trash to a new bucket.
> -
>
> Key: HDDS-2425
> URL: https://issues.apache.org/jira/browse/HDDS-2425
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: YiSheng Lien
>Priority: Major
>
> recover-trash can be run to recover to an existing bucket or to a new bucket. 
> If the bucket does not exist, the recover-trash command should create that 
> bucket automatically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2424) Add the recover-trash command server side handling.

2019-11-09 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-2424:
--

Assignee: YiSheng Lien

> Add the recover-trash command server side handling.
> ---
>
> Key: HDDS-2424
> URL: https://issues.apache.org/jira/browse/HDDS-2424
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: YiSheng Lien
>Priority: Major
>
> Add the standard server side code for command handling.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2428) Rename a recovered file as .recovered if the file already exists in the target bucket.

2019-11-09 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-2428:
--

Assignee: YiSheng Lien

> Rename a recovered file as .recovered if the file already exists in the 
> target bucket.
> --
>
> Key: HDDS-2428
> URL: https://issues.apache.org/jira/browse/HDDS-2428
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: YiSheng Lien
>Priority: Major
>
> During recovery, if the file name already exists in the bucket, then the new 
> key that is being recovered should be automatically renamed. The proposal is 
> to rename it as key.recovered.
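A minimal sketch of the proposed rule; the helper name and the existence check 
are assumptions for illustration, not the eventual Ozone Manager code:
{code:java}
import java.util.Set;

final class RecoveredKeyNames {
  /**
   * Sketch only: if the key name is already taken in the target bucket,
   * append the proposed ".recovered" suffix; otherwise keep the name.
   */
  static String resolve(String keyName, Set<String> existingKeyNames) {
    return existingKeyNames.contains(keyName)
        ? keyName + ".recovered" : keyName;
  }
}
{code}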



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2426) Support recover-trash to an existing bucket.

2019-11-09 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-2426:
--

Assignee: YiSheng Lien

>  Support recover-trash to an existing bucket.
> -
>
> Key: HDDS-2426
> URL: https://issues.apache.org/jira/browse/HDDS-2426
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: YiSheng Lien
>Priority: Major
>
> Support recovering trash to an existing bucket. We should also add a config 
> key that prevents this mode, so admins can always force the recovery to a 
> new bucket.
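A minimal sketch of such a config gate; the config key name and the helper are 
assumptions for illustration, not a settled interface:
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

final class RecoverTrashBucketCheck {
  /** Hypothetical key name, for illustration only. */
  static final String ALLOW_EXISTING_BUCKET_KEY =
      "ozone.om.recover.trash.allow.existing.bucket";

  /** Sketch only: reject recovery into an existing bucket when disabled. */
  static void checkTargetBucket(Configuration conf, boolean bucketExists)
      throws IOException {
    if (bucketExists && !conf.getBoolean(ALLOW_EXISTING_BUCKET_KEY, true)) {
      throw new IOException(
          "recover-trash to an existing bucket is disabled by configuration");
    }
  }
}
{code}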



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2430) Recover-trash should warn and skip if at-rest encryption is enabled and keys are missing.

2019-11-09 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-2430:
--

Assignee: YiSheng Lien

> Recover-trash should warn and skip if at-rest encryption is enabled and keys 
> are missing.
> -
>
> Key: HDDS-2430
> URL: https://issues.apache.org/jira/browse/HDDS-2430
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Anu Engineer
>Assignee: YiSheng Lien
>Priority: Major
>
> If TDE is enabled, recovering a key is useful only if the actual keys that 
> are used for encryption are still recoverable. We should warn and fail the 
> recovery if the actual keys are missing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2429) Recover-trash should warn and skip if the key is a GDPR-ed key, since recovery is pointless once the encryption keys are lost.

2019-11-09 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-2429:
--

Assignee: YiSheng Lien

> Recover-trash should warn and skip if the key is a GDPR-ed key, since 
> recovery is pointless once the encryption keys are lost.
> ---
>
> Key: HDDS-2429
> URL: https://issues.apache.org/jira/browse/HDDS-2429
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Anu Engineer
>Assignee: YiSheng Lien
>Priority: Major
>
> If a bucket has the GDPR-enabled flag set, then the keys used to recover the 
> data from the blocks are irrecoverably lost. In that case, a recover from 
> trash is pointless. The recover-trash command should detect this case and let 
> the users know about it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2432) Add documentation for the recover-trash

2019-11-09 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien reassigned HDDS-2432:
--

Assignee: YiSheng Lien

> Add documentation for the recover-trash
> ---
>
> Key: HDDS-2432
> URL: https://issues.apache.org/jira/browse/HDDS-2432
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Anu Engineer
>Assignee: YiSheng Lien
>Priority: Major
>
> Add documentation for the recover-trash command in Ozone Documentation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org