[jira] [Updated] (HDDS-4253) SCM changes to process Layout Info in register request/response

2020-09-29 Thread Prashant Pogde (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prashant Pogde updated HDDS-4253:
-
Summary: SCM changes to process Layout Info in register request/response  
(was: SCM changes to process Layout Info in heartbeat request/response)

> SCM changes to process Layout Info in register request/response
> ---
>
> Key: HDDS-4253
> URL: https://issues.apache.org/jira/browse/HDDS-4253
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Prashant Pogde
>Assignee: Prashant Pogde
>Priority: Major
>







[jira] [Created] (HDDS-4296) SCM changes to process Layout Info in heartbeat request/response

2020-09-29 Thread Prashant Pogde (Jira)
Prashant Pogde created HDDS-4296:


 Summary: SCM changes to process Layout Info in heartbeat 
request/response
 Key: HDDS-4296
 URL: https://issues.apache.org/jira/browse/HDDS-4296
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Prashant Pogde
Assignee: Prashant Pogde









[GitHub] [hadoop-ozone] amaliujia commented on pull request #1399: HDDS-3684. Add tests for replication annotation

2020-09-29 Thread GitBox


amaliujia commented on pull request #1399:
URL: https://github.com/apache/hadoop-ozone/pull/1399#issuecomment-701150039


   @timmylicheng  
   
   conflicts resolved.






[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1454: HDDS-4285. Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-09-29 Thread GitBox


ChenSammi commented on pull request #1454:
URL: https://github.com/apache/hadoop-ozone/pull/1454#issuecomment-701144316


   @adoroszlai thanks for identifying this performance issue. I'm not quite
familiar with tokens. Since it's a user token, I suppose it has a much longer
lifetime than an input or output stream object instance, right?
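
   A minimal sketch of the caching idea being discussed (class and method names here are hypothetical; `UserGroupInformation.getCurrentUser().getTokens()` is the call the issue describes as too frequent):

   ```java
   import java.io.IOException;
   import java.util.Collection;

   import org.apache.hadoop.security.UserGroupInformation;
   import org.apache.hadoop.security.token.Token;
   import org.apache.hadoop.security.token.TokenIdentifier;

   /** Sketch: look up the caller's tokens once per stream, not per call. */
   class CachedTokensSketch {
     private final Collection<Token<? extends TokenIdentifier>> tokens;

     CachedTokensSketch() throws IOException {
       // One UGI lookup when the stream is created; every subsequent
       // read/write reuses the cached collection instead of re-fetching.
       this.tokens = UserGroupInformation.getCurrentUser().getTokens();
     }

     Collection<Token<? extends TokenIdentifier>> cachedTokens() {
       return tokens;
     }
   }
   ```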






[jira] [Resolved] (HDDS-4106) Volume space: Supports clearing spaceQuota

2020-09-29 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao resolved HDDS-4106.
-
Fix Version/s: 1.1.0
   Resolution: Fixed

This has been fixed by HDDS-3751

> Volume space: Supports clearing spaceQuota
> --
>
> Key: HDDS-4106
> URL: https://issues.apache.org/jira/browse/HDDS-4106
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: mingchao zhao
>Assignee: mingchao zhao
>Priority: Major
> Fix For: 1.1.0
>
>
> Volume space quota supports deleting spaceQuota.






[jira] [Resolved] (HDDS-4105) Bucket space: Supports clearing spaceQuota

2020-09-29 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao resolved HDDS-4105.
-
Resolution: Fixed

This has been fixed by HDDS-3751

> Bucket space: Supports clearing spaceQuota
> --
>
> Key: HDDS-4105
> URL: https://issues.apache.org/jira/browse/HDDS-4105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: mingchao zhao
>Assignee: mingchao zhao
>Priority: Major
> Fix For: 1.1.0
>
>
> Bucket space quota supports deleting spaceQuota.






[jira] [Updated] (HDDS-4105) Bucket space: Supports clearing spaceQuota

2020-09-29 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao updated HDDS-4105:

Fix Version/s: 1.1.0

> Bucket space: Supports clearing spaceQuota
> --
>
> Key: HDDS-4105
> URL: https://issues.apache.org/jira/browse/HDDS-4105
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: mingchao zhao
>Assignee: mingchao zhao
>Priority: Major
> Fix For: 1.1.0
>
>
> Bucket space quota supports deleting spaceQuota.






[jira] [Resolved] (HDDS-3751) Ozone sh bucket client support quota option.

2020-09-29 Thread mingchao zhao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mingchao zhao resolved HDDS-3751.
-
Fix Version/s: 1.1.0
   Resolution: Fixed

PR has been merged.

> Ozone sh bucket client support quota option.
> 
>
> Key: HDDS-3751
> URL: https://issues.apache.org/jira/browse/HDDS-3751
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Simon Su
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>







[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-29 Thread GitBox


ChenSammi commented on pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#issuecomment-701139655


   Thanks @captainzmc for the contribution and @cxorm @adoroszlai @maobaolong 
for the review. 






[GitHub] [hadoop-ozone] ChenSammi merged pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-29 Thread GitBox


ChenSammi merged pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412


   






[GitHub] [hadoop-ozone] amaliujia edited a comment on pull request #1444: HDDS-4242. Copy PrefixInfo proto to new project hadoop-ozone/interface-storage

2020-09-29 Thread GitBox


amaliujia edited a comment on pull request #1444:
URL: https://github.com/apache/hadoop-ozone/pull/1444#issuecomment-700874303


   @elek your suggestion makes sense, as the new util classes are dedicated to
`interface-storage`. If one is only used by a single `codec` class, it can even
be moved into that `codec` class.

   How about I try to remove the util classes in the next PR?

   This PR is meant to establish a working migration for a single proto, to
build consensus around the naming convention and how to deal with helper/util
classes. As this PR looks good overall, I will move ~3 protos in each future PR
to accelerate the migration while keeping each PR easy to review.

   I can address the util class comment in the next PR with more data points
there (e.g. 3 more proto migrations)






[jira] [Created] (HDDS-4295) SCM ServiceManager

2020-09-29 Thread Li Cheng (Jira)
Li Cheng created HDDS-4295:
--

 Summary: SCM ServiceManager 
 Key: HDDS-4295
 URL: https://issues.apache.org/jira/browse/HDDS-4295
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
  Components: SCM
Reporter: Li Cheng


SCM ServiceManager is going to control all the SCM background services so that
they serve only on the leader.

ServiceManager will also bootstrap all the background services and protocol
servers.

It also needs to run validation steps when the SCM comes up as the leader.
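
A minimal, self-contained sketch of this leadership gating (all names are hypothetical, not SCM's actual API):

{code:java}
/** Sketch (hypothetical names): run background services only on the leader. */
final class ScmServiceManagerSketch {
  interface BackgroundService {
    void start();
    void stop();
  }

  private final java.util.List<BackgroundService> services =
      new java.util.ArrayList<>();

  void register(BackgroundService service) {
    services.add(service);
  }

  /** Invoked on every leadership change reported by the consensus layer. */
  void onLeadershipChange(boolean isLeader) {
    if (isLeader) {
      // Validation steps before serving as the leader would run here.
      services.forEach(BackgroundService::start);
    } else {
      // Followers keep every background service stopped.
      services.forEach(BackgroundService::stop);
    }
  }
}
{code}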






[jira] [Resolved] (HDDS-3206) Make sure AllocateBlock can only be executed on leader SCM

2020-09-29 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng resolved HDDS-3206.

Resolution: Duplicate

> Make sure AllocateBlock can only be executed on leader SCM
> --
>
> Key: HDDS-3206
> URL: https://issues.apache.org/jira/browse/HDDS-3206
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Li Cheng
>Priority: Major
>
> Check if the current SCM is the leader. If not, return NonLeaderException.
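
A minimal, self-contained sketch of such a guard (all names are hypothetical; the exception follows the description above):

{code:java}
/** Sketch (hypothetical names): reject AllocateBlock on non-leader SCMs. */
final class LeaderGuardSketch {
  static final class NonLeaderException extends Exception {
    NonLeaderException(String message) {
      super(message);
    }
  }

  interface LeaderStatus {
    boolean isLeader();
  }

  private final LeaderStatus status;

  LeaderGuardSketch(LeaderStatus status) {
    this.status = status;
  }

  String allocateBlock(long size) throws NonLeaderException {
    if (!status.isLeader()) {
      // Followers refuse the call; the client retries against the leader.
      throw new NonLeaderException("This SCM is not the leader");
    }
    return "allocated-" + size; // placeholder for the real allocation
  }
}
{code}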






[jira] [Resolved] (HDDS-3199) Handle PipelineAction and OpenPipline from DN to SCM

2020-09-29 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng resolved HDDS-3199.

Resolution: Duplicate

> Handle PipelineAction and OpenPipline from DN to SCM
> 
>
> Key: HDDS-3199
> URL: https://issues.apache.org/jira/browse/HDDS-3199
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Li Cheng
>Priority: Major
>
> PipelineAction and OpenPipeline should only be sent to the leader SCM, and the
> leader SCM will take action to close or open pipelines. Pipeline state changes
> will be propagated to followers via Ratis. If an action is sent to a follower,
> the follower SCM will reject it with NonLeaderException and the DN will retry.






[jira] [Created] (HDDS-4294) Backport updates from ContainerManager(V1)

2020-09-29 Thread Li Cheng (Jira)
Li Cheng created HDDS-4294:
--

 Summary: Backport updates from ContainerManager(V1)
 Key: HDDS-4294
 URL: https://issues.apache.org/jira/browse/HDDS-4294
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Li Cheng









[jira] [Created] (HDDS-4293) Backport updates from PipelineManager(V1)

2020-09-29 Thread Li Cheng (Jira)
Li Cheng created HDDS-4293:
--

 Summary: Backport updates from PipelineManager(V1)
 Key: HDDS-4293
 URL: https://issues.apache.org/jira/browse/HDDS-4293
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Li Cheng









[jira] [Updated] (HDDS-3211) Design for SCM HA configuration

2020-09-29 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng updated HDDS-3211:
---
Summary: Design for SCM HA configuration  (was: Make SCM HA configurable)

> Design for SCM HA configuration
> ---
>
> Key: HDDS-3211
> URL: https://issues.apache.org/jira/browse/HDDS-3211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Li Cheng
>Priority: Major
>
> Need a switch in all paths to turn SCM HA on/off.






[jira] [Resolved] (HDDS-3200) Handle NodeReport from DN to SCMs

2020-09-29 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng resolved HDDS-3200.

Resolution: Duplicate

> Handle NodeReport from DN to SCMs
> -
>
> Key: HDDS-3200
> URL: https://issues.apache.org/jira/browse/HDDS-3200
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Li Cheng
>Priority: Major
>
> NodeReport is sent to all SCMs. Only the leader SCM can take action to change
> node status.






[jira] [Resolved] (HDDS-3193) Handle ContainerReport and IncrementalContainerReport

2020-09-29 Thread Li Cheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Cheng resolved HDDS-3193.

Resolution: Duplicate

> Handle ContainerReport and IncrementalContainerReport
> -
>
> Key: HDDS-3193
> URL: https://issues.apache.org/jira/browse/HDDS-3193
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Li Cheng
>Priority: Major
>
> Let the DataNode send containerReport and IncrementalContainerReport to all
> SCMs. The SCM should be aware of the BCSID in the reports so it knows the
> version of each report. SCM will NOT applyTransaction for container reports,
> but only record the sequenceId (the BCSID) in the reports.
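
A minimal, self-contained sketch of that record-only handling (all names are hypothetical):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Sketch (hypothetical names): remember the highest BCSID seen per
 *  container instead of running applyTransaction for each report. */
final class ContainerReportTrackerSketch {
  private final Map<Long, Long> highestBcsid = new ConcurrentHashMap<>();

  void onContainerReport(long containerId, long bcsid) {
    // Record-only: keep the latest sequence id, no state-machine apply.
    highestBcsid.merge(containerId, bcsid, Math::max);
  }

  long latestBcsid(long containerId) {
    return highestBcsid.getOrDefault(containerId, 0L);
  }
}
{code}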






[jira] [Commented] (HDDS-3211) Make SCM HA configurable

2020-09-29 Thread Li Cheng (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17204409#comment-17204409
 ] 

Li Cheng commented on HDDS-3211:


[~nicholasjiang] Hey Nicholas, this issue would require an overall design for
SCM HA configuration, considering multiple SCMs as well as allowing federation.
Also, this HA config may apply to the entire Ozone, which means we would need to
update what OM HA does now.

> Make SCM HA configurable
> 
>
> Key: HDDS-3211
> URL: https://issues.apache.org/jira/browse/HDDS-3211
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Reporter: Li Cheng
>Priority: Major
>
> Need a switch in all paths to turn SCM HA on/off.






[GitHub] [hadoop-ozone] fapifta opened a new pull request #1456: Upgrade

2020-09-29 Thread GitBox


fapifta opened a new pull request #1456:
URL: https://github.com/apache/hadoop-ozone/pull/1456


   ## What changes were proposed in this pull request?
   
   This pull request implements the server side of the finalization logic.
   The initial idea for the client side is to issue an initiating request, then
monitor the process with client-side polling.
   This is reflected in the server-side code, but due to complications on the
server side, the background finalization of layout features one by one is
postponed and tracked in HDDS-4286.
   The problem with the background finalization is that it has to be synced
between the OMs: once requests are in the state machine for finalizing a layout
feature, we need to ensure that the requests the leader processed before the
finalization requests, and those processed after them, stay in sync and in
order.
   This means that we need to post separate requests into the state machine
internally on the leader OM, so that once a specific request type (or a change
to a request's handling) is activated, it is activated on all OMs at the same
transaction. We do not really have such logic for now, and it requires some
further review. I will post a design doc about possible solutions to HDDS-4286
in the near future.
   
   With that, in this patch the finalization of the features happens inside the
handling of the initiating UpgradeFinalizeRequest RPC call, and it has to
finish on at least two OMs, so we process the finalization inside the state
machine in one batch.
   After this, the client gets a STARTING_FINALIZATION status back, and after
the 500 ms delay on the client side, the client will grab the results.
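   
   A minimal sketch of that client-side flow (the class, the method names, and the non-initial statuses are assumptions; only STARTING_FINALIZATION and the 500 ms delay come from the description above):
   
   ```java
   /** Sketch (hypothetical names): initiate finalization, then poll. */
   final class FinalizeClientSketch {
     enum Status { STARTING_FINALIZATION, IN_PROGRESS, DONE }
   
     interface UpgradeApi {
       Status requestFinalization(); // the initiating RPC
       Status queryStatus();         // the client-side polling RPC
     }
   
     static Status finalizeUpgrade(UpgradeApi api) throws InterruptedException {
       Status status = api.requestFinalization(); // STARTING_FINALIZATION
       while (status != Status.DONE) {
         Thread.sleep(500); // the 500 ms client-side delay mentioned above
         status = api.queryStatus();
       }
       return status;
     }
   }
   ```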
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-4172
   
   ## How was this patch tested?
   
   A quick manual test of the very basic workflow so far.
   JUnit tests are still to be added to the PR, but I wanted to share it to get
reviews on the approach while I am working on the tests.
   






[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1454: HDDS-4285. Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-09-29 Thread GitBox


linyiqun commented on a change in pull request #1454:
URL: https://github.com/apache/hadoop-ozone/pull/1454#discussion_r497199494



##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
##
@@ -84,7 +86,8 @@ private ContainerProtocolCalls() {
* @throws IOException if there is an I/O error while performing the call
*/
   public static GetBlockResponseProto getBlock(XceiverClientSpi xceiverClient,
-  DatanodeBlockID datanodeBlockID) throws IOException {
+  DatanodeBlockID datanodeBlockID,
+  Collection<Token<? extends TokenIdentifier>> tokens) throws IOException {

Review comment:
   @adoroszlai , above description looks good to me. Thanks.








[jira] [Resolved] (HDDS-4287) Exclude protobuff classes from ozone-filesystem-hadoop3 jars

2020-09-29 Thread Uma Maheswara Rao G (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uma Maheswara Rao G resolved HDDS-4287.
---
Fix Version/s: 1.1.0
   Resolution: Fixed

Thanks [~bharat] for the review!

> Exclude protobuff classes from ozone-filesystem-hadoop3 jars
> 
>
> Key: HDDS-4287
> URL: https://issues.apache.org/jira/browse/HDDS-4287
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Currently the ozone-filesystem-hadoop3 jar includes protobuf classes. We
> already treat the dependency on the Hadoop jars as a prerequisite, and Hadoop
> brings the protobuf classes along with its jars. So bundling the protobuf
> classes again in the ozone-filesystem-hadoop3 jar is just duplication, and we
> can exclude them.






[GitHub] [hadoop-ozone] umamaheswararao merged pull request #1455: HDDS-4287: Exclude protobuff classes from ozone-filesystem-hadoop3 jars

2020-09-29 Thread GitBox


umamaheswararao merged pull request #1455:
URL: https://github.com/apache/hadoop-ozone/pull/1455


   






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #1455: HDDS-4287: Exclude protobuff classes from ozone-filesystem-hadoop3 jars

2020-09-29 Thread GitBox


bharatviswa504 commented on pull request #1455:
URL: https://github.com/apache/hadoop-ozone/pull/1455#issuecomment-701064102


   +1 LGTM






[jira] [Created] (HDDS-4292) OMFailoverProxyProvider.createOMProxyIfNeeded should return a new proxy instance for Hadoop < 3.2

2020-09-29 Thread Vivek Ratnavel Subramanian (Jira)
Vivek Ratnavel Subramanian created HDDS-4292:


 Summary: OMFailoverProxyProvider.createOMProxyIfNeeded should 
return a new proxy instance for Hadoop < 3.2
 Key: HDDS-4292
 URL: https://issues.apache.org/jira/browse/HDDS-4292
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Affects Versions: 1.0.0
Reporter: Vivek Ratnavel Subramanian
Assignee: Bharat Viswanadham


HDDS-3560 creates a new ProxyInfo object in case of an IllegalAccessError.
But it does not return the new instance, which causes an NPE on Hadoop versions < 3.2:


{code:java}
20/09/29 23:10:22 ERROR client.OzoneClientFactory: Couldn't create RpcClient protocol exception:
java.lang.NullPointerException
    at org.apache.hadoop.io.retry.RetryInvocationHandler.isRpcInvocation(RetryInvocationHandler.java:435)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:354)
    at com.sun.proxy.$Proxy10.submitRequest(Unknown Source)
    at org.apache.hadoop.ozone.om.protocolPB.Hadoop3OmTransport.submitRequest(Hadoop3OmTransport.java:89)
    at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.submitRequest(OzoneManagerProtocolClientSideTranslatorPB.java:213)
    at org.apache.hadoop.ozone.om.protocolPB.OzoneManagerProtocolClientSideTranslatorPB.getServiceInfo(OzoneManagerProtocolClientSideTranslatorPB.java:1030)
    at org.apache.hadoop.ozone.client.rpc.RpcClient.<init>(RpcClient.java:175)
    at org.apache.hadoop.ozone.client.OzoneClientFactory.getClientProtocol(OzoneClientFactory.java:242)
    at org.apache.hadoop.ozone.client.OzoneClientFactory.getRpcClient(OzoneClientFactory.java:113)
    at org.apache.hadoop.fs.ozone.BasicOzoneClientAdapterImpl.<init>(BasicOzoneClientAdapterImpl.java:149)
    at org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl.<init>(OzoneClientAdapterImpl.java:51)
    at org.apache.hadoop.fs.ozone.OzoneFileSystem.createAdapter(OzoneFileSystem.java:94)
    at org.apache.hadoop.fs.ozone.BasicOzoneFileSystem.initialize(BasicOzoneFileSystem.java:161)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3288)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3305)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
    at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:352)
    at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:250)
    at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:233)
    at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:103)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:177)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:326)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:389)
ls: Couldn't create RpcClient protocol
{code}
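
A minimal, self-contained sketch of the missing-return pattern (all names are hypothetical; the actual fix belongs in OMFailoverProxyProvider.createOMProxyIfNeeded):

{code:java}
/** Sketch (hypothetical names): the fallback branch must return the freshly
 *  built ProxyInfo; otherwise the caller keeps a null proxy and later hits
 *  the NullPointerException shown above. */
final class ProxyProviderSketch {
  static final class ProxyInfo<T> {
    final T proxy;

    ProxyInfo(T proxy) {
      this.proxy = proxy;
    }
  }

  static <T> ProxyInfo<T> createProxyIfNeeded(ProxyInfo<T> cached,
      T freshProxy) {
    if (cached != null && cached.proxy != null) {
      return cached;
    }
    // Hadoop < 3.2 path: build a replacement proxy holder ...
    ProxyInfo<T> fresh = new ProxyInfo<>(freshProxy);
    return fresh; // ... and actually return it; dropping this return
                  // reproduces the NPE described in this issue.
  }
}
{code}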






[jira] [Updated] (HDDS-4287) Exclude protobuff classes from ozone-filesystem-hadoop3 jars

2020-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4287:
-
Labels: pull-request-available  (was: )

> Exclude protobuff classes from ozone-filesystem-hadoop3 jars
> 
>
> Key: HDDS-4287
> URL: https://issues.apache.org/jira/browse/HDDS-4287
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Affects Versions: 1.0.0
>Reporter: Uma Maheswara Rao G
>Assignee: Uma Maheswara Rao G
>Priority: Major
>  Labels: pull-request-available
>
> Currently the ozone-filesystem-hadoop3 jar includes protobuf classes. We
> already treat the dependency on the Hadoop jars as a prerequisite, and Hadoop
> brings the protobuf classes along with its jars. So bundling the protobuf
> classes again in the ozone-filesystem-hadoop3 jar is just duplication, and we
> can exclude them.






[GitHub] [hadoop-ozone] umamaheswararao opened a new pull request #1455: HDDS-4287: Exclude protobuff classes from ozone-filesystem-hadoop3 jars

2020-09-29 Thread GitBox


umamaheswararao opened a new pull request #1455:
URL: https://github.com/apache/hadoop-ozone/pull/1455


   ## What changes were proposed in this pull request?
   
   Excluded the protobuf classes from hadoop-ozone-filesystem-hadoop3/2. We
already keep the dependency on Hadoop anyway, and Hadoop provides the protobuf
jars, so we just exclude the protobuf classes.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4287






[GitHub] [hadoop-ozone] bharatviswa504 closed pull request #1331: HDDS-4117. Normalize Keypath for listKeys.

2020-09-29 Thread GitBox


bharatviswa504 closed pull request #1331:
URL: https://github.com/apache/hadoop-ozone/pull/1331


   






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #1331: HDDS-4117. Normalize Keypath for listKeys.

2020-09-29 Thread GitBox


bharatviswa504 commented on pull request #1331:
URL: https://github.com/apache/hadoop-ozone/pull/1331#issuecomment-700952276


   Closing this. Opened #1451 to track this issue.






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #1451: HDDS-4117. Normalize Keypath for listKeys.

2020-09-29 Thread GitBox


bharatviswa504 commented on pull request #1451:
URL: https://github.com/apache/hadoop-ozone/pull/1451#issuecomment-700951976


   Rebased and fixed test.






[GitHub] [hadoop-ozone] amaliujia commented on pull request #1453: HDDS-4290. Enable insight point for SCM heartbeat protocol

2020-09-29 Thread GitBox


amaliujia commented on pull request #1453:
URL: https://github.com/apache/hadoop-ozone/pull/1453#issuecomment-700941359


   +1!






[GitHub] [hadoop-ozone] vivekratnavel merged pull request #1452: HDDS-4288. the icon of hadoop-ozone is bigger than ever

2020-09-29 Thread GitBox


vivekratnavel merged pull request #1452:
URL: https://github.com/apache/hadoop-ozone/pull/1452


   






[GitHub] [hadoop-ozone] vivekratnavel commented on pull request #1452: HDDS-4288. the icon of hadoop-ozone is bigger than ever

2020-09-29 Thread GitBox


vivekratnavel commented on pull request #1452:
URL: https://github.com/apache/hadoop-ozone/pull/1452#issuecomment-700910350


   @elek Thanks for working on this! I noticed this weird big logo on the docs 
recently but didn't have the bandwidth to fix it. 






[GitHub] [hadoop-ozone] amaliujia commented on pull request #1444: HDDS-4242. Copy PrefixInfo proto to new project hadoop-ozone/interface-storage

2020-09-29 Thread GitBox


amaliujia commented on pull request #1444:
URL: https://github.com/apache/hadoop-ozone/pull/1444#issuecomment-700874303


   @elek your suggestion makes sense, as the new util classes are dedicated to
`interface-storage`. If one is only used by a single `codec` class, it can even
be moved into that `codec` class.

   How about I try to remove the util classes in the next PR?

   This PR is meant to establish a working migration for a single proto, to
build consensus around the naming convention and how to deal with helper/util
classes. As this PR looks good overall, I will move ~3 protos in each future PR
to accelerate the migration while keeping each PR easy to maintain.

   I can address the util class comment in the next PR with more data points
there (e.g. 3 more proto migrations)






[GitHub] [hadoop-ozone] rakeshadr commented on pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-29 Thread GitBox


rakeshadr commented on pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#issuecomment-700860595


   Thanks again @bharatviswa504 for the comments. I uploaded a patch addressing
the comments. Please let me know your feedback!






[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #1454: HDDS-4285. Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-09-29 Thread GitBox


adoroszlai commented on a change in pull request #1454:
URL: https://github.com/apache/hadoop-ozone/pull/1454#discussion_r496889816



##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
##
@@ -84,7 +86,8 @@ private ContainerProtocolCalls() {
* @throws IOException if there is an I/O error while performing the call
*/
   public static GetBlockResponseProto getBlock(XceiverClientSpi xceiverClient,
-  DatanodeBlockID datanodeBlockID) throws IOException {
+  DatanodeBlockID datanodeBlockID,
+  Collection<Token<? extends TokenIdentifier>> tokens) throws IOException {

Review comment:
   Thanks @linyiqun for the suggestion.  Would
   
   ```java
   @param tokens list of tokens the current user has, possibly including a 
token for this block
   ```
   
   be OK, or do you have a better description for this param?








[jira] [Updated] (HDDS-4227) Implement a "prepareForUpgrade" step that applies all committed transactions onto the OM state machine.

2020-09-29 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4227?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-4227:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Merged PR.

> Implement a "prepareForUpgrade" step that applies all committed transactions 
> onto the OM state machine.
> ---
>
> Key: HDDS-4227
> URL: https://issues.apache.org/jira/browse/HDDS-4227
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: Ozone Manager
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> *Why is this needed?*
> Through HDDS-4143, we have a generic factory to handle multiple versions of
> applyTransaction implementations based on layout version. Hence, this
> factory can be used to handle versioned requests across layout versions
> whenever both versions need to exist in the code (say, for
> HDDS-2939). 
> However, it has been noticed that the OM Ratis requests are still undergoing a
> lot of minor changes (HDDS-4007, HDDS-3903), and in these cases it
> will become hard to maintain 2 versions of the code just to support clean
> upgrades. 
> Hence, the plan is to build a pre-upgrade utility (client API) that makes
> sure that an OM instance has no "un-applied" transactions in its Raft log.
> Invoking this client API makes sure that the upgrade starts with a clean
> state. Of course, this is needed only in an HA setup. In a non-HA setup,
> it can either be skipped, or when invoked will be a no-op (non-Ratis) or
> cause no harm (single-node Ratis).
> *How does it work?*
> Before updating the software bits, our goal is to get the OMs to the latest
> state with respect to applied transactions. The reason we want this is to
> make sure that the same version of the code executes the applyTransaction
> step in all 3 OMs. At a high level, the flow is as follows (a minimal code
> sketch of the wait-and-purge step follows this list):
> * Before the upgrade, *stop* the OMs.
> * Start the OMs with a special flag --prepareUpgrade (this is something like
> --init: a special state in which the ephemeral OM instance stops itself after
> doing some work).
> * When an OM is started with the --prepareUpgrade flag, it does not start the
> RPC server, so no new requests can get in.
> * In this state, we give every OM time to apply transactions up to the last txn.
> * We know that at least 2 OMs would have gotten the last client request
> transaction committed into their logs. Hence, those 2 OMs are expected to
> apply transactions up to that index faster.
> * At every OM, the Raft log will be purged after this wait period (so that
> replay does not happen), and a Ratis snapshot taken at the last txn.
> * Even if there is a lagging OM which is unable to get to the last applied txn
> index, its logs will be purged after the wait time expires.
> * Now, when the OMs are started with the newer version, all the OMs will use
> the new code.
> * The lagging OM will get the new Ratis snapshot since there are no logs to
> replay from.
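
A minimal, self-contained sketch of the wait-and-purge step described above (all names are hypothetical; the actual implementation lives in the PR for this issue):

{code:java}
/** Sketch (hypothetical names): wait until every committed txn is applied,
 *  then snapshot and purge so that no replay happens on restart. */
final class PrepareForUpgradeSketch {
  interface RaftLogView {
    long lastCommittedIndex();
    long lastAppliedIndex();
    void takeSnapshot(long index);
    void purgeUpTo(long index);
  }

  static void prepare(RaftLogView log, long maxWaitMillis)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + maxWaitMillis;
    // Give the OM time to apply transactions up to the last committed txn.
    while (log.lastAppliedIndex() < log.lastCommittedIndex()
        && System.currentTimeMillis() < deadline) {
      Thread.sleep(100);
    }
    long last = log.lastAppliedIndex();
    log.takeSnapshot(last); // Ratis snapshot at the last applied txn
    log.purgeUpTo(last);    // purge even for a lagging OM, so no replay
  }
}
{code}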






[GitHub] [hadoop-ozone] avijayanhwx merged pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-29 Thread GitBox


avijayanhwx merged pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430


   






[GitHub] [hadoop-ozone] avijayanhwx commented on pull request #1430: HDDS-4227. Implement a 'Prepare For Upgrade' step in OM that applies all committed Ratis transactions.

2020-09-29 Thread GitBox


avijayanhwx commented on pull request #1430:
URL: https://github.com/apache/hadoop-ozone/pull/1430#issuecomment-700837411


   Thank you for the reviews @linyiqun, @fapifta & @swagle. Merging this. 






[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-29 Thread GitBox


bharatviswa504 commented on a change in pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#discussion_r496831469



##
File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmDirectoryInfo.java
##
@@ -0,0 +1,266 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.helpers;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+
+import java.util.*;
+
+/**
+ * This class represents the directory information by keeping each component
+ * in the user-given path and a pointer to its parent directory element in the
+ * path. It also stores directory-node-related metadata details.
+ */
+public class OmDirectoryInfo extends WithObjectID {
+  private long parentObjectID; // pointer to parent directory
+
+  private String name; // directory name
+
+  private long creationTime;
+  private long modificationTime;
+
+  private List<OzoneAcl> acls;
+
+  public OmDirectoryInfo(Builder builder) {
+    this.name = builder.name;
+    this.acls = builder.acls;
+    this.metadata = builder.metadata;
+    this.objectID = builder.objectID;
+    this.updateID = builder.updateID;
+    this.parentObjectID = builder.parentObjectID;
+    this.creationTime = builder.creationTime;
+    this.modificationTime = builder.modificationTime;
+  }
+
+  /**
+   * Returns a new builder class that builds an OmDirectoryInfo.
+   *
+   * @return Builder
+   */
+  public static OmDirectoryInfo.Builder newBuilder() {
+    return new OmDirectoryInfo.Builder();
+  }
+
+  /**
+   * Builder for Directory Info.
+   */
+  public static class Builder {
+    private long parentObjectID; // pointer to parent directory
+
+    private long objectID;
+    private long updateID;
+
+    private String name;
+
+    private long creationTime;
+    private long modificationTime;
+
+    private List<OzoneAcl> acls;
+    private Map<String, String> metadata;
+
+    public Builder() {
+      // Default values
+      this.acls = new LinkedList<>();
+      this.metadata = new HashMap<>();
+    }
+
+    public Builder setParentObjectID(long parentObjectId) {
+      this.parentObjectID = parentObjectId;
+      return this;
+    }
+
+    public Builder setObjectID(long objectId) {
+      this.objectID = objectId;
+      return this;
+    }
+
+    public Builder setUpdateID(long updateId) {
+      this.updateID = updateId;
+      return this;
+    }
+
+    public Builder setName(String dirName) {
+      this.name = dirName;
+      return this;
+    }
+
+    public Builder setCreationTime(long newCreationTime) {
+      this.creationTime = newCreationTime;
+      return this;
+    }
+
+    public Builder setModificationTime(long newModificationTime) {
+      this.modificationTime = newModificationTime;
+      return this;
+    }
+
+    public Builder setAcls(List<OzoneAcl> listOfAcls) {
+      if (listOfAcls != null) {
+        this.acls.addAll(listOfAcls);
+      }
+      return this;
+    }
+
+    public Builder addAcl(OzoneAcl ozoneAcl) {
+      if (ozoneAcl != null) {
+        this.acls.add(ozoneAcl);
+      }
+      return this;
+    }
+
+    public Builder addMetadata(String key, String value) {
+      metadata.put(key, value);
+      return this;
+    }
+
+    public Builder addAllMetadata(Map<String, String> additionalMetadata) {
+      if (additionalMetadata != null) {
+        metadata.putAll(additionalMetadata);
+      }
+      return this;
+    }
+
+    public OmDirectoryInfo build() {
+      return new OmDirectoryInfo(this);
+    }
+  }
+
+  @Override
+  public String toString() {
+    return getObjectID() + ":" + getName();

Review comment:
   As discussed offline, we will print here  `parentID/name:objectID`
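   
   For reference, a quick usage sketch of the builder above (all values are made up); with the suggested format, this directory would render as `1024/dir1:2048`:
   
   ```java
   OmDirectoryInfo dirInfo = OmDirectoryInfo.newBuilder()
       .setParentObjectID(1024L)   // pointer to the parent directory
       .setObjectID(2048L)
       .setUpdateID(1L)
       .setName("dir1")
       .setCreationTime(System.currentTimeMillis())
       .setModificationTime(System.currentTimeMillis())
       .build();
   ```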






[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1454: HDDS-4285. Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-09-29 Thread GitBox


linyiqun commented on a change in pull request #1454:
URL: https://github.com/apache/hadoop-ozone/pull/1454#discussion_r496819670



##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/storage/ContainerProtocolCalls.java
##
@@ -84,7 +86,8 @@ private ContainerProtocolCalls() {
* @throws IOException if there is an I/O error while performing the call
*/
   public static GetBlockResponseProto getBlock(XceiverClientSpi xceiverClient,
-  DatanodeBlockID datanodeBlockID) throws IOException {
+  DatanodeBlockID datanodeBlockID,
+  Collection<Token<? extends TokenIdentifier>> tokens) throws IOException {

Review comment:
   Can we update the javadoc comment of this method, since we newly added the
param `tokens` here?
   The same suggestion applies to the other methods we changed:
   
   * putBlockAsync
   * readChunk
   * writeChunkAsync
   








[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-29 Thread GitBox


rakeshadr commented on a change in pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#discussion_r496824181



##
File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/file/TestOMDirectoryCreateRequestV1.java
##
@@ -0,0 +1,484 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import com.google.common.base.Optional;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.AuditMessage;
+import org.apache.hadoop.ozone.om.*;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmDirectoryInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OzoneFSUtils;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.request.TestOMRequestUtils;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.CreateDirectoryRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.jetbrains.annotations.NotNull;
+import org.junit.*;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import static org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Status.VOLUME_NOT_FOUND;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.when;
+
+/**
+ * Test OM directory create request V1 layout version.
+ */
+public class TestOMDirectoryCreateRequestV1 {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private OzoneManager ozoneManager;
+  private OMMetrics omMetrics;
+  private OMMetadataManager omMetadataManager;
+  private AuditLogger auditLogger;
+  // Just setting ozoneManagerDoubleBuffer which does nothing.
+  private OzoneManagerDoubleBufferHelper ozoneManagerDoubleBufferHelper =
+      ((response, transactionIndex) -> {
+        return null;
+      });
+
+  @Before
+  public void setup() throws Exception {
+    ozoneManager = Mockito.mock(OzoneManager.class);
+    omMetrics = OMMetrics.create();
+    OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+    ozoneConfiguration.set(OMConfigKeys.OZONE_OM_DB_DIRS,
+        folder.newFolder().getAbsolutePath());
+    omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+    when(ozoneManager.getMetrics()).thenReturn(omMetrics);
+    when(ozoneManager.getMetadataManager()).thenReturn(omMetadataManager);
+    auditLogger = Mockito.mock(AuditLogger.class);
+    when(ozoneManager.getAuditLogger()).thenReturn(auditLogger);
+    Mockito.doNothing().when(auditLogger).logWrite(any(AuditMessage.class));
+    when(ozoneManager.resolveBucketLink(any(KeyArgs.class),
+        any(OMClientRequest.class)))
+        .thenReturn(new ResolvedBucket(Pair.of("", ""), Pair.of("", "")));
+  }
+
+  @After
+  public void stop() {
+    omMetrics.unRegister();
+    Mockito.framework().clearInlineMocks();
+  }
+
+  @Test
+  public void testPreExecute() throws Exception {
+    String volumeName = "vol1";
+    String bucketName = "bucket1";
+    String keyName = "a/b/c";
+
+    TestOMRequestUtils.addVolumeAndBucketToDB(volumeName, bucketName,
+        omMetadataManager);
+
+    OMRequest omRequest = createDirectoryRequest(volumeName, bucketName,
+        keyName);
+    OMDirectoryCreateRequestV1 omDirectoryCreateRequestV1 =
+

[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-29 Thread GitBox


rakeshadr commented on a change in pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#discussion_r496824421



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
##
@@ -129,6 +134,131 @@ public static OMPathInfo verifyFilesInPath(
 return new OMPathInfo(missing, OMDirectoryResult.NONE, inheritAcls);
   }
 
+  /**
+   * Verify that any dir/key exists in the given path in the specified
+   * volume/bucket by iterating through the directory table.
+   *
+   * @param omMetadataManager OM metadata manager
+   * @param volumeName        volume name
+   * @param bucketName        bucket name
+   * @param keyName           key name
+   * @param keyPath           path
+   * @return OMPathInfoV1 path info object
+   * @throws IOException on DB failure
+   */
+  public static OMPathInfoV1 verifyDirectoryKeysInPath(
+      @Nonnull OMMetadataManager omMetadataManager,
+      @Nonnull String volumeName,
+      @Nonnull String bucketName, @Nonnull String keyName,
+      @Nonnull Path keyPath) throws IOException {
+
+    String leafNodeName = OzoneFSUtils.getFileName(keyName);
+    List<String> missing = new ArrayList<>();
+    List<OzoneAcl> inheritAcls = new ArrayList<>();
+    OMDirectoryResult result = OMDirectoryResult.NONE;
+
+    Iterator<Path> elements = keyPath.iterator();
+    // TODO: volume id and bucket id generation logic.

Review comment:
   Sure








[GitHub] [hadoop-ozone] bharatviswa504 commented on a change in pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-29 Thread GitBox


bharatviswa504 commented on a change in pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#discussion_r496823617



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
##
@@ -129,6 +134,131 @@ public static OMPathInfo verifyFilesInPath(
 return new OMPathInfo(missing, OMDirectoryResult.NONE, inheritAcls);
   }
 
+  /**
+   * Verify that any dir/key exists in the given path in the specified
+   * volume/bucket by iterating through the directory table.
+   *
+   * @param omMetadataManager OM metadata manager
+   * @param volumeName        volume name
+   * @param bucketName        bucket name
+   * @param keyName           key name
+   * @param keyPath           path
+   * @return OMPathInfoV1 path info object
+   * @throws IOException on DB failure
+   */
+  public static OMPathInfoV1 verifyDirectoryKeysInPath(
+      @Nonnull OMMetadataManager omMetadataManager,
+      @Nonnull String volumeName,
+      @Nonnull String bucketName, @Nonnull String keyName,
+      @Nonnull Path keyPath) throws IOException {
+
+    String leafNodeName = OzoneFSUtils.getFileName(keyName);
+    List<String> missing = new ArrayList<>();
+    List<OzoneAcl> inheritAcls = new ArrayList<>();
+    OMDirectoryResult result = OMDirectoryResult.NONE;
+
+    Iterator<Path> elements = keyPath.iterator();
+    // TODO: volume id and bucket id generation logic.
+    String bucketKey = omMetadataManager.getBucketKey(volumeName, bucketName);
+    OmBucketInfo omBucketInfo =
+        omMetadataManager.getBucketTable().get(bucketKey);
+    inheritAcls = omBucketInfo.getAcls();
+    long lastKnownParentId = omBucketInfo.getObjectID();
+    OmDirectoryInfo parentDirInfo = null;
+    String dbDirName = ""; // absolute path for trace logs
+    // for better logging
+    StringBuilder fullKeyPath = new StringBuilder(bucketKey);
+    while (elements.hasNext()) {
+      String fileName = elements.next().toString();
+      fullKeyPath.append(OzoneConsts.OM_KEY_PREFIX);
+      fullKeyPath.append(fileName);
+      if (missing.size() > 0) {
+        // Add all the sub-dirs to the missing list except the leaf element.
+        // For example, /vol1/buck1/a/b/c/d/e/f/file1.txt.
+        // Assume /vol1/buck1/a/b/c exists, then add d, e, f into missing list.
+        if (elements.hasNext()) {
+          // skips the leaf node.
+          missing.add(fileName);
+        }
+        continue;
+      }
+
+      // For example, /vol1/buck1/a/b/c/d/e/f/file1.txt
+      // 1. Do lookup on directoryTable. If not exists goto next step.
+      // 2. Do lookup on keyTable. If not exists goto next step.
+      // 3. Add 'sub-dir' to missing parents list
+      String dbNodeName = omMetadataManager.getOzonePathKey(
+          lastKnownParentId, fileName);
+      OmDirectoryInfo omDirInfo = omMetadataManager.getDirectoryTable().
+          get(dbNodeName);
+      if (omDirInfo != null) {
+        dbDirName += omDirInfo.getName() + OzoneConsts.OZONE_URI_DELIMITER;
+        if (elements.hasNext()) {
+          result = OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+          lastKnownParentId = omDirInfo.getObjectID();
+          inheritAcls = omDirInfo.getAcls();
+          continue;
+        } else {
+          // Checked all the sub-dirs till the leaf node.
+          // Found a directory in the given path.
+          result = OMDirectoryResult.DIRECTORY_EXISTS;
+        }
+      } else {
+        // Get parentID from the lastKnownParent. For any files directly under
+        // the bucket, the parent is the bucketID. Say, "/vol1/buck1/file1"
+        // TODO: Need to add UT for this case along with OMFileCreateRequest.
+        if (omMetadataManager.getKeyTable().isExist(dbNodeName)) {
+          if (elements.hasNext()) {
+            // Found a file in the given key name.
+            result = OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+          } else {
+            // Checked all the sub-dirs till the leaf file.
+            // Found a file with the given key name.
+            result = OMDirectoryResult.FILE_EXISTS;
+          }
+          break; // Skip directory traversal as it hits key.
+        }
+
+        // Add to missing list, there is no such file/directory with given name.
+        if (elements.hasNext()) {
+          missing.add(fileName);
+        }
+
+        String dbDirKeyName = omMetadataManager.getOzoneDirKey(volumeName,
+            bucketName, dbDirName);
+        LOG.trace("Acls inherited from parent " + dbDirKeyName + " are : "
+            + inheritAcls);
+      }
+    }
+
+    if (result == OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH) {

Review comment:
   Ya special handling if removed that is fine.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-29 Thread GitBox


rakeshadr commented on a change in pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#discussion_r496821295



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequestV1.java
##
@@ -0,0 +1,323 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import com.google.common.base.Optional;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.*;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.file.OMDirectoryCreateResponseV1;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateDirectoryRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateDirectoryResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.Status;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+import static 
org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.FILE_ALREADY_EXISTS;
+import static 
org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.INVALID_KEY_NAME;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.*;
+
+/**
+ * Handle create directory request. It will add path components to the 
directory
+ * table and maintains file system semantics.
+ */
+public class OMDirectoryCreateRequestV1 extends OMDirectoryCreateRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMDirectoryCreateRequestV1.class);
+
+  public OMDirectoryCreateRequestV1(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+CreateDirectoryRequest createDirectoryRequest = getOmRequest()
+.getCreateDirectoryRequest();
+KeyArgs keyArgs = createDirectoryRequest.getKeyArgs();
+
+String volumeName = keyArgs.getVolumeName();
+String bucketName = keyArgs.getBucketName();
+String keyName = keyArgs.getKeyName();
+int numKeysCreated = 0;
+
+OMResponse.Builder omResponse = OmResponseUtil.getOMResponseBuilder(
+getOmRequest());
+
omResponse.setCreateDirectoryResponse(CreateDirectoryResponse.newBuilder());
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumCreateDirectory();
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+Map<String, String> auditMap = buildKeyArgsAuditMap(keyArgs);
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+boolean acquiredLock = false;
+IOException exception 

[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-29 Thread GitBox


rakeshadr commented on a change in pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#discussion_r496820524



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequestV1.java
##
@@ -0,0 +1,312 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import com.google.common.base.Optional;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.*;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.file.OMDirectoryCreateResponseV1;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateDirectoryRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateDirectoryResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.Status;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+import static 
org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.FILE_ALREADY_EXISTS;
+import static 
org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.INVALID_KEY_NAME;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.*;
+
+/**
+ * Handle create directory request. It will add path components to the 
directory
+ * table and maintains file system semantics.
+ */
+public class OMDirectoryCreateRequestV1 extends OMDirectoryCreateRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMDirectoryCreateRequestV1.class);
+
+  public OMDirectoryCreateRequestV1(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+CreateDirectoryRequest createDirectoryRequest = getOmRequest()
+.getCreateDirectoryRequest();
+KeyArgs keyArgs = createDirectoryRequest.getKeyArgs();
+
+String volumeName = keyArgs.getVolumeName();
+String bucketName = keyArgs.getBucketName();
+String keyName = keyArgs.getKeyName();
+
+OMResponse.Builder omResponse = OmResponseUtil.getOMResponseBuilder(
+getOmRequest());
+
omResponse.setCreateDirectoryResponse(CreateDirectoryResponse.newBuilder());
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumCreateDirectory();
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+Map<String, String> auditMap = buildKeyArgsAuditMap(keyArgs);
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+boolean acquiredLock = false;
+IOException exception = null;
+OMClientResponse 

[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-29 Thread GitBox


rakeshadr commented on a change in pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#discussion_r496819566



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
##
@@ -129,6 +134,131 @@ public static OMPathInfo verifyFilesInPath(
 return new OMPathInfo(missing, OMDirectoryResult.NONE, inheritAcls);
   }
 
+  /**
+   * Verify any dir/key exist in the given path in the specified
+   * volume/bucket by iterating through directory table.
+   *
+   * @param omMetadataManager OM Metadata manager
+   * @param volumeName volume name
+   * @param bucketName bucket name
+   * @param keyName   key name
+   * @param keyPath   path
+   * @return OMPathInfoV1 path info object
+   * @throws IOException on DB failure
+   */
+  public static OMPathInfoV1 verifyDirectoryKeysInPath(
+  @Nonnull OMMetadataManager omMetadataManager,
+  @Nonnull String volumeName,
+  @Nonnull String bucketName, @Nonnull String keyName,
+  @Nonnull Path keyPath) throws IOException {
+
+String leafNodeName = OzoneFSUtils.getFileName(keyName);
+List<String> missing = new ArrayList<>();
+List<OzoneAcl> inheritAcls = new ArrayList<>();
+OMDirectoryResult result = OMDirectoryResult.NONE;
+
+Iterator<Path> elements = keyPath.iterator();
+// TODO: volume id and bucket id generation logic.
+String bucketKey = omMetadataManager.getBucketKey(volumeName, bucketName);
+OmBucketInfo omBucketInfo =
+omMetadataManager.getBucketTable().get(bucketKey);
+inheritAcls = omBucketInfo.getAcls();
+long lastKnownParentId = omBucketInfo.getObjectID();
+OmDirectoryInfo parentDirInfo = null;
+String dbDirName = ""; // absolute path for trace logs
+// for better logging
+StringBuilder fullKeyPath = new StringBuilder(bucketKey);
+while (elements.hasNext()) {
+  String fileName = elements.next().toString();
+  fullKeyPath.append(OzoneConsts.OM_KEY_PREFIX);
+  fullKeyPath.append(fileName);
+  if (missing.size() > 0) {
+// Add all the sub-dirs to the missing list except the leaf element.
+// For example, /vol1/buck1/a/b/c/d/e/f/file1.txt.
+// Assume /vol1/buck1/a/b/c exists, then add d, e, f into missing list.
+if(elements.hasNext()){
+  // skips leaf node.
+  missing.add(fileName);
+}
+continue;
+  }
+
+  // For example, /vol1/buck1/a/b/c/d/e/f/file1.txt
+  // 1. Do lookup on directoryTable. If not exists goto next step.
+  // 2. Do look on keyTable. If not exists goto next step.
+  // 3. Add 'sub-dir' to missing parents list
+  String dbNodeName = omMetadataManager.getOzonePathKey(
+  lastKnownParentId, fileName);
+  OmDirectoryInfo omDirInfo = omMetadataManager.getDirectoryTable().
+  get(dbNodeName);
+  if (omDirInfo != null) {
+dbDirName += omDirInfo.getName() + OzoneConsts.OZONE_URI_DELIMITER;
+if (elements.hasNext()) {
+  result = OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+  lastKnownParentId = omDirInfo.getObjectID();
+  inheritAcls = omDirInfo.getAcls();
+  continue;
+} else {
+  // Checked all the sub-dirs till the leaf node.
+  // Found a directory in the given path.
+  result = OMDirectoryResult.DIRECTORY_EXISTS;
+}
+  } else {
+// Get parentID from the lastKnownParent. For any files, directly under
+// the bucket, the parent is the bucketID. Say, "/vol1/buck1/file1"
+// TODO: Need to add UT for this case along with OMFileCreateRequest.
+if (omMetadataManager.getKeyTable().isExist(dbNodeName)) {
+  if (elements.hasNext()) {
+// Found a file in the given key name.
+result = OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+  } else {
+// Checked all the sub-dirs till the leaf file.
+// Found a file with the given key name.
+result = OMDirectoryResult.FILE_EXISTS;
+  }
+  break; // Skip directory traversal as it hits key.
+}
+
+// Add to missing list, there is no such file/directory with given 
name.
+if (elements.hasNext()) {
+  missing.add(fileName);
+}
+
+String dbDirKeyName = omMetadataManager.getOzoneDirKey(volumeName,
+bucketName, dbDirName);
+LOG.trace("Acls inherited from parent " + dbDirKeyName + " are : "
++ inheritAcls);
+  }
+}
+
+if (result == OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH) {
+  String dbDirKeyName = omMetadataManager.getOzoneDirKey(volumeName,
+  bucketName, dbDirName);
+  LOG.trace("Acls inherited from parent " + dbDirKeyName + " are : "
+  + inheritAcls);
+}
+
+if (result != OMDirectoryResult.NONE) {
+  LOG.trace("verifyFiles in 

[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-29 Thread GitBox


rakeshadr commented on a change in pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#discussion_r496818679



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMFileRequest.java
##
@@ -129,6 +134,131 @@ public static OMPathInfo verifyFilesInPath(
 return new OMPathInfo(missing, OMDirectoryResult.NONE, inheritAcls);
   }
 
+  /**
+   * Verify any dir/key exist in the given path in the specified
+   * volume/bucket by iterating through directory table.
+   *
+   * @param omMetadataManager OM Metadata manager
+   * @param volumeName volume name
+   * @param bucketName bucket name
+   * @param keyName   key name
+   * @param keyPath   path
+   * @return OMPathInfoV1 path info object
+   * @throws IOException on DB failure
+   */
+  public static OMPathInfoV1 verifyDirectoryKeysInPath(
+  @Nonnull OMMetadataManager omMetadataManager,
+  @Nonnull String volumeName,
+  @Nonnull String bucketName, @Nonnull String keyName,
+  @Nonnull Path keyPath) throws IOException {
+
+String leafNodeName = OzoneFSUtils.getFileName(keyName);
+List<String> missing = new ArrayList<>();
+List<OzoneAcl> inheritAcls = new ArrayList<>();
+OMDirectoryResult result = OMDirectoryResult.NONE;
+
+Iterator<Path> elements = keyPath.iterator();
+// TODO: volume id and bucket id generation logic.
+String bucketKey = omMetadataManager.getBucketKey(volumeName, bucketName);
+OmBucketInfo omBucketInfo =
+omMetadataManager.getBucketTable().get(bucketKey);
+inheritAcls = omBucketInfo.getAcls();
+long lastKnownParentId = omBucketInfo.getObjectID();
+OmDirectoryInfo parentDirInfo = null;
+String dbDirName = ""; // absolute path for trace logs
+// for better logging
+StringBuilder fullKeyPath = new StringBuilder(bucketKey);
+while (elements.hasNext()) {
+  String fileName = elements.next().toString();
+  fullKeyPath.append(OzoneConsts.OM_KEY_PREFIX);
+  fullKeyPath.append(fileName);
+  if (missing.size() > 0) {
+// Add all the sub-dirs to the missing list except the leaf element.
+// For example, /vol1/buck1/a/b/c/d/e/f/file1.txt.
+// Assume /vol1/buck1/a/b/c exists, then add d, e, f into missing list.
+if(elements.hasNext()){
+  // skips leaf node.
+  missing.add(fileName);
+}
+continue;
+  }
+
+  // For example, /vol1/buck1/a/b/c/d/e/f/file1.txt
+  // 1. Do lookup on directoryTable. If not exists goto next step.
+  // 2. Do look on keyTable. If not exists goto next step.
+  // 3. Add 'sub-dir' to missing parents list
+  String dbNodeName = omMetadataManager.getOzonePathKey(
+  lastKnownParentId, fileName);
+  OmDirectoryInfo omDirInfo = omMetadataManager.getDirectoryTable().
+  get(dbNodeName);
+  if (omDirInfo != null) {
+dbDirName += omDirInfo.getName() + OzoneConsts.OZONE_URI_DELIMITER;
+if (elements.hasNext()) {
+  result = OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH;
+  lastKnownParentId = omDirInfo.getObjectID();
+  inheritAcls = omDirInfo.getAcls();
+  continue;
+} else {
+  // Checked all the sub-dirs till the leaf node.
+  // Found a directory in the given path.
+  result = OMDirectoryResult.DIRECTORY_EXISTS;
+}
+  } else {
+// Get parentID from the lastKnownParent. For any files, directly under
+// the bucket, the parent is the bucketID. Say, "/vol1/buck1/file1"
+// TODO: Need to add UT for this case along with OMFileCreateRequest.
+if (omMetadataManager.getKeyTable().isExist(dbNodeName)) {
+  if (elements.hasNext()) {
+// Found a file in the given key name.
+result = OMDirectoryResult.FILE_EXISTS_IN_GIVENPATH;
+  } else {
+// Checked all the sub-dirs till the leaf file.
+// Found a file with the given key name.
+result = OMDirectoryResult.FILE_EXISTS;
+  }
+  break; // Skip directory traversal as it hits key.
+}
+
+// Add to missing list, there is no such file/directory with given 
name.
+if (elements.hasNext()) {
+  missing.add(fileName);
+}
+
+String dbDirKeyName = omMetadataManager.getOzoneDirKey(volumeName,
+bucketName, dbDirName);
+LOG.trace("Acls inherited from parent " + dbDirKeyName + " are : "
++ inheritAcls);
+  }
+}
+
+if (result == OMDirectoryResult.DIRECTORY_EXISTS_IN_GIVENPATH) {

Review comment:
   As this is a utility function, I would like to return `fullKeyPath` as well. The caller can decide whether to use it or not.
   
   I will remove the DIRECTORY_EXISTS_IN_GIVENPATH check here. But I would like to return the result with DIRECTORY_EXISTS_IN_GIVENPATH just to make it 

[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-29 Thread GitBox


rakeshadr commented on a change in pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#discussion_r496815568



##
File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/request/file/OMDirectoryCreateRequestV1.java
##
@@ -0,0 +1,323 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import com.google.common.base.Optional;
+import org.apache.commons.lang3.tuple.ImmutablePair;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.OMAction;
+import org.apache.hadoop.ozone.om.OMMetadataManager;
+import org.apache.hadoop.ozone.om.OMMetrics;
+import org.apache.hadoop.ozone.om.OzoneManager;
+import org.apache.hadoop.ozone.om.exceptions.OMException;
+import org.apache.hadoop.ozone.om.helpers.*;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.util.OmResponseUtil;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.om.response.file.OMDirectoryCreateResponseV1;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateDirectoryRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.CreateDirectoryResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.KeyArgs;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMRequest;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.OMResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos
+.Status;
+import org.apache.hadoop.ozone.security.acl.IAccessAuthorizer;
+import org.apache.hadoop.ozone.security.acl.OzoneObj;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.nio.file.Path;
+import java.nio.file.Paths;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+
+import static 
org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.FILE_ALREADY_EXISTS;
+import static 
org.apache.hadoop.ozone.om.exceptions.OMException.ResultCodes.INVALID_KEY_NAME;
+import static 
org.apache.hadoop.ozone.om.lock.OzoneManagerLock.Resource.BUCKET_LOCK;
+import static 
org.apache.hadoop.ozone.om.request.file.OMFileRequest.OMDirectoryResult.*;
+
+/**
+ * Handle create directory request. It will add path components to the 
directory
+ * table and maintains file system semantics.
+ */
+public class OMDirectoryCreateRequestV1 extends OMDirectoryCreateRequest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(OMDirectoryCreateRequestV1.class);
+
+  public OMDirectoryCreateRequestV1(OMRequest omRequest) {
+super(omRequest);
+  }
+
+  @Override
+  public OMClientResponse validateAndUpdateCache(OzoneManager ozoneManager,
+  long trxnLogIndex, OzoneManagerDoubleBufferHelper omDoubleBufferHelper) {
+
+CreateDirectoryRequest createDirectoryRequest = getOmRequest()
+.getCreateDirectoryRequest();
+KeyArgs keyArgs = createDirectoryRequest.getKeyArgs();
+
+String volumeName = keyArgs.getVolumeName();
+String bucketName = keyArgs.getBucketName();
+String keyName = keyArgs.getKeyName();
+int numKeysCreated = 0;
+
+OMResponse.Builder omResponse = OmResponseUtil.getOMResponseBuilder(
+getOmRequest());
+
omResponse.setCreateDirectoryResponse(CreateDirectoryResponse.newBuilder());
+OMMetrics omMetrics = ozoneManager.getMetrics();
+omMetrics.incNumCreateDirectory();
+
+AuditLogger auditLogger = ozoneManager.getAuditLogger();
+OzoneManagerProtocolProtos.UserInfo userInfo = 
getOmRequest().getUserInfo();
+
+Map<String, String> auditMap = buildKeyArgsAuditMap(keyArgs);
+OMMetadataManager omMetadataManager = ozoneManager.getMetadataManager();
+boolean acquiredLock = false;
+IOException exception 

[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-29 Thread GitBox


rakeshadr commented on a change in pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#discussion_r496813934



##
File path: 
hadoop-ozone/ozone-manager/src/test/java/org/apache/hadoop/ozone/om/request/file/TestOMDirectoryCreateRequestV1.java
##
@@ -0,0 +1,484 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.om.request.file;
+
+import com.google.common.base.Optional;
+import org.apache.commons.lang3.RandomStringUtils;
+import org.apache.commons.lang3.tuple.Pair;
+import org.apache.hadoop.hdds.conf.OzoneConfiguration;
+import org.apache.hadoop.hdds.protocol.proto.HddsProtos;
+import org.apache.hadoop.hdds.utils.db.cache.CacheKey;
+import org.apache.hadoop.hdds.utils.db.cache.CacheValue;
+import org.apache.hadoop.ozone.audit.AuditLogger;
+import org.apache.hadoop.ozone.audit.AuditMessage;
+import org.apache.hadoop.ozone.om.*;
+import org.apache.hadoop.ozone.om.helpers.OmBucketInfo;
+import org.apache.hadoop.ozone.om.helpers.OmDirectoryInfo;
+import org.apache.hadoop.ozone.om.helpers.OmKeyInfo;
+import org.apache.hadoop.ozone.om.helpers.OzoneFSUtils;
+import org.apache.hadoop.ozone.om.ratis.utils.OzoneManagerDoubleBufferHelper;
+import org.apache.hadoop.ozone.om.request.OMClientRequest;
+import org.apache.hadoop.ozone.om.request.TestOMRequestUtils;
+import org.apache.hadoop.ozone.om.response.OMClientResponse;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.CreateDirectoryRequest;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.KeyArgs;
+import 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.OMRequest;
+import org.jetbrains.annotations.NotNull;
+import org.junit.*;
+import org.junit.rules.TemporaryFolder;
+import org.mockito.Mockito;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.UUID;
+
+import static 
org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos.Status.VOLUME_NOT_FOUND;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.Mockito.when;
+
+/**
+ * Test OM directory create request V1 layout version.
+ */
+public class TestOMDirectoryCreateRequestV1 {
+
+  @Rule
+  public TemporaryFolder folder = new TemporaryFolder();
+
+  private OzoneManager ozoneManager;
+  private OMMetrics omMetrics;
+  private OMMetadataManager omMetadataManager;
+  private AuditLogger auditLogger;
+  // Just setting ozoneManagerDoubleBuffer which does nothing.
+  private OzoneManagerDoubleBufferHelper ozoneManagerDoubleBufferHelper =
+  ((response, transactionIndex) -> {
+return null;
+  });
+
+  @Before
+  public void setup() throws Exception {
+ozoneManager = Mockito.mock(OzoneManager.class);
+omMetrics = OMMetrics.create();
+OzoneConfiguration ozoneConfiguration = new OzoneConfiguration();
+ozoneConfiguration.set(OMConfigKeys.OZONE_OM_DB_DIRS,
+folder.newFolder().getAbsolutePath());
+omMetadataManager = new OmMetadataManagerImpl(ozoneConfiguration);
+when(ozoneManager.getMetrics()).thenReturn(omMetrics);
+when(ozoneManager.getMetadataManager()).thenReturn(omMetadataManager);
+auditLogger = Mockito.mock(AuditLogger.class);
+when(ozoneManager.getAuditLogger()).thenReturn(auditLogger);
+Mockito.doNothing().when(auditLogger).logWrite(any(AuditMessage.class));
+when(ozoneManager.resolveBucketLink(any(KeyArgs.class),
+any(OMClientRequest.class)))
+.thenReturn(new ResolvedBucket(Pair.of("", ""), Pair.of("", "")));
+  }
+
+  @After
+  public void stop() {
+omMetrics.unRegister();
+Mockito.framework().clearInlineMocks();
+  }
+
+  @Test
+  public void testPreExecute() throws Exception {
+String volumeName = "vol1";
+String bucketName = "bucket1";
+String keyName = "a/b/c";
+
+TestOMRequestUtils.addVolumeAndBucketToDB(volumeName, bucketName,
+omMetadataManager);
+
+OMRequest omRequest = createDirectoryRequest(volumeName, bucketName,
+keyName);
+OMDirectoryCreateRequestV1 omDirectoryCreateRequestV1 =
+

[GitHub] [hadoop-ozone] rakeshadr commented on a change in pull request #1404: HDDS-2949: mkdir : store directory entries in a separate table

2020-09-29 Thread GitBox


rakeshadr commented on a change in pull request #1404:
URL: https://github.com/apache/hadoop-ozone/pull/1404#discussion_r496812987



##
File path: 
hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/helpers/OmDirectoryInfo.java
##
@@ -0,0 +1,266 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ * 
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.ozone.om.helpers;
+
+import org.apache.hadoop.ozone.OzoneAcl;
+import org.apache.hadoop.ozone.OzoneConsts;
+import org.apache.hadoop.ozone.protocol.proto.OzoneManagerProtocolProtos;
+
+import java.util.*;
+
+/**
+ * This class represents the directory information by keeping each component
+ * in the user given path and a pointer to its parent directory element in the
+ * path. Also, it stores directory node related metadata details.
+ */
+public class OmDirectoryInfo extends WithObjectID {
+  private long parentObjectID; // pointer to parent directory
+
+  private String name; // directory name
+
+  private long creationTime;
+  private long modificationTime;
+
+  private List<OzoneAcl> acls;
+
+  public OmDirectoryInfo(Builder builder) {
+this.name = builder.name;
+this.acls = builder.acls;
+this.metadata = builder.metadata;
+this.objectID = builder.objectID;
+this.updateID = builder.updateID;
+this.parentObjectID = builder.parentObjectID;
+this.creationTime = builder.creationTime;
+this.modificationTime = builder.modificationTime;
+  }
+
+  /**
+   * Returns new builder class that builds a OmPrefixInfo.
+   *
+   * @return Builder
+   */
+  public static OmDirectoryInfo.Builder newBuilder() {
+return new OmDirectoryInfo.Builder();
+  }
+
+  /**
+   * Builder for Directory Info.
+   */
+  public static class Builder {
+private long parentObjectID; // pointer to parent directory
+
+private long objectID;
+private long updateID;
+
+private String name;
+
+private long creationTime;
+private long modificationTime;
+
+private List<OzoneAcl> acls;
+private Map<String, String> metadata;
+
+public Builder() {
+  //Default values
+  this.acls = new LinkedList<>();
+  this.metadata = new HashMap<>();
+}
+
+public Builder setParentObjectID(long parentObjectId) {
+  this.parentObjectID = parentObjectId;
+  return this;
+}
+
+public Builder setObjectID(long objectId) {
+  this.objectID = objectId;
+  return this;
+}
+
+public Builder setUpdateID(long updateId) {
+  this.updateID = updateId;
+  return this;
+}
+
+public Builder setName(String dirName) {
+  this.name = dirName;
+  return this;
+}
+
+public Builder setCreationTime(long newCreationTime) {
+  this.creationTime = newCreationTime;
+  return this;
+}
+
+public Builder setModificationTime(long newModificationTime) {
+  this.modificationTime = newModificationTime;
+  return this;
+}
+
+public Builder setAcls(List<OzoneAcl> listOfAcls) {
+  if (listOfAcls != null) {
+this.acls.addAll(listOfAcls);
+  }
+  return this;
+}
+
+public Builder addAcl(OzoneAcl ozoneAcl) {
+  if (ozoneAcl != null) {
+this.acls.add(ozoneAcl);
+  }
+  return this;
+}
+
+public Builder addMetadata(String key, String value) {
+  metadata.put(key, value);
+  return this;
+}
+
+public Builder addAllMetadata(Map<String, String> additionalMetadata) {
+  if (additionalMetadata != null) {
+metadata.putAll(additionalMetadata);
+  }
+  return this;
+}
+
+public OmDirectoryInfo build() {
+  return new OmDirectoryInfo(this);
+}
+  }
+
+  @Override
+  public String toString() {
+return getObjectID() + ":" + getName();

Review comment:
   Will this get confused with the value returned by getPath()? That returns `getParentObjectID() + OzoneConsts.OM_KEY_PREFIX + getName()`.
   
   I don't have a strong opinion. If you prefer, I can make toString() also return `getObjectID() + OzoneConsts.OM_KEY_PREFIX + getName()`.
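
   A minimal, self-contained sketch of the two string forms under discussion, assuming only the fields shown in the quoted builder; the class name and sample IDs are invented for illustration (OzoneConsts.OM_KEY_PREFIX is "/"):

   ```
   // Illustrative stand-in for the quoted OmDirectoryInfo, not the real class.
   public class DirInfoStrings {

     private static final String OM_KEY_PREFIX = "/"; // as in OzoneConsts

     private final long parentObjectID = 1024L; // assumed sample IDs
     private final long objectID = 2048L;
     private final String name = "dir1";

     // DB path form: parent pointer + name.
     public String getPath() {
       return parentObjectID + OM_KEY_PREFIX + name; // "1024/dir1"
     }

     // toString() as written in the diff: own object ID + ":" + name.
     @Override
     public String toString() {
       return objectID + ":" + name; // "2048:dir1"
     }

     public static void main(String[] args) {
       DirInfoStrings d = new DirInfoStrings();
       // The two forms stay visually distinct, which is the question above.
       System.out.println(d.getPath() + " vs " + d);
     }
   }
   ```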





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this 

[jira] [Updated] (HDDS-4285) Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4285:
-
Labels: pull-request-available  (was: )

> Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()
> --
>
> Key: HDDS-4285
> URL: https://issues.apache.org/jira/browse/HDDS-4285
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2020-09-28-16-19-17-581.png, 
> profile-20200928-161631-180518.svg
>
>
> The Ozone read operation turned out to be slow, mainly because we do a new
> UGI.getCurrentUser() call for the block token on each request.
> We need to cache the block token / the UGI.getCurrentUser() call to make it faster.
>  !image-2020-09-28-16-19-17-581.png! 
> To reproduce:
> Checkout: https://github.com/elek/hadoop-ozone/tree/mocked-read
> {code}
> cd hadoop-ozone/client
> export 
> MAVEN_OPTS=-agentpath:/home/elek/prog/async-profiler/build/libasyncProfiler.so=start,file=/tmp/profile-%t-%p.svg
> mvn compile exec:java 
> -Dexec.mainClass=org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit 
> -Dexec.classpathScope=test
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai opened a new pull request #1454: HDDS-4285. Read is slow due to the frequent usage of UGI.getCurrentUser Call()

2020-09-29 Thread GitBox


adoroszlai opened a new pull request #1454:
URL: https://github.com/apache/hadoop-ozone/pull/1454


   ## What changes were proposed in this pull request?
   
   Reduce the number of `getCurrentUser()` and `getTokens()` calls performed 
during some `ContainerProtocolCalls` operations.  This is achieved by getting 
the tokens once in `BlockInputStream` and `BlockOutputStream` initialization, 
and passing them to `getBlock`, `putBlock`, `readChunk`, `writeChunk` calls.
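
   A minimal sketch of the caching pattern, under stated assumptions: the class and helper names below only echo the PR description, and the signatures are illustrative rather than the actual `BlockInputStream`/`ContainerProtocolCalls` API:

   ```
   // Illustrative sketch: fetch the token once at construction and reuse it
   // per call, instead of UGI.getCurrentUser()/getTokens() on every read.
   class CachedTokenBlockReader {

     private final String token; // stand-in for Token<OzoneBlockTokenIdentifier>

     CachedTokenBlockReader() {
       // Done once per stream, not once per readChunk()/getBlock() call.
       this.token = fetchBlockTokenForCurrentUser();
     }

     byte[] readChunk(long offset) {
       // Hot path reuses the cached token; no UGI lookup here.
       return readChunkWithToken(offset, token);
     }

     // Hypothetical helpers, assumed only for this sketch.
     private static String fetchBlockTokenForCurrentUser() {
       return "block-token"; // would consult the UGI credentials once
     }

     private static byte[] readChunkWithToken(long offset, String token) {
       return new byte[0]; // would issue the datanode RPC with the token
     }
   }
   ```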
   
   https://issues.apache.org/jira/browse/HDDS-4285
   
   ## How was this patch tested?
   
   Verified that time required for the repro unit test is improved.
   
   Without the patch:
   
   ```
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
112.829 s - in org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit
   ```
   
   With the patch:
   
   ```
   [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
30.293 s - in org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit
   ```
   
   Regular CI:
   https://github.com/adoroszlai/hadoop-ozone/runs/1181129921



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4285) Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()

2020-09-29 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4285:
---
Summary: Read is slow due to frequent calls to UGI.getCurrentUser() and 
getTokens()  (was: Read is slow due to the frequent usage of 
UGI.getCurrentUserCall())

> Read is slow due to frequent calls to UGI.getCurrentUser() and getTokens()
> --
>
> Key: HDDS-4285
> URL: https://issues.apache.org/jira/browse/HDDS-4285
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
> Attachments: image-2020-09-28-16-19-17-581.png, 
> profile-20200928-161631-180518.svg
>
>
> The Ozone read operation turned out to be slow, mainly because we do a new
> UGI.getCurrentUser() call for the block token on each request.
> We need to cache the block token / the UGI.getCurrentUser() call to make it faster.
>  !image-2020-09-28-16-19-17-581.png! 
> To reproduce:
> Checkout: https://github.com/elek/hadoop-ozone/tree/mocked-read
> {code}
> cd hadoop-ozone/client
> export 
> MAVEN_OPTS=-agentpath:/home/elek/prog/async-profiler/build/libasyncProfiler.so=start,file=/tmp/profile-%t-%p.svg
> mvn compile exec:java 
> -Dexec.mainClass=org.apache.hadoop.ozone.client.io.TestKeyOutputStreamUnit 
> -Dexec.classpathScope=test
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4291) "GDPR Compliance" Feature Should Be Renamed

2020-09-29 Thread Michael O'Kane (Jira)
Michael O'Kane created HDDS-4291:


 Summary: "GDPR Compliance" Feature Should Be Renamed
 Key: HDDS-4291
 URL: https://issues.apache.org/jira/browse/HDDS-4291
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Michael O'Kane


Under HDDS-2012 a feature was added to Ozone that implemented transparent, 
per-block encryption, facilitating secure, synchronous cryptographic erasure 
of data blocks.

This feature has been billed as "GDPR compliance", both in documentation and in 
the flags employed to enable the mode. This terminology should be strictly 
avoided for a number of reasons:
 * Data disposal mechanisms are but a tiny part of a GDPR compliance picture. 
GDPR is a complex regulation that principally concerns itself with 
organisational measures such as impact assessments, collection justification 
and privacy-by-design.
 * Specifically in this case there is nothing within the text of GDPR that 
precludes the use of soft deletions/tombstones/garbage collection mechanisms 
for disposal of PII. The right to erasure text was specifically crafted to use 
the term "_undue_ delay" - this does not mean as quickly as is physically 
possible.

As such there is a significant risk of the feature misleading users into 
believing its application is necessary for GDPR compliance (it isn't) or that 
it will make their data storage GDPR compliant (it won't).

 

The feature should be renamed to something more accurate, such as Strict 
Deletion Mode or Secure Deletion Mode.

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] captainzmc commented on pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-29 Thread GitBox


captainzmc commented on pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#issuecomment-700751283


   Thanks for @adoroszlai’s review. I have fixed the review issues; could you take another look? [CI passes on my personal branch.](https://github.com/captainzmc/hadoop-ozone/runs/1182140007)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4290) Enable insight point for SCM heartbeat protocol

2020-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4290:
-
Labels: pull-request-available  (was: )

> Enable insight point for SCM heartbeat protocol
> ---
>
> Key: HDDS-4290
> URL: https://issues.apache.org/jira/browse/HDDS-4290
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Marton Elek
>Assignee: Marton Elek
>Priority: Major
>  Labels: pull-request-available
>
> The registration of the already implemented insight point seems to be missing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4184) Add Features menu for Chinese document.

2020-09-29 Thread Zheng Huang-Mu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Huang-Mu updated HDDS-4184:
-
Labels: newbie  (was: )

> Add Features menu for Chinese document.
> ---
>
> Key: HDDS-4184
> URL: https://issues.apache.org/jira/browse/HDDS-4184
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Zheng Huang-Mu
>Priority: Minor
>  Labels: newbie
> Attachments: image-2020-09-01-14-24-44-703.png
>
>
> In the English document, there is a *Features* menu, and *GDPR* is a *Features* 
> submenu.
>  So we can add a *Features* menu and make *GDPR* a *Features* submenu in the 
> Chinese document.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4258) Set GDPR to a Security submenu in EN and CN document.

2020-09-29 Thread Zheng Huang-Mu (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Huang-Mu updated HDDS-4258:
-
Labels: newbie  (was: )

> Set GDPR to a Security submenu in EN and CN document.
> -
>
> Key: HDDS-4258
> URL: https://issues.apache.org/jira/browse/HDDS-4258
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Zheng Huang-Mu
>Priority: Minor
>  Labels: newbie
>
> Based on [~xyao]'s comment on HDDS-4156.
> https://github.com/apache/hadoop-ozone/pull/1368#issuecomment-694532324
> Set GDPR to a Security submenu in EN and CN document.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek opened a new pull request #1453: HDDS-4290. Enable insight point for SCM heartbeat protocol

2020-09-29 Thread GitBox


elek opened a new pull request #1453:
URL: https://github.com/apache/hadoop-ozone/pull/1453


   ## What changes were proposed in this pull request?
   
   The registration of the already implemented insight-point seems to be 
missing.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4290
   
   ## How was this patch tested?
   
   ```
   ozone insight logs -v scm.protocol.heartbeat
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4290) Enable insight point for SCM heartbeat protocol

2020-09-29 Thread Marton Elek (Jira)
Marton Elek created HDDS-4290:
-

 Summary: Enable insight point for SCM heartbeat protocol
 Key: HDDS-4290
 URL: https://issues.apache.org/jira/browse/HDDS-4290
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Marton Elek
Assignee: Marton Elek


The registration of the already implemented insight point seems to be missing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4288) the icon of hadoop-ozone is bigger than ever

2020-09-29 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-4288:
-
Labels: pull-request-available  (was: )

> the icon of hadoop-ozone is bigger than ever
> 
>
> Key: HDDS-4288
> URL: https://issues.apache.org/jira/browse/HDDS-4288
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.0
> Environment: web : chrome /firefox /safari
>Reporter: Shiyou xin
>Assignee: Marton Elek
>Priority: Trivial
>  Labels: pull-request-available
> Attachments: 1751601366944_.pic.jpg
>
>
> It could be a by-product of the introduction of the issue: 
> https://issues.apache.org/jira/browse/HDDS-4166



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek opened a new pull request #1452: HDDS-4288. the icon of hadoop-ozone is bigger than ever

2020-09-29 Thread GitBox


elek opened a new pull request #1452:
URL: https://github.com/apache/hadoop-ozone/pull/1452


   ## What changes were proposed in this pull request?
   
   Logo is too big on doc snapshot (generated by the Jenkins):
   
   
![image](https://user-images.githubusercontent.com/170549/94560414-eb243000-0262-11eb-9667-1652787200f7.png)
   
   INFRA has migrated to a new Jenkins, and the new Jenkins adds more secure 
HTTP headers:
   
   ```
   < Content-Security-Policy: sandbox; default-src 'none'; img-src 'self'; 
style-src 'self';
   < X-WebKit-CSP: sandbox; default-src 'none'; img-src 'self'; style-src 
'self';
   ```
   
   IMHO the inline styles used in the current code are disabled by this policy:
   
   ```
   
   ```
   
   While it's not a production issue, we can move the custom styles to the CSS 
to make it compatible with Jenkins.
   
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-4288
   
   ## How was this patch tested?
   
 1. If you remove the inline `style` attributes and do a `hugo serve` in 
`hadoop-hdds/docs`, you can see that the logo is too big.
 2. When you apply the patch (CSS-based styles), the logo is fine again.
   
   It is supposed to be compatible with Jenkins, as all the other CSS-based 
styles are working.
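
   A quick way to confirm which headers the server actually sends, sketched with the plain Java 11+ HTTP client; the URL is a placeholder, not the actual Jenkins address:

   ```
   import java.net.URI;
   import java.net.http.HttpClient;
   import java.net.http.HttpRequest;
   import java.net.http.HttpResponse;

   // Prints the Content-Security-Policy header of a URL, so the
   // "sandbox; ... style-src 'self'" policy above can be verified.
   public class CspCheck {
     public static void main(String[] args) throws Exception {
       String url = args.length > 0 ? args[0] : "https://example.org/"; // placeholder
       HttpRequest request = HttpRequest.newBuilder(URI.create(url))
           .method("HEAD", HttpRequest.BodyPublishers.noBody())
           .build();
       HttpResponse<Void> response = HttpClient.newHttpClient()
           .send(request, HttpResponse.BodyHandlers.discarding());
       System.out.println(response.headers()
           .firstValue("Content-Security-Policy")
           .orElse("(no Content-Security-Policy header)"));
     }
   }
   ```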



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-4288) the icon of hadoop-ozone is bigger than ever

2020-09-29 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17203905#comment-17203905
 ] 

Marton Elek commented on HDDS-4288:
---

Thanks for reporting this issue. I think it's not related to HDDS-4166, but to 
the Jenkins migration of Apache INFRA. The new Jenkins adds more secure 
HTTP headers:

{code}
< Content-Security-Policy: sandbox; default-src 'none'; img-src 'self'; 
style-src 'self';
< X-WebKit-CSP: sandbox; default-src 'none'; img-src 'self'; style-src 'self';
{code}

IMHO the inline styles used in the current code are disabled by this policy:

{code}

{code}

While it's not a production issue, we can move the custom styles to the CSS to 
make it compatible with Jenkins.

> the icon of hadoop-ozone is bigger than ever
> 
>
> Key: HDDS-4288
> URL: https://issues.apache.org/jira/browse/HDDS-4288
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.0
> Environment: web : chrome /firefox /safari
>Reporter: Shiyou xin
>Assignee: Marton Elek
>Priority: Trivial
> Attachments: 1751601366944_.pic.jpg
>
>
> It could be a by-product of the changes introduced by: 
> https://issues.apache.org/jira/browse/HDDS-4166



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] runzhiwang commented on a change in pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-29 Thread GitBox


runzhiwang commented on a change in pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#discussion_r496643970



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
##
@@ -619,9 +619,9 @@ private void finalizePipeline(PipelineID pipelineId) throws 
IOException {
* @throws IOException
*/
   protected void destroyPipeline(Pipeline pipeline) throws IOException {
-pipelineFactory.close(pipeline.getType(), pipeline);
 // remove the pipeline from the pipeline manager
 removePipeline(pipeline.getId());
+pipelineFactory.close(pipeline.getType(), pipeline);

Review comment:
   If this is not changed, the order may be: 
pipelineFactory.close(pipeline.getType(), pipeline) -> datanode closes pipeline 
-> SCM creates new pipeline -> removePipeline(pipeline.getId()), and then the 
leader distribution will not be balanced.
   For example, on servers S1, S2, S3 there are 3 pipelines P1, P2, P3 with 
leaders S1, S2, S3:
   S1 .. S2 .. S3
   P1 .. P2 .. P3
   
   If we close P3, the order may be: pipelineFactory.close(pipeline.getType(), P3) 
-> datanode closes P3 -> SCM creates new pipeline P4 -> removePipeline(P3). When 
creating the new pipeline P4, because P3 has not been removed yet, P4 will choose 
S1 as the leader; finally S1 has 2 leaders, but S3 has no leader:
   S1 .. S2 .. S3
   P1 .. P2
   P4
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-4288) the icon of hadoop-ozone is bigger than ever

2020-09-29 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek reassigned HDDS-4288:
-

Assignee: Marton Elek

> the icon of hadoop-ozone is bigger than ever
> 
>
> Key: HDDS-4288
> URL: https://issues.apache.org/jira/browse/HDDS-4288
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.0
> Environment: web : chrome /firefox /safari
>Reporter: Shiyou xin
>Assignee: Marton Elek
>Priority: Trivial
> Attachments: 1751601366944_.pic.jpg
>
>
> It could be a by-product of the changes introduced by: 
> https://issues.apache.org/jira/browse/HDDS-4166



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4215) update freon doc.

2020-09-29 Thread Marton Elek (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marton Elek resolved HDDS-4215.
---
Target Version/s: 1.1.0
  Resolution: Fixed

> update freon doc.
> -
>
> Key: HDDS-4215
> URL: https://issues.apache.org/jira/browse/HDDS-4215
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: mingchao zhao
>Assignee: mingchao zhao
>Priority: Major
>  Labels: pull-request-available
>
> At present, the link to the Freon introduction document is 0.4.0, and now 1.0 
> has been released and the URL needs to be updated to 1.0



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] elek merged pull request #1403: HDDS-4215. update freon doc.

2020-09-29 Thread GitBox


elek merged pull request #1403:
URL: https://github.com/apache/hadoop-ozone/pull/1403


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] captainzmc commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-29 Thread GitBox


captainzmc commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r496662068



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -441,6 +431,8 @@ public void createBucket(
 verifyVolumeName(volumeName);
 verifyBucketName(bucketName);
 Preconditions.checkNotNull(bucketArgs);
+verifyCountsQuota(bucketArgs.getQuotaInCounts());
+verifySpaceQuota(bucketArgs.getQuotaInBytes());

Review comment:
   The createVolume PR did not add this check before; I will add it 
in createVolume.
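   
   A minimal sketch of the kind of check being discussed (semantics assumed: 
   QUOTA_RESET is a sentinel for "quota not set", anything else must be 
   positive); the real verifyCountsQuota/verifySpaceQuota in RpcClient may 
   differ in detail.
   
   ```
   final class QuotaCheckSketch {
     static final long QUOTA_RESET = -1;  // assumed sentinel value
   
     static void verifyCountsQuota(long quotaInCounts) {
       // Allow the reset marker or a positive quota; reject everything else.
       if (quotaInCounts != QUOTA_RESET && quotaInCounts <= 0) {
         throw new IllegalArgumentException(
             "Invalid count quota: " + quotaInCounts);
       }
     }
   }
   ```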





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] runzhiwang commented on a change in pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-29 Thread GitBox


runzhiwang commented on a change in pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#discussion_r496627193



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
##
@@ -121,18 +158,24 @@ public Pipeline create(ReplicationFactor factor) throws 
IOException {
   throw new IllegalStateException("Unknown factor: " + factor.name());
 }
 
+DatanodeDetails suggestedLeader = leaderChoosePolicy.chooseLeader(

Review comment:
   LeaderChoosePolicy is an interface; defining members in an interface is not 
common.
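   
   A minimal, self-contained sketch of the pattern under discussion (types are 
   simplified stand-ins, not the actual Ozone classes): the interface stays 
   stateless, and an implementation such as MinLeaderCountChoosePolicy keeps 
   whatever state it needs as fields of the implementing class.
   
   ```
   import java.util.Comparator;
   import java.util.List;
   import java.util.Map;
   
   interface LeaderChoosePolicy {
     String chooseLeader(List<String> datanodes);
   }
   
   class MinLeaderCountChoosePolicy implements LeaderChoosePolicy {
     // State lives in the implementation, not in the interface.
     private final Map<String, Integer> leaderCounts;
   
     MinLeaderCountChoosePolicy(Map<String, Integer> leaderCounts) {
       this.leaderCounts = leaderCounts;
     }
   
     @Override
     public String chooseLeader(List<String> datanodes) {
       // Pick the datanode that currently leads the fewest pipelines.
       return datanodes.stream()
           .min(Comparator.comparingInt(
               (String dn) -> leaderCounts.getOrDefault(dn, 0)))
           .orElseThrow(IllegalStateException::new);
     }
   }
   ```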





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi merged pull request #1338: HDDS-4023. Delete closed container after all blocks have been deleted.

2020-09-29 Thread GitBox


ChenSammi merged pull request #1338:
URL: https://github.com/apache/hadoop-ozone/pull/1338


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] runzhiwang commented on a change in pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-29 Thread GitBox


runzhiwang commented on a change in pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#discussion_r496643970



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
##
@@ -619,9 +619,9 @@ private void finalizePipeline(PipelineID pipelineId) throws 
IOException {
* @throws IOException
*/
   protected void destroyPipeline(Pipeline pipeline) throws IOException {
-pipelineFactory.close(pipeline.getType(), pipeline);
 // remove the pipeline from the pipeline manager
 removePipeline(pipeline.getId());
+pipelineFactory.close(pipeline.getType(), pipeline);

Review comment:
   If did not change,  the order maybe: 
pipelineFactory.close(pipeline.getType(), pipeline) -> datanode close pipeline 
-> scm create new pipeline -> removePipeline(pipeline.getId()), then leader 
distribution will not be balance.
   For example, on server S1, S3, S3, there are 3 pipelines: P1, P2, P3 with 
leader: S1, S2, S3.
   S1 .. S2 .. S3
   P1 .. P2 .. P3
   
   If close P3, and order maybe: pipelineFactory.close(pipeline.getType(), P3) 
-> datanode close P3 -> scm create new pipeline P4 -> removePipeline(P3). When 
create new pipeline P4, because P3 has not been removed, P4 will choose S1 as 
the leader, then S1 has 2 leaders, but S3 has no leader.
   S1 .. S2 .. S3
   P1 .. P2
   P4
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] ChenSammi commented on pull request #1338: HDDS-4023. Delete closed container after all blocks have been deleted.

2020-09-29 Thread GitBox


ChenSammi commented on pull request #1338:
URL: https://github.com/apache/hadoop-ozone/pull/1338#issuecomment-700642125


   Thanks @sodonnel  and @linyiqun for the review. 
   
   Basically, I think the report handler is not a good place to handle the whole 
empty container deletion process.  It can tell which container is empty, but it 
lacks the facilities of ReplicationManager, such as inflightDeletion, sending 
commands to extra replicas for a DELETING state container, or resending commands.  
In the future, when container compaction is considered, we can move this 
container deletion logic there as well. 
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4232) Use single thread for KeyDeletingService

2020-09-29 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4232:
---
Labels:   (was: pull-request-available)

> Use single thread for KeyDeletingService
> 
>
> Key: HDDS-4232
> URL: https://issues.apache.org/jira/browse/HDDS-4232
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 1.1.0
>
>
> KeyDeletingService scans the keys from a particular RocksDB table and sends 
> deletion requests to SCM. Every thread would scan the table and send deletion 
> requests. This can lead to multiple deletion requests for a particular block. 
> There is currently no way to distribute the keys to be deleted amongst 
> multiple threads.
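> 
> A rough, self-contained sketch of the single-thread idea (helper names are 
> hypothetical, not the real KeyDeletingService API): with one scanning thread 
> there is only one pass over the deleted-key table at a time, so the same 
> block cannot be sent to SCM by two concurrent tasks.
> {code}
> import java.util.Collections;
> import java.util.List;
> import java.util.concurrent.Executors;
> import java.util.concurrent.ScheduledExecutorService;
> import java.util.concurrent.TimeUnit;
> 
> class SingleThreadDeletionSketch {
>   private final ScheduledExecutorService exec =
>       Executors.newSingleThreadScheduledExecutor();
> 
>   void start(long intervalSeconds) {
>     // scheduleWithFixedDelay guarantees the next scan starts only after
>     // the previous one has finished.
>     exec.scheduleWithFixedDelay(() -> {
>       List<String> keys = scanDeletedKeyTable();   // hypothetical helper
>       sendDeletionRequestsToScm(keys);             // hypothetical helper
>     }, 0, intervalSeconds, TimeUnit.SECONDS);
>   }
> 
>   private List<String> scanDeletedKeyTable() {
>     return Collections.emptyList();
>   }
> 
>   private void sendDeletionRequestsToScm(List<String> keys) { }
> }
> {code}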



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDDS-4232) Use single thread for KeyDeletingService

2020-09-29 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai resolved HDDS-4232.

Fix Version/s: 1.1.0
   Resolution: Fixed

> Use single thread for KeyDeletingService
> 
>
> Key: HDDS-4232
> URL: https://issues.apache.org/jira/browse/HDDS-4232
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> KeyDeletingService scans the keys from a particular RocksDB table and sends 
> deletion requests to SCM. Every thread would scan the table and send deletion 
> requests. This can lead to multiple deletion requests for a particular block. 
> There is currently no way to distribute the keys to be deleted amongst 
> multiple threads.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai merged pull request #1415: HDDS-4232. Use single thread for KeyDeletingService.

2020-09-29 Thread GitBox


adoroszlai merged pull request #1415:
URL: https://github.com/apache/hadoop-ozone/pull/1415


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1415: HDDS-4232. Use single thread for KeyDeletingService.

2020-09-29 Thread GitBox


adoroszlai commented on pull request #1415:
URL: https://github.com/apache/hadoop-ozone/pull/1415#issuecomment-700638453


   Thanks @lokeshj1703 for the fix and @amaliujia and @bharatviswa504 for the 
review.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #1407: HDDS-4158. Provide a class type for Java based configuration

2020-09-29 Thread GitBox


adoroszlai commented on a change in pull request #1407:
URL: https://github.com/apache/hadoop-ozone/pull/1407#discussion_r496629283



##
File path: 
hadoop-hdds/common/src/test/java/org/apache/hadoop/hdds/conf/SimpleConfiguration.java
##
@@ -93,4 +98,12 @@ public int getPort() {
   public long getWaitTime() {
 return waitTime;
   }
+
+  public Class getMyClass() {

Review comment:
   Nit: this can be simplified to:
   
   ```suggestion
 public Class getMyClass() {
   ```





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] runzhiwang commented on a change in pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-29 Thread GitBox


runzhiwang commented on a change in pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#discussion_r496628424



##
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CreatePipelineCommand.java
##
@@ -48,16 +52,37 @@ public CreatePipelineCommand(final PipelineID pipelineID,
 this.factor = factor;
 this.type = type;
 this.nodelist = datanodeList;
+if (datanodeList.size() ==
+XceiverServerRatis.DEFAULT_PRIORITY_LIST.size()) {

Review comment:
   Always creating a new list is a waste.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] runzhiwang commented on a change in pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-29 Thread GitBox


runzhiwang commented on a change in pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#discussion_r496628148



##
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
##
@@ -115,6 +116,9 @@
   private static final Logger LOG = LoggerFactory
   .getLogger(XceiverServerRatis.class);
   private static final AtomicLong CALL_ID_COUNTER = new AtomicLong();
+  public static final List DEFAULT_PRIORITY_LIST =

Review comment:
   Why always create a new list? The size of DEFAULT_PRIORITY_LIST is 3, and in 
most cases the size of the datanode list is also 3, so their sizes are equal.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] runzhiwang commented on a change in pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-29 Thread GitBox


runzhiwang commented on a change in pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#discussion_r496624353



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateManager.java
##
@@ -59,7 +59,8 @@ void addContainerToPipeline(PipelineID pipelineId, 
ContainerID containerID)
 pipelineStateMap.addContainerToPipeline(pipelineId, containerID);
   }
 
-  Pipeline getPipeline(PipelineID pipelineID) throws PipelineNotFoundException 
{
+  public Pipeline getPipeline(PipelineID pipelineID)

Review comment:
   This is necessary; otherwise we cannot call `getPipeline` in 
`MinLeaderCountChoosePolicy`.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #1412: HDDS-3751. Ozone sh client support bucket quota option.

2020-09-29 Thread GitBox


adoroszlai commented on a change in pull request #1412:
URL: https://github.com/apache/hadoop-ozone/pull/1412#discussion_r496617299



##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -441,6 +431,8 @@ public void createBucket(
 verifyVolumeName(volumeName);
 verifyBucketName(bucketName);
 Preconditions.checkNotNull(bucketArgs);
+verifyCountsQuota(bucketArgs.getQuotaInCounts());
+verifySpaceQuota(bucketArgs.getQuotaInBytes());

Review comment:
   `createBucket` verifies quota args, but `createVolume` does not.  Is 
this intentional?

##
File path: 
hadoop-ozone/client/src/main/java/org/apache/hadoop/ozone/client/rpc/RpcClient.java
##
@@ -305,8 +300,8 @@ public void createVolume(String volumeName, VolumeArgs 
volArgs)
 builder.setVolume(volumeName);
 builder.setAdminName(admin);
 builder.setOwnerName(owner);
-builder.setQuotaInBytes(quotaInBytes);
-builder.setQuotaInCounts(quotaInCounts);
+builder.setQuotaInBytes(getQuotaValue(volArgs.getQuotaInBytes()));
+builder.setQuotaInCounts(getQuotaValue(volArgs.getQuotaInCounts()));

Review comment:
   Arguments are already checked; why change from using the variables to 
`getQuotaValue`?
   
   ```suggestion
   builder.setQuotaInBytes(quotaInBytes);
   builder.setQuotaInCounts(quotaInCounts);
   ```

##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/ClearQuotaHandler.java
##
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *  http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.shell.bucket;
+
+import org.apache.hadoop.ozone.client.OzoneBucket;
+import org.apache.hadoop.ozone.client.OzoneClient;
+import org.apache.hadoop.ozone.shell.OzoneAddress;
+import org.apache.hadoop.ozone.shell.ClearSpaceQuotaOptions;
+import picocli.CommandLine;
+import picocli.CommandLine.Command;
+
+import java.io.IOException;
+
+/**
+ * clean quota of the bucket.
+ */
+@Command(name = "clrquota",
+description = "clear quota of the bucket")
+public class ClearQuotaHandler extends BucketHandler {
+
+  @CommandLine.Mixin
+  private ClearSpaceQuotaOptions clrSpaceQuota;
+
+  @CommandLine.Option(names = {"--key-quota"},
+  description = "clear count quota")
+  private boolean clrKeyQuota;

Review comment:
   By using a bit more generic option name `--count-quota`, this could be 
moved into `ClearSpaceQuotaOptions` (and unified with volume's `--bucket-quota` 
option).

##
File path: 
hadoop-ozone/tools/src/main/java/org/apache/hadoop/ozone/shell/bucket/CreateBucketHandler.java
##
@@ -46,6 +49,13 @@
   "false/unspecified indicates otherwise")
   private Boolean isGdprEnforced;
 
+  @CommandLine.Mixin
+  private SetSpaceQuotaOptions quotaOptions;
+
+  @Option(names = {"--key-quota"},
+  description = "Key counts of the newly created bucket (eg. 5)")
+  private long quotaInCounts = OzoneConsts.QUOTA_RESET;

Review comment:
   Similarly to the option for `ClearQuotaHandler`:
   
   by using a bit more generic option name `--count-quota` (and description), 
this could be moved into `SetSpaceQuotaOptions` (and unified with volume's 
`--bucket-quota` option).





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] linyiqun edited a comment on pull request #1338: HDDS-4023. Delete closed container after all blocks have been deleted.

2020-09-29 Thread GitBox


linyiqun edited a comment on pull request #1338:
URL: https://github.com/apache/hadoop-ozone/pull/1338#issuecomment-700598266


   > @linyiqun I do agree that I think this could be handled more cleanly and 
efficiently in the container report handler. However it's probably not much of 
an overhead for replication manager. I am happy for us to commit the change as 
it is, and we can see how it performs in practice. Worst case we have to 
refactor the change out of RM into the report handler. What do you think?
   
   +1 for this, @sodonnel .
   
   @ChenSammi , can you add a TODO comment like below while committing this PR? 
That will be helpful for us to revisit this in the future.
   // TODO: container report handling the empty containers.
   if (isContainerEmpty(container, replicas)) {
     deleteContainerReplicas(container, replicas);
   }
   
   +1 from me.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] linyiqun commented on pull request #1338: HDDS-4023. Delete closed container after all blocks have been deleted.

2020-09-29 Thread GitBox


linyiqun commented on pull request #1338:
URL: https://github.com/apache/hadoop-ozone/pull/1338#issuecomment-700598266


   > @linyiqun I do agree that I think this could be handled more cleanly and 
efficiently in the container report handler. However it's probably not much of 
an overhead for replication manager. I am happy for us to commit the change as 
it is, and we can see how it performs in practice. Worst case we have to 
refactor the change out of RM into the report handler. What do you think?
   
   +1 for this, @sodonnel .
   
   @ChenSammi , can you add a TODO comment while committing like below? That 
will be helpful for us to revisit this in the future.
   // TODO: container report handling the empty containers.
   if (isContainerEmpty(container, replicas)) {
     deleteContainerReplicas(container, replicas);
   }
   
   +1 from me.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-4289) Throw exception from hadoop2 filesystem jar in HA environment

2020-09-29 Thread Marton Elek (Jira)
Marton Elek created HDDS-4289:
-

 Summary: Throw exception from hadoop2 filesystem jar in HA 
environment
 Key: HDDS-4289
 URL: https://issues.apache.org/jira/browse/HDDS-4289
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: OM HA
Reporter: Marton Elek


Thanks to Tamas Pleszkan for reporting this problem.

ozone-filesystem-hadoop2 doesn't support OM-HA (today), as the 
Hadoop3OmTransport it uses relies on FailoverProxyProvider, which is not 
available in hadoop2.

Long-term we need a custom failover mechanism, but this jira suggests 
improving the error handling: `Hadoop27OmTransportFactory` should throw an 
exception if HA is used.

Used command:

{code}
spark-submit --master yarn --deploy-mode client --executor-memory 1g --conf 
"spark.yarn.access.hadoopFileSystems=o3fs://bucket.hdfs.ozone1/" --jars 
"/opt/cloudera/parcels/CDH-7.1.3-1.cdh7.1.3.p0.4992530/jars/hadoop-ozone-filesystem-hadoop2-0.5.0.7.1.3.0-100.jar"
 SparkWordCount.py o3fs://bucket.hdfs.ozone1/words 2
{code}

Current exception:

{code}
Caused by: 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ozone.om.exceptions.OMNotLeaderException):
 OM:om2 is not the leader. Suggested leader is OM:om1.
{code}

Expected exception: an UnsupportedOperationException with a meaningful hint to 
use the hadoop3 filesystem jar.
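
A hedged sketch of the suggested fail-fast behaviour (the HA-detection check is 
an assumption; this issue only confirms the late OMNotLeaderException failure): 
the hadoop2 transport factory would reject HA configurations up front with a 
pointer to the hadoop3 jar.

{code}
class Hadoop27TransportSketch {

  Object createTransport(boolean omHaConfigured) {
    // omHaConfigured would be derived from the configuration, e.g. when an
    // OM service id is defined (assumption).
    if (omHaConfigured) {
      throw new UnsupportedOperationException(
          "OM HA is not supported by ozone-filesystem-hadoop2; "
              + "please use the hadoop3 filesystem jar");
    }
    return null;  // placeholder for the hadoop2-compatible transport
  }
}
{code}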



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] sodonnel commented on pull request #1338: HDDS-4023. Delete closed container after all blocks have been deleted.

2020-09-29 Thread GitBox


sodonnel commented on pull request #1338:
URL: https://github.com/apache/hadoop-ozone/pull/1338#issuecomment-700582338


   > There is the following logic in ReplicationManager, which will handle the 
replicas reported while the container state is DELETING.
   
   Sorry I missed that. You are correct. I am +1 on this change as it is now, 
so feel free to commit it.
   
   @linyiqun I do agree that I think this could be handled more cleanly and 
efficiently in the container report handler. However it's probably not much of 
an overhead for replication manager. I am happy for us to commit the change as 
it is, and we can see how it performs in practice. Worst case we have to 
refactor the change out of RM into the report handler. What do you think?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] lokeshj1703 commented on pull request #1415: HDDS-4232. Use single thread for KeyDeletingService.

2020-09-29 Thread GitBox


lokeshj1703 commented on pull request #1415:
URL: https://github.com/apache/hadoop-ozone/pull/1415#issuecomment-700573211


   /ready



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4231) Background Service blocks on task results

2020-09-29 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4231:
---
Labels:   (was: pull-request-available)

> Background Service blocks on task results
> -
>
> Key: HDDS-4231
> URL: https://issues.apache.org/jira/browse/HDDS-4231
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
> Fix For: 1.1.0
>
>
> Background service currently waits on the results of the tasks. The idea is 
> to track the time it took for the task to execute and log if a task takes more 
> than the configured timeout.
> This does not require waiting on the task results and can be achieved by just 
> comparing the execution time of a task with the timeout value.
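> 
> A hedged sketch of this non-blocking approach (names assumed, not the actual 
> BackgroundService code): measure the elapsed time inside the task itself 
> instead of blocking on Future.get(timeout).
> {code}
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.TimeUnit;
> 
> class TaskTimingSketch {
>   private static final long TIMEOUT_MS = 300_000;  // assumed config value
>   private final ExecutorService exec = Executors.newFixedThreadPool(2);
> 
>   void submit(Runnable task) {
>     exec.execute(() -> {
>       long start = System.nanoTime();
>       task.run();  // no Future.get(): the submitter never blocks
>       long elapsed = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
>       if (elapsed > TIMEOUT_MS) {
>         System.err.println("Background task ran " + elapsed
>             + " ms, over the " + TIMEOUT_MS + " ms timeout");
>       }
>     });
>   }
> }
> {code}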



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4231) Background Service blocks on task results

2020-09-29 Thread Attila Doroszlai (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Doroszlai updated HDDS-4231:
---
Fix Version/s: 1.1.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Background Service blocks on task results
> -
>
> Key: HDDS-4231
> URL: https://issues.apache.org/jira/browse/HDDS-4231
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Lokesh Jain
>Assignee: Lokesh Jain
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.1.0
>
>
> Background service currently waits on the results of the tasks. The idea is 
> to track the time it took for the task to execute and log if a task takes more 
> than the configured timeout.
> This does not require waiting on the task results and can be achieved by just 
> comparing the execution time of a task with the timeout value.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] linyiqun commented on a change in pull request #1338: HDDS-4023. Delete closed container after all blocks have been deleted.

2020-09-29 Thread GitBox


linyiqun commented on a change in pull request #1338:
URL: https://github.com/apache/hadoop-ozone/pull/1338#discussion_r496544696



##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/container/ReplicationManager.java
##
@@ -320,6 +331,12 @@ private void processContainer(ContainerID id) {
* exact number of replicas in the same state.
*/
   if (isContainerHealthy(container, replicas)) {
+/*
+ *  If container is empty, schedule task to delete the container.
+ */
+if (isContainerEmpty(container, replicas)) {
+  deleteContainerReplicas(container, replicas);
+}

Review comment:
   @ChenSammi , is there any specific reason that we let ReplicationManager 
help clean empty containers?  After this, ReplicationManager will additionally 
do a container-empty check for all healthy containers. Not sure if this is an 
efficient place to put the logic.
   >I wonder if it would be simpler to remove empty containers as part of 
Container Report processing? In 
AbstractContainerReportHandler#updateContainerState, we could check the size 
and number of keys of the reported containers in the CLOSED branch of the 
switch statement, and then take action to delete an empty container there? I 
have a feeling it might be simpler, but I am not sure. The disadvantage of 
doing it in the Container Report Processing, is that we are dealing with only a 
single replica at that stage. However if the container is CLOSED in SCM, and a 
report says it is empty then we should be good to simply remove the container 
from SCM and issue the delete container command when processing the container 
report.
   
   Actually I prefer this way as @sodonnel mentioned.
   >but I am not sure. The disadvantage of doing it in the Container Report 
Processing, is that we are dealing with only a single replica at that stage
   
   
   We could also get all replica info and check state in 
ContainerReportHandler, then send delete container command
   
   I'm okay with the current way, but just sharing my thoughts on this.
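   
   For reference, a hedged sketch of the empty-container check being discussed 
   (names and types simplified, not the exact ReplicationManager code): a 
   CLOSED container is a deletion candidate only when SCM's record and every 
   reported replica agree that it holds no keys.
   
   ```
   import java.util.Set;
   
   class EmptyContainerCheckSketch {
     static boolean isContainerEmpty(long scmKeyCount,
         Set<Long> replicaKeyCounts) {
       return scmKeyCount == 0
           && replicaKeyCounts.stream().allMatch(count -> count == 0);
     }
   }
   ```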





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai commented on pull request #1414: HDDS-4231. Background Service blocks on task results.

2020-09-29 Thread GitBox


adoroszlai commented on pull request #1414:
URL: https://github.com/apache/hadoop-ozone/pull/1414#issuecomment-700548576


   Thanks @lokeshj1703 for the contribution and @amaliujia and @nandakumar131 
for the review.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] adoroszlai merged pull request #1414: HDDS-4231. Background Service blocks on task results.

2020-09-29 Thread GitBox


adoroszlai merged pull request #1414:
URL: https://github.com/apache/hadoop-ozone/pull/1414


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] GlenGeng commented on a change in pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-29 Thread GitBox


GlenGeng commented on a change in pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#discussion_r496533516



##
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CreatePipelineCommand.java
##
@@ -39,6 +42,7 @@
   private final ReplicationFactor factor;
   private final ReplicationType type;
   private final List nodelist;
+  private final List priorityList;

Review comment:
   I think we should move `RatisPipelineProvider.getPriorityList()`, 
`HIGH_PRIORITY`, and `LOW_PRIORITY` here, and replace the priorityList Ctor 
param with suggestedLeader.
   We can minimize the existence of `priorityList`; besides, the priority 
calculation logic in `RatisPipelineProvider` is a little bit weird.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] GlenGeng commented on a change in pull request #1371: HDDS-2922. Balance ratis leader distribution in datanodes

2020-09-29 Thread GitBox


GlenGeng commented on a change in pull request #1371:
URL: https://github.com/apache/hadoop-ozone/pull/1371#discussion_r496496531



##
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CreatePipelineCommand.java
##
@@ -48,16 +52,37 @@ public CreatePipelineCommand(final PipelineID pipelineID,
 this.factor = factor;
 this.type = type;
 this.nodelist = datanodeList;
+if (datanodeList.size() ==
+XceiverServerRatis.DEFAULT_PRIORITY_LIST.size()) {

Review comment:
   just forward to the Ctor with the `priorityList`:
   
   ```
   this(pipelineID, factor, type, datanodeList,
       new ArrayList<>(Collections.nCopies(datanodeList.size(), 0)));
   ```

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
##
@@ -98,8 +115,28 @@ private boolean exceedPipelineNumberLimit(ReplicationFactor 
factor) {
 return false;
   }
 
+  @VisibleForTesting
+  public LeaderChoosePolicy getLeaderChoosePolicy() {
+return leaderChoosePolicy;
+  }
+  private List getPriorityList(
+  List dns, DatanodeDetails suggestedLeader) {
+List priorityList = new ArrayList<>();
+
+for (DatanodeDetails dn : dns) {
+  if (dn.getUuid().equals(suggestedLeader.getUuid())) {

Review comment:
   why not use `DatanodeDetails.equals()`?
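   
   A sketch of this suggestion (types simplified): rely on the element's own 
   equals() instead of comparing UUIDs by hand; for DatanodeDetails, equals() 
   is expected to compare the UUID anyway.
   
   ```
   import java.util.ArrayList;
   import java.util.List;
   
   class PrioritySketch {
     static final int HIGH_PRIORITY = 1;  // assumed values
     static final int LOW_PRIORITY = 0;
   
     static List<Integer> getPriorityList(List<String> dns,
         String suggestedLeader) {
       List<Integer> priorityList = new ArrayList<>();
       for (String dn : dns) {
         priorityList.add(
             dn.equals(suggestedLeader) ? HIGH_PRIORITY : LOW_PRIORITY);
       }
       return priorityList;
     }
   }
   ```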

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/PipelineStateManager.java
##
@@ -59,7 +59,8 @@ void addContainerToPipeline(PipelineID pipelineId, 
ContainerID containerID)
 pipelineStateMap.addContainerToPipeline(pipelineId, containerID);
   }
 
-  Pipeline getPipeline(PipelineID pipelineID) throws PipelineNotFoundException 
{
+  public Pipeline getPipeline(PipelineID pipelineID)

Review comment:
   revert unnecessary change.

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/SCMPipelineManager.java
##
@@ -619,9 +619,9 @@ private void finalizePipeline(PipelineID pipelineId) throws 
IOException {
* @throws IOException
*/
   protected void destroyPipeline(Pipeline pipeline) throws IOException {
-pipelineFactory.close(pipeline.getType(), pipeline);
 // remove the pipeline from the pipeline manager
 removePipeline(pipeline.getId());
+pipelineFactory.close(pipeline.getType(), pipeline);

Review comment:
   Why is this change needed?  The sequence of sending the SCMCommand and 
removing state may affect SCM HA.

##
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/protocol/commands/CreatePipelineCommand.java
##
@@ -39,6 +42,7 @@
   private final ReplicationFactor factor;
   private final ReplicationType type;
   private final List nodelist;
+  private final List priorityList;

Review comment:
   I think we should move `RatisPipelineProvider.getPriorityList()`, 
`HIGH_PRIORITY`, and `LOW_PRIORITY` here, and replace the priorityList Ctor 
param with suggestedLeader.
   We can minimize the existence of `priorityList`.

##
File path: 
hadoop-hdds/server-scm/src/main/java/org/apache/hadoop/hdds/scm/pipeline/RatisPipelineProvider.java
##
@@ -121,18 +158,24 @@ public Pipeline create(ReplicationFactor factor) throws 
IOException {
   throw new IllegalStateException("Unknown factor: " + factor.name());
 }
 
+DatanodeDetails suggestedLeader = leaderChoosePolicy.chooseLeader(

Review comment:
   Make the node manager and pipeline manager members of leaderChoosePolicy, 
so that leaderChoosePolicy can have its own state, which will make future 
extension easier.

##
File path: 
hadoop-hdds/common/src/main/java/org/apache/hadoop/hdds/scm/pipeline/Pipeline.java
##
@@ -123,6 +126,15 @@ public Instant getCreationTimestamp() {
 return creationTimestamp;
   }
 
+  /**
+   * Return the suggested leaderId with high priority of pipeline.

Review comment:
   Return the suggested leaderId which has a high priority among DNs of the 
pipeline.

##
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/transport/server/ratis/XceiverServerRatis.java
##
@@ -115,6 +116,9 @@
   private static final Logger LOG = LoggerFactory
   .getLogger(XceiverServerRatis.class);
   private static final AtomicLong CALL_ID_COUNTER = new AtomicLong();
+  public static final List DEFAULT_PRIORITY_LIST =

Review comment:
   Why do we need this `DEFAULT_PRIORITY_LIST`? 
   At all the locations where it appears, the datanode number does not always 
equal its size, and you create a new list anyway.
   I suggest removing this static var.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Created] (HDDS-4288) the icon of hadoop-ozone is begger than ever

2020-09-29 Thread Shiyou xin (Jira)
Shiyou xin created HDDS-4288:


 Summary: the icon of hadoop-ozone is begger than ever
 Key: HDDS-4288
 URL: https://issues.apache.org/jira/browse/HDDS-4288
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: documentation
Affects Versions: 1.0.0
 Environment: web : chrome /firefox /safari
Reporter: Shiyou xin
 Attachments: 1751601366944_.pic.jpg

It could be a by-product of the changes introduced by: 
https://issues.apache.org/jira/browse/HDDS-4166



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-4288) the icon of hadoop-ozone is bigger than ever

2020-09-29 Thread Shiyou xin (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shiyou xin updated HDDS-4288:
-
Summary: the icon of hadoop-ozone is bigger than ever  (was: the icon of 
hadoop-ozone is begger than ever)

> the icon of hadoop-ozone is bigger than ever
> 
>
> Key: HDDS-4288
> URL: https://issues.apache.org/jira/browse/HDDS-4288
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.0.0
> Environment: web : chrome /firefox /safari
>Reporter: Shiyou xin
>Priority: Trivial
> Attachments: 1751601366944_.pic.jpg
>
>
> It could be a by-product of the changes introduced by: 
> https://issues.apache.org/jira/browse/HDDS-4166



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org


