[jira] [Created] (HDDS-3616) TestReadRetries fails intermittently

2020-05-18 Thread Mukul Kumar Singh (Jira)
Mukul Kumar Singh created HDDS-3616:
---

 Summary: TestReadRetries fails intermittently
 Key: HDDS-3616
 URL: https://issues.apache.org/jira/browse/HDDS-3616
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: test
Reporter: Mukul Kumar Singh


https://pipelines.actions.githubusercontent.com/O2JnqO9VD7g40zn5cnS8ZQ3t6IkBNUXf9dtdL8xzpLtWhEsOm2/_apis/pipelines/1/runs/5432/signedlogcontent/60?urlExpires=2020-05-19T05%3A32%3A06.8423163Z=HMACV1=hNF%2FReA790wqujgUcx%2BNJnK3SfMS912rrbszJHQHE%2BQ%3D



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] umamaheswararao commented on pull request #927: HDDS-3597. using protobuf maven plugin instead of the legacy protoc executable file

2020-05-18 Thread GitBox


umamaheswararao commented on pull request #927:
URL: https://github.com/apache/hadoop-ozone/pull/927#issuecomment-630587680


   Thanks @maobaolong and @vinayakumarb for doing this. Big +1 
   :-)



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Resolved] (HDDS-3603) Generate 2.5.0 protobuf classes using protobuf-maven-plugin

2020-05-18 Thread Vinayakumar B (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B resolved HDDS-3603.
-
Resolution: Duplicate

> Generate 2.5.0 protobuf classes using protobuf-maven-plugin
> ---
>
> Key: HDDS-3603
> URL: https://issues.apache.org/jira/browse/HDDS-3603
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: pull-request-available
>
> Generate the protobuf 2.5.0 classes using the protobuf-maven-plugin, which 
> dynamically downloads the required protoc executable, rather than depending on 
> a locally installed protoc.
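The plugin-based generation described above can be sketched as a Maven configuration. The plugin coordinates, version, and the `os.detected.classifier` property (which requires the `os-maven-plugin` build extension) are illustrative assumptions, not the exact change in the linked pull request:

```xml
<!-- Sketch only: coordinates/version are assumptions, not the actual Ozone pom -->
<plugin>
  <groupId>org.xolstice.maven.plugins</groupId>
  <artifactId>protobuf-maven-plugin</artifactId>
  <version>0.5.1</version>
  <configuration>
    <!-- Downloads a matching protoc binary from Maven Central for the build
         platform, instead of requiring a locally installed protoc 2.5.0 -->
    <protocArtifact>com.google.protobuf:protoc:2.5.0:exe:${os.detected.classifier}</protocArtifact>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With a setup along these lines, contributors no longer need protoc 2.5.0 on their PATH; the build fetches it per platform.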






[jira] [Comment Edited] (HDDS-3604) Support for Hadoop-3.3

2020-05-18 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110844#comment-17110844
 ] 

Vinayakumar B edited comment on HDDS-3604 at 5/19/20, 4:39 AM:
---

{quote}Can you please explain this? Based on my understanding Hadoop started to 
shade protobuf AND bumped the protobuf version to a 3.x. Why do we need to use 
2.5 after an upgrade?
{quote}
This is to avoid breaking downstream classpaths/compilation due to the Protobuf 
3.x jar's incompatibility with Java files generated by earlier versions. So 
Hadoop's internal usage of protobuf was upgraded and shaded, and the old 
version was brought back purely as a dependency.

But now, to support Hadoop RPC with an earlier version of Protobuf, this Jira 
was required.

 
{quote}Is it possible to use Hadoop RPC with any 3.x version of protobuf? Or 
only with the shaded one?
{quote}
Unfortunately, Hadoop RPC with shaded protobuf 3.x supports only the shaded 
protobuf, unless a solution becomes available for that as well.

HADOOP-17046 is proposed to bring back Protobuf 2.5.0 support for Hadoop RPC 
for downstream projects, until they fully upgrade to the shaded protobuf.


was (Author: vinayrpet):
{quote}Can you please explain this? Based on my understanding Hadoop started to 
shade protobuf AND bumped the protobuf version to a 3.x. Why do we need to use 
2.5 after an upgrade?
{quote}
This is to avoid downstream's classpath/compile breakage due to Protobuf 3.x 
jar's incompatibility with earlier version generated Java files. So hadoop's 
internal usage of protobuf was upgraded, shaded and brought back the old 
version as just for dependency.

But now,to support Hadoop rpc with earlier version of Protobuf, this Jira was 
required.

 
{quote}Is it possible to use Hadoop RPC with any 3.x version of protobuf? Or 
only with the shaded one?
{quote}
Unfortunately, Hadoop RPC with shaded protobuf 3.x supports only shaded 
protobuf, unless some solution is available for that as well.

> Support for Hadoop-3.3
> --
>
> Key: HDDS-3604
> URL: https://issues.apache.org/jira/browse/HDDS-3604
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: pull-request-available
>
> Hadoop-3.3 will be released soon, bringing the long-awaited Protobuf upgrade 
> to 3.7 by shading the internal protobuf classes in the Hadoop-thirdparty 
> library, while still keeping protobuf-2.5.0 as a transitive dependency.
> Unfortunately, there are direct usages of Hadoop's internal protobuf classes. 
> Because of this, Ozone breaks after upgrading the Hadoop dependency to 3.3.0.
> This Jira intends to avoid such direct usages of Hadoop's protobuf classes.






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #930: HDDS-3601. Refactor TestOzoneManagerHA.java into multiple tests to avoid frequent timeout issues.

2020-05-18 Thread GitBox


bharatviswa504 commented on pull request #930:
URL: https://github.com/apache/hadoop-ozone/pull/930#issuecomment-630574490


   LGTM.
   
   One question I have: we have a timeout rule of 5 minutes, which applies to 
each test, right? How will splitting it into multiple tests solve the issue?






[jira] [Commented] (HDDS-3604) Support for Hadoop-3.3

2020-05-18 Thread Vinayakumar B (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110844#comment-17110844
 ] 

Vinayakumar B commented on HDDS-3604:
-

{quote}Can you please explain this? Based on my understanding Hadoop started to 
shade protobuf AND bumped the protobuf version to a 3.x. Why do we need to use 
2.5 after an upgrade?
{quote}
This is to avoid breaking downstream classpaths/compilation due to the Protobuf 
3.x jar's incompatibility with Java files generated by earlier versions. So 
Hadoop's internal usage of protobuf was upgraded and shaded, and the old 
version was brought back purely as a dependency.

But now, to support Hadoop RPC with an earlier version of Protobuf, this Jira 
was required.

 
{quote}Is it possible to use Hadoop RPC with any 3.x version of protobuf? Or 
only with the shaded one?
{quote}
Unfortunately, Hadoop RPC with shaded protobuf 3.x supports only the shaded 
protobuf, unless a solution becomes available for that as well.

> Support for Hadoop-3.3
> --
>
> Key: HDDS-3604
> URL: https://issues.apache.org/jira/browse/HDDS-3604
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>  Labels: pull-request-available
>
> Hadoop-3.3 will be released soon, bringing the long-awaited Protobuf upgrade 
> to 3.7 by shading the internal protobuf classes in the Hadoop-thirdparty 
> library, while still keeping protobuf-2.5.0 as a transitive dependency.
> Unfortunately, there are direct usages of Hadoop's internal protobuf classes. 
> Because of this, Ozone breaks after upgrading the Hadoop dependency to 3.3.0.
> This Jira intends to avoid such direct usages of Hadoop's protobuf classes.






[jira] [Resolved] (HDDS-3493) Refactor Failures in MiniOzoneChaosCluster into pluggable model.

2020-05-18 Thread Mukul Kumar Singh (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh resolved HDDS-3493.
-
Fix Version/s: 0.6.0
   Resolution: Fixed

Thanks for the reviews [~adoroszlai]. I have merged this to master.

> Refactor Failures in MiniOzoneChaosCluster into pluggable model.
> 
>
> Key: HDDS-3493
> URL: https://issues.apache.org/jira/browse/HDDS-3493
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: test
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>
> Refactor Failures in MiniOzoneChaosCluster into pluggable model, so that more 
> failures can be added later.






[GitHub] [hadoop-ozone] mukul1987 merged pull request #874: HDDS-3493. Refactor Failures in MiniOzoneChaosCluster into pluggable model.

2020-05-18 Thread GitBox


mukul1987 merged pull request #874:
URL: https://github.com/apache/hadoop-ozone/pull/874


   






[GitHub] [hadoop-ozone] vinayakumarb commented on pull request #932: HDDS-3603. Generate 2.5.0 protobuf classes using protobuf-maven-plugin

2020-05-18 Thread GitBox


vinayakumarb commented on pull request #932:
URL: https://github.com/apache/hadoop-ozone/pull/932#issuecomment-630572025


   Thanks @elek for the reviews anyway.
   I will close this as a duplicate.






[GitHub] [hadoop-ozone] vinayakumarb closed pull request #932: HDDS-3603. Generate 2.5.0 protobuf classes using protobuf-maven-plugin

2020-05-18 Thread GitBox


vinayakumarb closed pull request #932:
URL: https://github.com/apache/hadoop-ozone/pull/932


   






[GitHub] [hadoop-ozone] ChenSammi merged pull request #922: HDDS-3588. Fix NPE while getPipelines if absent in query2OpenPipelines

2020-05-18 Thread GitBox


ChenSammi merged pull request #922:
URL: https://github.com/apache/hadoop-ozone/pull/922


   






[GitHub] [hadoop-ozone] ChenSammi commented on pull request #922: HDDS-3588. Fix NPE while getPipelines if absent in query2OpenPipelines

2020-05-18 Thread GitBox


ChenSammi commented on pull request #922:
URL: https://github.com/apache/hadoop-ozone/pull/922#issuecomment-630565893


   +1.  Thanks @avijayanhwx for the review and @maobaolong for fixing the issue.






[jira] [Commented] (HDDS-2939) Ozone FS namespace

2020-05-18 Thread Sammi Chen (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110822#comment-17110822
 ] 

Sammi Chen commented on HDDS-2939:
--

Hi [~msingh], thanks for the explanation.  So will the File table and Object 
table coexist after this namespace feature is completed?  Or do we plan to 
fundamentally switch the current internal flat object-store layout to a 
hierarchical dir/file layout?

> Ozone FS namespace
> --
>
> Key: HDDS-2939
> URL: https://issues.apache.org/jira/browse/HDDS-2939
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Reporter: Supratim Deka
>Assignee: Rakesh Radhakrishnan
>Priority: Major
> Attachments: Ozone FS Namespace Proposal v1.0.docx
>
>
> Create the structures and metadata layout required to support efficient FS 
> namespace operations in Ozone - operations involving folders/directories 
> required to support the Hadoop compatible Filesystem interface.
> The details are described in the attached document. The work is divided up 
> into sub-tasks as per the task list in the document.






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #940: HDDS-3614. Remove S3Table from OmMetadataManager.

2020-05-18 Thread GitBox


bharatviswa504 commented on pull request #940:
URL: https://github.com/apache/hadoop-ozone/pull/940#issuecomment-630555974


   Thank You @ChenSammi for the review.
   Addressed review comments.






[jira] [Updated] (HDDS-3615) Call cleanup on tables only when double buffer has transactions related to tables.

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3615:
-
Labels: pull-request-available  (was: )

> Call cleanup on tables only when double buffer has transactions related to 
> tables.
> --
>
> Key: HDDS-3615
> URL: https://issues.apache.org/jira/browse/HDDS-3615
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> The volume/bucket table is currently a full cache, and we need to clean up 
> entries only when they are marked for deletion.
> So it is unnecessary to call cleanup otherwise, wasting CPU resources on the OM.
> Similarly for other tables: call cleanup only when the double buffer has 
> transaction entries that touch those tables.
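The cleanup condition described above can be sketched roughly as follows. The table names, transaction shape, and cache representation are illustrative assumptions, not Ozone's actual OM double-buffer API:

```python
# Sketch: after a double-buffer flush, clean only the table caches that the
# flushed batch of transactions actually touched, instead of every table.

def tables_touched(flushed_transactions):
    """Collect the set of table names touched by the flushed batch."""
    touched = set()
    for txn in flushed_transactions:
        touched.update(txn["tables"])
    return touched

def cleanup_caches(table_caches, flushed_transactions):
    """Invoke cleanup only for tables the batch touched; return their names."""
    cleaned = []
    for name in tables_touched(flushed_transactions):
        cache = table_caches.get(name)
        if cache is not None:
            cache.clear()  # stand-in for the real cache.cleanup(epoch) call
            cleaned.append(name)
    return sorted(cleaned)
```

Untouched tables (e.g. a fully cached volume/bucket table with no delete in the batch) skip cleanup entirely, which is the CPU saving the issue describes.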






[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #943: HDDS-3615. Call cleanup on tables only when double buffer has transactions related to tables.

2020-05-18 Thread GitBox


bharatviswa504 opened a new pull request #943:
URL: https://github.com/apache/hadoop-ozone/pull/943


   Call cleanup on tables only when the double buffer has transactions related 
to tables.
   
   ## What changes were proposed in this pull request?
   
   (Please fill in changes proposed in this fix)
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3615
   
   ## How was this patch tested?
   
   Existing tests. Will see if I can add any tests.
   






[GitHub] [hadoop-ozone] ChenSammi commented on pull request #940: HDDS-3614. Remove S3Table from OmMetadataManager.

2020-05-18 Thread GitBox


ChenSammi commented on pull request #940:
URL: https://github.com/apache/hadoop-ozone/pull/940#issuecomment-630552520


   @bharatviswa504, the patch overall looks good. 
   The static final string "S3_TABLE" and its usage in addOMTablesAndCodecs 
were missed. 






[jira] [Updated] (HDDS-2556) Handle InterruptedException in BlockOutputStream

2020-05-18 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2556:

Status: Patch Available  (was: Open)

> Handle InterruptedException in BlockOutputStream
> 
>
> Key: HDDS-2556
> URL: https://issues.apache.org/jira/browse/HDDS-2556
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
>
> Fix these 5 instances
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVe=AW5md-_2KcVY8lQ4ZsVe]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVf=AW5md-_2KcVY8lQ4ZsVf]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVh=AW5md-_2KcVY8lQ4ZsVh|https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVi=AW5md-_2KcVY8lQ4ZsVi]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVl=AW5md-_2KcVY8lQ4ZsVl]
>  






[GitHub] [hadoop-ozone] ChenSammi edited a comment on pull request #934: HDDS-3605. Support close all pipelines.

2020-05-18 Thread GitBox


ChenSammi edited a comment on pull request #934:
URL: https://github.com/apache/hadoop-ozone/pull/934#issuecomment-630550300


   Hi @maobaolong, can you explain in what situation this command is useful? 
This command is very powerful; how can we prevent users from misusing it? 






[GitHub] [hadoop-ozone] dineshchitlangia opened a new pull request #942: HDDS-2556. Handle InterruptedException in BlockOutputStream

2020-05-18 Thread GitBox


dineshchitlangia opened a new pull request #942:
URL: https://github.com/apache/hadoop-ozone/pull/942


   ## What changes were proposed in this pull request?
   Address sonar violations to handle InterruptedException appropriately.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-2556
   
   ## How was this patch tested?
   Clean build, unit tests






[jira] [Updated] (HDDS-2556) Handle InterruptedException in BlockOutputStream

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2556:
-
Labels: newbie pull-request-available sonar  (was: newbie sonar)

> Handle InterruptedException in BlockOutputStream
> 
>
> Key: HDDS-2556
> URL: https://issues.apache.org/jira/browse/HDDS-2556
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, pull-request-available, sonar
>
> Fix these 5 instances
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVe=AW5md-_2KcVY8lQ4ZsVe]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVf=AW5md-_2KcVY8lQ4ZsVf]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVh=AW5md-_2KcVY8lQ4ZsVh|https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVi=AW5md-_2KcVY8lQ4ZsVi]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVl=AW5md-_2KcVY8lQ4ZsVl]
>  






[GitHub] [hadoop-ozone] ChenSammi commented on pull request #934: HDDS-3605. Support close all pipelines.

2020-05-18 Thread GitBox


ChenSammi commented on pull request #934:
URL: https://github.com/apache/hadoop-ozone/pull/934#issuecomment-630550300


   Hi, @maobaolong, can you explain in what situation this command is useful? 






[GitHub] [hadoop-ozone] prashantpogde commented on pull request #917: Hdds 3001

2020-05-18 Thread GitBox


prashantpogde commented on pull request #917:
URL: https://github.com/apache/hadoop-ozone/pull/917#issuecomment-630547614


   thanks, I will create smaller patches and upload them.






[GitHub] [hadoop-ozone] maobaolong commented on pull request #935: HDDS-3606. Add datanode port into the printTopology command output

2020-05-18 Thread GitBox


maobaolong commented on pull request #935:
URL: https://github.com/apache/hadoop-ozone/pull/935#issuecomment-630537961


   @ChenSammi Thanks for correcting me, sorry for my carelessness; I've 
addressed your comments, PTAL.






[GitHub] [hadoop-ozone] maobaolong commented on pull request #927: HDDS-3597. using protobuf maven plugin instead of the legacy protoc executable file

2020-05-18 Thread GitBox


maobaolong commented on pull request #927:
URL: https://github.com/apache/hadoop-ozone/pull/927#issuecomment-630529069


   @elek Updated the `CONTRIBUTING.md` now. Bye bye `protoc 2.5`; thanks for 
your review.






[jira] [Updated] (HDDS-3615) Call cleanup on tables only when double buffer has transactions related to tables.

2020-05-18 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-3615:
-
Summary: Call cleanup on tables only when double buffer has transactions 
related to tables.  (was: Call cleanup for volume/bucket only when delete 
operations)

> Call cleanup on tables only when double buffer has transactions related to 
> tables.
> --
>
> Key: HDDS-3615
> URL: https://issues.apache.org/jira/browse/HDDS-3615
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> The volume/bucket table is currently a full cache, and we need to clean up 
> entries only when they are marked for deletion.
> So it is unnecessary to call cleanup otherwise, wasting CPU resources on the OM.






[jira] [Updated] (HDDS-3615) Call cleanup on tables only when double buffer has transactions related to tables.

2020-05-18 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-3615:
-
Description: 
For volume/bucket table currently it is full cache, and we need to cleanup 
entries only when they are marked for delete.

So, it is unnecessary to call cleanup and waste the CPU resources on OM.

Similarly for other tables, when the double buffer has transaction entries that 
touch those tables, then only call cleanup.

  was:
For volume/bucket table currently it is full cache, and we need to cleanup 
entries only when they are marked for delete.

So, it is unnecessary to call cleanup and waste the CPU resources on OM.


> Call cleanup on tables only when double buffer has transactions related to 
> tables.
> --
>
> Key: HDDS-3615
> URL: https://issues.apache.org/jira/browse/HDDS-3615
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>
> The volume/bucket table is currently a full cache, and we need to clean up 
> entries only when they are marked for deletion.
> So it is unnecessary to call cleanup otherwise, wasting CPU resources on the OM.
> Similarly for other tables: call cleanup only when the double buffer has 
> transaction entries that touch those tables.






[jira] [Updated] (HDDS-3574) Implement ofs://: Override getTrashRoot

2020-05-18 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-3574:
-
Status: Patch Available  (was: In Progress)

> Implement ofs://: Override getTrashRoot
> ---
>
> Key: HDDS-3574
> URL: https://issues.apache.org/jira/browse/HDDS-3574
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> [~pifta] found that if we delete a file with the Hadoop shell, namely 
> {{hadoop fs -rm}} without the {{-skipTrash}} option, the operation would fail 
> in OFS because the client renames the file to {{/user//.Trash/}}, and renaming 
> across different buckets is not allowed in Ozone. (Unless the file happens to 
> be under {{/user//}}, apparently.)
> We could override {{getTrashRoot()}} in {{BasicOzoneFileSystem}} to a dir 
> under the same bucket to mitigate the problem. Thanks [~umamaheswararao] for 
> the suggestion.
> This raises one more problem though: We need to implement trash clean up on 
> OM. Opened HDDS-3575 for this.
> CC [~arp] [~bharat]
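The suggested mitigation — deriving a trash root inside the same bucket as the deleted path, so the trash move never crosses buckets — can be sketched roughly as follows. The `/<volume>/<bucket>/...` path layout and the `.Trash/<user>` directory name are assumptions for illustration, not the actual BasicOzoneFileSystem code:

```python
# Sketch: compute a per-bucket trash root so that moving a deleted file to
# trash is always a same-bucket rename (cross-bucket renames are disallowed).

def trash_root_for(path, username):
    """Return a trash root in the same volume/bucket as the given path."""
    parts = [p for p in path.split("/") if p]
    if len(parts) < 2:
        raise ValueError("expected /<volume>/<bucket>/...: " + path)
    volume, bucket = parts[0], parts[1]
    return "/{}/{}/.Trash/{}".format(volume, bucket, username)
```

For example, a file under `/vol1/bucket1/` would get a trash root under `/vol1/bucket1/.Trash/`, keeping the rename within `bucket1`.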






[jira] [Updated] (HDDS-3574) Implement ofs://: Override getTrashRoot

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3574:
-
Labels: pull-request-available  (was: )

> Implement ofs://: Override getTrashRoot
> ---
>
> Key: HDDS-3574
> URL: https://issues.apache.org/jira/browse/HDDS-3574
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>
> [~pifta] found that if we delete a file with the Hadoop shell, namely 
> {{hadoop fs -rm}} without the {{-skipTrash}} option, the operation would fail 
> in OFS because the client renames the file to {{/user//.Trash/}}, and renaming 
> across different buckets is not allowed in Ozone. (Unless the file happens to 
> be under {{/user//}}, apparently.)
> We could override {{getTrashRoot()}} in {{BasicOzoneFileSystem}} to a dir 
> under the same bucket to mitigate the problem. Thanks [~umamaheswararao] for 
> the suggestion.
> This raises one more problem though: We need to implement trash clean up on 
> OM. Opened HDDS-3575 for this.
> CC [~arp] [~bharat]






[GitHub] [hadoop-ozone] smengcl opened a new pull request #941: HDDS-3574. Implement ofs://: Override getTrashRoot

2020-05-18 Thread GitBox


smengcl opened a new pull request #941:
URL: https://github.com/apache/hadoop-ozone/pull/941


   ## What changes were proposed in this pull request?
   
   Implement `getTrashRoot()` in OFS so deleting to trash will always move 
files/dirs within the same bucket.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3574
   
   ## How was this patch tested?
   
   Added new test.
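For illustration, a minimal sketch of the idea of keeping the trash root inside the same bucket so the move-to-trash rename never crosses buckets. The path layout parsing and helper name here are hypothetical, not the actual OFS code:

```java
public class TrashRootSketch {
  // Given a rooted path like /volume/bucket/dir/key, return a per-user trash
  // root under the same bucket, e.g. /volume/bucket/.Trash/alice.
  static String trashRootFor(String path, String user) {
    String[] parts = path.split("/");
    if (parts.length < 3) {
      throw new IllegalArgumentException("expected /volume/bucket/...: " + path);
    }
    // parts[0] is empty (leading slash), parts[1] is the volume,
    // parts[2] is the bucket.
    return "/" + parts[1] + "/" + parts[2] + "/.Trash/" + user;
  }

  public static void main(String[] args) {
    // prints /vol1/bucket1/.Trash/alice
    System.out.println(trashRootFor("/vol1/bucket1/dir/key", "alice"));
  }
}
```

With a trash root like this, the rename that Hadoop's default trash policy performs on delete targets a path under the same bucket, which Ozone allows.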






[jira] [Created] (HDDS-3615) Call cleanup for volume/bucket only when delete operations

2020-05-18 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-3615:


 Summary: Call cleanup for volume/bucket only when delete operations
 Key: HDDS-3615
 URL: https://issues.apache.org/jira/browse/HDDS-3615
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


The volume/bucket tables currently use a full cache, and we need to clean up 
entries only when they are marked for deletion.

So, calling cleanup on every operation is unnecessary and wastes CPU resources 
on the OM.
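As an illustration of the idea, a minimal stand-in where only delete operations trigger a cleanup pass; the cache, operation types, and cleanup hook here are hypothetical, not OM code:

```java
import java.util.HashMap;
import java.util.Map;

public class CacheCleanupSketch {
  enum Op { CREATE, READ, DELETE }

  private final Map<String, String> fullCache = new HashMap<>();
  private int cleanupCalls = 0;

  void apply(Op op, String key) {
    switch (op) {
      case CREATE:
        fullCache.put(key, "value");
        break;
      case DELETE:
        fullCache.remove(key);
        cleanup(); // only delete operations trigger a cleanup pass
        break;
      default:
        break; // reads never touch cleanup
    }
  }

  private void cleanup() {
    cleanupCalls++; // stand-in for evicting delete-marked cache entries
  }

  int cleanupCalls() {
    return cleanupCalls;
  }

  public static void main(String[] args) {
    CacheCleanupSketch c = new CacheCleanupSketch();
    c.apply(Op.CREATE, "vol1");
    c.apply(Op.READ, "vol1");
    c.apply(Op.DELETE, "vol1");
    System.out.println(c.cleanupCalls()); // 1
  }
}
```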






[jira] [Updated] (HDDS-3614) Remove S3Table from OmMetadataManager

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3614:
-
Labels: pull-request-available  (was: )

> Remove S3Table from OmMetadataManager
> -
>
> Key: HDDS-3614
> URL: https://issues.apache.org/jira/browse/HDDS-3614
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> After HDDS-3385 we no longer have any use case for the S3 table, so we can 
> remove it from OmMetadataManager.






[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #940: HDDS-3614. Remove S3Table from OmMetadataManager.

2020-05-18 Thread GitBox


bharatviswa504 opened a new pull request #940:
URL: https://github.com/apache/hadoop-ozone/pull/940


   ## What changes were proposed in this pull request?
   
   Remove S3Table from OmMetadataManager; it is no longer used after HDDS-3385.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3614
   
   ## How was this patch tested?
   
   Cleanup Jira.
   






[jira] [Created] (HDDS-3614) Remove S3Table from OmMetadataManager

2020-05-18 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-3614:


 Summary: Remove S3Table from OmMetadataManager
 Key: HDDS-3614
 URL: https://issues.apache.org/jira/browse/HDDS-3614
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


After HDDS-3385 we no longer have any use case for the S3 table, so we can 
remove it from OmMetadataManager.






[GitHub] [hadoop-ozone] maobaolong commented on pull request #921: HDDS-3583. Loosen some rule restrictions of checkstyle

2020-05-18 Thread GitBox


maobaolong commented on pull request #921:
URL: https://github.com/apache/hadoop-ozone/pull/921#issuecomment-630488842


   @sodonnel How about the result of the +1's in the community sync call?






[jira] [Updated] (HDDS-3613) Fix JVMPause monitor start in OzoneManager

2020-05-18 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-3613:
-
Fix Version/s: 0.6.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Fix JVMPause monitor start in OzoneManager
> --
>
> Key: HDDS-3613
> URL: https://issues.apache.org/jira/browse/HDDS-3613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.6.0
>
>
> Fix the JVMPauseMonitor logic; right now it is started only on restart.
> It should be started during OM start and stopped during OM stop. In 
> restart() we can start it again.






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #939: HDDS-3613. Fix JVMPause monitor start in OzoneManager.

2020-05-18 Thread GitBox


bharatviswa504 commented on pull request #939:
URL: https://github.com/apache/hadoop-ozone/pull/939#issuecomment-630484031


   Thank You @hanishakoneru for the review.






[GitHub] [hadoop-ozone] bharatviswa504 merged pull request #939: HDDS-3613. Fix JVMPause monitor start in OzoneManager.

2020-05-18 Thread GitBox


bharatviswa504 merged pull request #939:
URL: https://github.com/apache/hadoop-ozone/pull/939


   






[GitHub] [hadoop-ozone] xiaoyuyao merged pull request #923: HDDS-3391. Delegate admin ACL checks to Ozone authorizer plugin.

2020-05-18 Thread GitBox


xiaoyuyao merged pull request #923:
URL: https://github.com/apache/hadoop-ozone/pull/923


   






[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #923: HDDS-3391. Delegate admin ACL checks to Ozone authorizer plugin.

2020-05-18 Thread GitBox


xiaoyuyao commented on pull request #923:
URL: https://github.com/apache/hadoop-ozone/pull/923#issuecomment-630483305


   Thanks all for the reviews. I will merge the PR shortly. 






[jira] [Resolved] (HDDS-1683) Update Ratis to 0.4.0-300d9c5-SNAPSHOT

2020-05-18 Thread Hanisha Koneru (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru resolved HDDS-1683.
--
Resolution: Abandoned

> Update Ratis to 0.4.0-300d9c5-SNAPSHOT
> --
>
> Key: HDDS-1683
> URL: https://issues.apache.org/jira/browse/HDDS-1683
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Major
>
> Update Ratis dependency to latest build - 0.4.0-300d9c5-SNAPSHOT






[jira] [Updated] (HDDS-3613) Fix JVMPause monitor start in OzoneManager

2020-05-18 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-3613:
-
Status: Patch Available  (was: In Progress)

> Fix JVMPause monitor start in OzoneManager
> --
>
> Key: HDDS-3613
> URL: https://issues.apache.org/jira/browse/HDDS-3613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Fix the JVMPauseMonitor logic; right now it is started only on restart.
> It should be started during OM start and stopped during OM stop. In 
> restart() we can start it again.






[jira] [Updated] (HDDS-3613) Fix JVMPause monitor start in OzoneManager

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3613:
-
Labels: pull-request-available  (was: )

> Fix JVMPause monitor start in OzoneManager
> --
>
> Key: HDDS-3613
> URL: https://issues.apache.org/jira/browse/HDDS-3613
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>
> Fix the JVMPauseMonitor logic; right now it is started only on restart.
> It should be started during OM start and stopped during OM stop. In 
> restart() we can start it again.






[GitHub] [hadoop-ozone] bharatviswa504 opened a new pull request #939: HDDS-3613. Fix JVMPause monitor start in OzoneManager.

2020-05-18 Thread GitBox


bharatviswa504 opened a new pull request #939:
URL: https://github.com/apache/hadoop-ozone/pull/939


   ## What changes were proposed in this pull request?
   
   Start JVM pause monitor during OM start up.
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3613
   
   
   ## How was this patch tested?
   
   






[jira] [Created] (HDDS-3613) Fix JVMPause monitor start in OzoneManager

2020-05-18 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-3613:


 Summary: Fix JVMPause monitor start in OzoneManager
 Key: HDDS-3613
 URL: https://issues.apache.org/jira/browse/HDDS-3613
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham
Assignee: Bharat Viswanadham


Fix the JVMPauseMonitor logic; right now it is started only on restart.

It should be started during OM start and stopped during OM stop. In restart() 
we can start it again.






[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #930: HDDS-3601. Refactor TestOzoneManagerHA.java into multiple tests to avoid frequent timeout issues.

2020-05-18 Thread GitBox


xiaoyuyao commented on pull request #930:
URL: https://github.com/apache/hadoop-ozone/pull/930#issuecomment-630453051


   Agreed, this will help with the timeout issue for individual tests. But we 
have a 900s surefire timeout per test module. If the tests are still under the 
same module, we might still hit the surefire timeout, as observed in the latest 
#923 run.
   
   






[GitHub] [hadoop-ozone] xiaoyuyao commented on pull request #930: HDDS-3601. Refactor TestOzoneManagerHA.java into multiple tests to avoid frequent timeout issues.

2020-05-18 Thread GitBox


xiaoyuyao commented on pull request #930:
URL: https://github.com/apache/hadoop-ozone/pull/930#issuecomment-630451909


   +1 from me too. Thanks for fixing this. 






[jira] [Updated] (HDDS-3596) Clean up unused code after HDDS-2940 and HDDS-2942

2020-05-18 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-3596:
-
Description: 
It seems some snippets of code should be removed as HDDS-2940 is committed. 
Update: Pending HDDS-2942 commit before this can be committed.

For example 
[this|https://github.com/apache/hadoop-ozone/blob/ffb340e32460ccaa2eae557f0bb71fb90d7ebc7a/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java#L495-L499]:
{code:java|title=BasicOzoneFileSystem#delete}
if (result) {
  // If this delete operation removes all files/directories from the
  // parent directory, then an empty parent directory must be created.
  createFakeParentDirectory(f);
}
{code}

(Found at https://github.com/apache/hadoop-ozone/pull/906#discussion_r424873030)

  was:
It seems some snippets of code should be removed as HDDS-2940 is committed.

For example 
[this|https://github.com/apache/hadoop-ozone/blob/ffb340e32460ccaa2eae557f0bb71fb90d7ebc7a/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java#L495-L499]:
{code:java|title=BasicOzoneFileSystem#delete}
if (result) {
  // If this delete operation removes all files/directories from the
  // parent directory, then an empty parent directory must be created.
  createFakeParentDirectory(f);
}
{code}

(Found at https://github.com/apache/hadoop-ozone/pull/906#discussion_r424873030)


> Clean up unused code after HDDS-2940 and HDDS-2942
> --
>
> Key: HDDS-3596
> URL: https://issues.apache.org/jira/browse/HDDS-3596
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>
> It seems some snippets of code should be removed as HDDS-2940 is committed. 
> Update: Pending HDDS-2942 commit before this can be committed.
> For example 
> [this|https://github.com/apache/hadoop-ozone/blob/ffb340e32460ccaa2eae557f0bb71fb90d7ebc7a/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java#L495-L499]:
> {code:java|title=BasicOzoneFileSystem#delete}
> if (result) {
>   // If this delete operation removes all files/directories from the
>   // parent directory, then an empty parent directory must be created.
>   createFakeParentDirectory(f);
> }
> {code}
> (Found at 
> https://github.com/apache/hadoop-ozone/pull/906#discussion_r424873030)






[jira] [Updated] (HDDS-3596) Clean up unused code after HDDS-2940 and HDDS-2942

2020-05-18 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-3596:
-
Summary: Clean up unused code after HDDS-2940 and HDDS-2942  (was: Clean up 
unused code after HDDS-2940)

> Clean up unused code after HDDS-2940 and HDDS-2942
> --
>
> Key: HDDS-3596
> URL: https://issues.apache.org/jira/browse/HDDS-3596
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>
> It seems some snippets of code should be removed as HDDS-2940 is committed.
> For example 
> [this|https://github.com/apache/hadoop-ozone/blob/ffb340e32460ccaa2eae557f0bb71fb90d7ebc7a/hadoop-ozone/ozonefs/src/main/java/org/apache/hadoop/fs/ozone/BasicOzoneFileSystem.java#L495-L499]:
> {code:java|title=BasicOzoneFileSystem#delete}
> if (result) {
>   // If this delete operation removes all files/directories from the
>   // parent directory, then an empty parent directory must be created.
>   createFakeParentDirectory(f);
> }
> {code}
> (Found at 
> https://github.com/apache/hadoop-ozone/pull/906#discussion_r424873030)






[GitHub] [hadoop-ozone] arp7 commented on pull request #926: HDDS-3596. Clean up unused code after HDDS-2940 and HDDS-2942

2020-05-18 Thread GitBox


arp7 commented on pull request #926:
URL: https://github.com/apache/hadoop-ozone/pull/926#issuecomment-630447550


   /pending We may need to delay fixing this until 
https://issues.apache.org/jira/browse/HDDS-2942 is committed.






[GitHub] [hadoop-ozone] bharatviswa504 commented on pull request #915: HDDS-3563. Make s3GateWay s3v volume configurable.

2020-05-18 Thread GitBox


bharatviswa504 commented on pull request #915:
URL: https://github.com/apache/hadoop-ozone/pull/915#issuecomment-630445839


   Yes, agreed: the long-term solution will be bind mounts.
   
   If this PR solves the problem temporarily for Tencent, I am fine with it.






[jira] [Commented] (HDDS-3589) Support running HBase on Ozone.

2020-05-18 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110624#comment-17110624
 ] 

Wei-Chiu Chuang commented on HDDS-3589:
---

Yes please make it a PR.

Some quick comments:

Which version of HBase was tested? How was it tested? Does HBase store HFiles 
on Ozone, or does it also write the WAL on Ozone?

{code}
public void hflush() throws IOException {
Thread.dumpStack();
{code}
The Thread.dumpStack() looks redundant; you will want to remove it from 
production code.

The hasCapability() was added by HDFS-11644, Hadoop 2.9 and above. This would 
limit Ozone's Hadoop support.
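As an illustration of the hflush() pattern under discussion, a self-contained sketch: the `Syncable` interface here is a simplified stand-in for Hadoop's (whose hflush() throws IOException), and the commit step stands in for committing the open key to the OzoneManager. None of this is the actual patch.

```java
import java.io.ByteArrayOutputStream;

public class HflushSketch {
  // Simplified stand-in for Hadoop's Syncable interface.
  interface Syncable {
    void hflush();
  }

  // Buffers writes; hflush() marks everything written so far as durable,
  // analogous to committing the open key so readers can see the data.
  static class CommittingStream extends ByteArrayOutputStream
      implements Syncable {
    private int committedLength = 0;

    @Override
    public void hflush() {
      committedLength = size(); // commit point: data before this is visible
    }

    int committedLength() {
      return committedLength;
    }
  }

  public static void main(String[] args) {
    CommittingStream out = new CommittingStream();
    byte[] row1 = "row1".getBytes();
    out.write(row1, 0, row1.length);
    out.hflush();
    byte[] row2 = "row2".getBytes();
    out.write(row2, 0, row2.length);
    // Only the bytes written before hflush() are committed.
    System.out.println(out.committedLength()); // 4
  }
}
```

This is the property HBase relies on for WAL durability: once hflush() returns, the data written so far must survive and be readable.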

> Support running HBase on Ozone.
> ---
>
> Key: HDDS-3589
> URL: https://issues.apache.org/jira/browse/HDDS-3589
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Sadanand Shenoy
>Assignee: Sadanand Shenoy
>Priority: Major
> Attachments: Hflush_impl.patch
>
>
> The aim of this Jira is to support Hbase to run on top of Ozone. In order to 
> achieve this , the Syncable interface was implemented which contains the 
> hflush() API which basically commits an open key into OM.






[jira] [Updated] (HDDS-3574) Implement ofs://: Override getTrashRoot

2020-05-18 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-3574:
-
Status: In Progress  (was: Patch Available)

> Implement ofs://: Override getTrashRoot
> ---
>
> Key: HDDS-3574
> URL: https://issues.apache.org/jira/browse/HDDS-3574
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> [~pifta] found that if we delete a file with the Hadoop shell, namely 
> {{hadoop fs -rm}} without the {{-skipTrash}} option, the operation fails in 
> OFS because the client renames the file to {{/user//.Trash/}}, and renaming 
> across different buckets is not allowed in Ozone. (Unless the file happens to 
> be under {{/user//}}, apparently.)
> We could override {{getTrashRoot()}} in {{BasicOzoneFileSystem}} to a dir 
> under the same bucket to mitigate the problem. Thanks [~umamaheswararao] for 
> the suggestion.
> This raises one more problem though: We need to implement trash clean up on 
> OM. Opened HDDS-3575 for this.
> CC [~arp] [~bharat]






[jira] [Created] (HDDS-3612) Allow mounting bucket under other volume

2020-05-18 Thread Attila Doroszlai (Jira)
Attila Doroszlai created HDDS-3612:
--

 Summary: Allow mounting bucket under other volume
 Key: HDDS-3612
 URL: https://issues.apache.org/jira/browse/HDDS-3612
 Project: Hadoop Distributed Data Store
  Issue Type: New Feature
  Components: Ozone Manager
Reporter: Attila Doroszlai
Assignee: Attila Doroszlai


Step 2 from S3 [volume mapping design 
doc|https://github.com/apache/hadoop-ozone/blob/master/hadoop-hdds/docs/content/design/ozone-volume-management.md#solving-the-mapping-problem-2-4-from-the-problem-listing]:

Implement a bind-mount mechanism which makes it possible to mount any 
volume/bucket under the specific "s3" volume.






[jira] [Commented] (HDDS-3499) Address compatibility issue by SCM DB instances change

2020-05-18 Thread Arpit Agarwal (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110543#comment-17110543
 ] 

Arpit Agarwal commented on HDDS-3499:
-

[~timmylicheng] can you confirm if this unblocks your upgrade? Please let us 
know if you need any more information.

> Address compatibility issue by SCM DB instances change
> --
>
> Key: HDDS-3499
> URL: https://issues.apache.org/jira/browse/HDDS-3499
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Li Cheng
>Assignee: Marton Elek
>Priority: Blocker
>
> After https://issues.apache.org/jira/browse/HDDS-3172, SCM now has one single 
> rocksdb instance instead of multiple db instances. 
> For running Ozone clusters, we need to address compatibility issues. One 
> possible way is to have a standalone tool that migrates the old metadata from 
> the multiple DBs to the current single DB.






[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #874: HDDS-3493. Refactor Failures in MiniOzoneChaosCluster into pluggable model.

2020-05-18 Thread GitBox


mukul1987 commented on a change in pull request #874:
URL: https://github.com/apache/hadoop-ozone/pull/874#discussion_r426821315



##
File path: 
hadoop-ozone/fault-injection-test/mini-chaos-tests/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
##
@@ -406,19 +280,71 @@ public MiniOzoneChaosCluster build() throws IOException {
   final List hddsDatanodes = createHddsDatanodes(
   scm, null);
 
-  MiniOzoneChaosCluster cluster;
-  if (failureService == FailureService.DATANODE) {
-cluster = new MiniOzoneDatanodeChaosCluster(conf, omList, scm,
-hddsDatanodes, omServiceId);
-  } else {
-cluster = new MiniOzoneOMChaosCluster(conf, omList, scm,
-hddsDatanodes, omServiceId);
-  }
+  MiniOzoneChaosCluster cluster =
+  new MiniOzoneChaosCluster(conf, omList, scm, hddsDatanodes,
+  omServiceId, clazzes);
 
   if (startDataNodes) {
 cluster.startHddsDatanodes();
   }
   return cluster;
 }
   }
+
+  // OzoneManager specific
+  public List omToFail() {
+int numNodesToFail = FailureManager.getNumberOfOmToFail();
+if (failedOmSet.size() >= numOzoneManagers/2) {
+  return Collections.emptyList();
+}
+
+int numOms = getOzoneManagersList().size();
+List oms = new ArrayList<>(numNodesToFail);
+for (int i = 0; i < numNodesToFail; i++) {
+  int failedNodeIndex = FailureManager.getBoundedRandomIndex(numOms);
+  oms.add(getOzoneManager(failedNodeIndex));
+}
+return oms;
+  }
+
+  public void shutdownOzoneManager(OzoneManager om) {
+super.shutdownOzoneManager(om);
+failedOmSet.add(om);
+  }
+
+  public void restartOzoneManager(OzoneManager om, boolean waitForOM)
+  throws IOException, TimeoutException, InterruptedException {
+super.restartOzoneManager(om, waitForOM);
+failedOmSet.remove(om);
+  }
+
+  // Should the selected node be stopped or started.
+  public boolean shouldStop() {
+if (failedOmSet.size() >= numOzoneManagers/2) {
+  return false;
+}
+return RandomUtils.nextBoolean();
+  }
+
+  public List dnToFail() {
+int numNodesToFail = FailureManager.getNumberOfDnToFail();
+int numDns = getHddsDatanodes().size();
+List dns = new ArrayList<>(numNodesToFail);
+for (int i = 0; i < numNodesToFail; i++) {
+  int failedNodeIndex = FailureManager.getBoundedRandomIndex(numDns);
+  dns.add(getHddsDatanodes().get(failedNodeIndex).getDatanodeDetails());
+}
+return dns;
+  }
+  
+  @Override
+  public void restartHddsDatanode(DatanodeDetails dn, boolean waitForDatanode)
+  throws InterruptedException, TimeoutException, IOException {
+super.restartHddsDatanode(dn, waitForDatanode);
+  }
+
+  @Override
+  public void shutdownHddsDatanode(int i) {
+super.shutdownHddsDatanode(i);
+  }

Review comment:
   Yes, they are getting used now.








[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #874: HDDS-3493. Refactor Failures in MiniOzoneChaosCluster into pluggable model.

2020-05-18 Thread GitBox


adoroszlai commented on a change in pull request #874:
URL: https://github.com/apache/hadoop-ozone/pull/874#discussion_r426810821



##
File path: 
hadoop-ozone/fault-injection-test/mini-chaos-tests/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
##
@@ -406,19 +280,71 @@ public MiniOzoneChaosCluster build() throws IOException {
   final List hddsDatanodes = createHddsDatanodes(
   scm, null);
 
-  MiniOzoneChaosCluster cluster;
-  if (failureService == FailureService.DATANODE) {
-cluster = new MiniOzoneDatanodeChaosCluster(conf, omList, scm,
-hddsDatanodes, omServiceId);
-  } else {
-cluster = new MiniOzoneOMChaosCluster(conf, omList, scm,
-hddsDatanodes, omServiceId);
-  }
+  MiniOzoneChaosCluster cluster =
+  new MiniOzoneChaosCluster(conf, omList, scm, hddsDatanodes,
+  omServiceId, clazzes);
 
   if (startDataNodes) {
 cluster.startHddsDatanodes();
   }
   return cluster;
 }
   }
+
+  // OzoneManager specific
+  public List omToFail() {
+int numNodesToFail = FailureManager.getNumberOfOmToFail();
+if (failedOmSet.size() >= numOzoneManagers/2) {
+  return Collections.emptyList();
+}
+
+int numOms = getOzoneManagersList().size();
+List oms = new ArrayList<>(numNodesToFail);
+for (int i = 0; i < numNodesToFail; i++) {
+  int failedNodeIndex = FailureManager.getBoundedRandomIndex(numOms);
+  oms.add(getOzoneManager(failedNodeIndex));

Review comment:
   There are two concerns with adding the same node multiple times:
   
   1. trying to fail the same nodes multiple times
   2. not failing the required number of nodes
   
   I think using a set addresses the first item, but not the second one.  This 
could be fixed by generating a random combination of the required size instead 
of generating multiple independent random indexes.
   
   I guess that's OK to address in a followup, because this was broken 
previously, too.
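The suggested fix, selecting a random combination of distinct indexes rather than independent random draws, could be sketched as below; the method name and shape are illustrative, not the actual FailureManager API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RandomCombination {
  // Pick numToFail distinct indexes out of [0, total): shuffle the full
  // index list and take a prefix. No index can repeat, and exactly
  // numToFail nodes are selected, which addresses both concerns above.
  static List<Integer> pickDistinct(int total, int numToFail) {
    List<Integer> indexes = new ArrayList<>();
    for (int i = 0; i < total; i++) {
      indexes.add(i);
    }
    Collections.shuffle(indexes); // uniform random permutation
    return new ArrayList<>(indexes.subList(0, Math.min(numToFail, total)));
  }

  public static void main(String[] args) {
    // e.g. [3, 0] -- two distinct indexes chosen uniformly at random
    System.out.println(pickDistinct(5, 2));
  }
}
```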








[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #874: HDDS-3493. Refactor Failures in MiniOzoneChaosCluster into pluggable model.

2020-05-18 Thread GitBox


mukul1987 commented on a change in pull request #874:
URL: https://github.com/apache/hadoop-ozone/pull/874#discussion_r426807982



##
File path: 
hadoop-ozone/fault-injection-test/mini-chaos-tests/src/test/java/org/apache/hadoop/ozone/failure/FailureManager.java
##
@@ -0,0 +1,117 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.ozone.failure;
+
+import org.apache.commons.lang3.RandomUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.ozone.MiniOzoneChaosCluster;
+import org.apache.hadoop.util.ReflectionUtils;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.Executors;
+import java.util.concurrent.ScheduledExecutorService;
+import java.util.concurrent.ScheduledFuture;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * Manages all the failures in the MiniOzoneChaosCluster.
+ */
+public class FailureManager {
+
+  static final Logger LOG =
+  LoggerFactory.getLogger(Failures.class);
+
+  private final MiniOzoneChaosCluster cluster;
+  private final List failures;
+  private ScheduledFuture scheduledFuture;
+  private final ScheduledExecutorService executorService;
+  public FailureManager(MiniOzoneChaosCluster cluster,
+Configuration conf,
+List> clazzes) {
+this.cluster = cluster;
+this.executorService = Executors.newSingleThreadScheduledExecutor();
+
+failures = new ArrayList<>();
+for (Class clazz : clazzes) {
+  Failures f = ReflectionUtils.newInstance(clazz, conf);
+  f.validateFailure(cluster.getOzoneManagersList(),
+  cluster.getStorageContainerManager(),
+  cluster.getHddsDatanodes());
+  failures.add(f);
+}
+
+  }
+
+
+  // Fail nodes randomly at configured timeout period.
+  private void fail() {
+try {
+  Failures f = failures.get(getBoundedRandomIndex(failures.size()));
+  LOG.info("time failure with {}", f.getName());
+  f.fail(cluster);
+} catch (Throwable t) {
+  LOG.info("failing with ", t);

Review comment:
   changed the logging here.








[jira] [Assigned] (HDDS-1470) Implement a CLI tool to dump the contents of rocksdb metadata

2020-05-18 Thread Arpit Agarwal (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-1470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reassigned HDDS-1470:
---

Assignee: Sadanand Shenoy  (was: Hrishikesh Gadre)

> Implement a CLI tool to dump the contents of rocksdb metadata
> -
>
> Key: HDDS-1470
> URL: https://issues.apache.org/jira/browse/HDDS-1470
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hrishikesh Gadre
>Assignee: Sadanand Shenoy
>Priority: Minor
>
> The DataNode plugin for Ozone stores the protobuf message as the value in the 
> rocksdb metadata store. Since the protobuf message contents are not human 
> readable, it is difficult to introspect (e.g. for debugging). This Jira is to 
> add a command-line tool to dump the contents of rocksdb database in human 
> readable format (e.g. json or yaml).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[GitHub] [hadoop-ozone] mukul1987 commented on a change in pull request #874: HDDS-3493. Refactor Failures in MiniOzoneChaosCluster into pluggable model.

2020-05-18 Thread GitBox


mukul1987 commented on a change in pull request #874:
URL: https://github.com/apache/hadoop-ozone/pull/874#discussion_r426791389



##
File path: 
hadoop-ozone/fault-injection-test/mini-chaos-tests/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
##
@@ -358,6 +233,11 @@ protected void initializeConfiguration() throws 
IOException {
   conf.setInt("hdds.scm.replication.event.timeout", 20 * 1000);
   conf.setInt(OzoneConfigKeys.DFS_RATIS_SNAPSHOT_THRESHOLD_KEY, 100);
   conf.setInt(OzoneConfigKeys.DFS_CONTAINER_RATIS_LOG_PURGE_GAP, 100);
+
+  conf.setInt(OMConfigKeys.
+  OZONE_OM_RATIS_SNAPSHOT_AUTO_TRIGGER_THRESHOLD_KEY, 100);
+  conf.setInt(OMConfigKeys.
+  OZONE_OM_RATIS_SNAPSHOT_AUTO_TRIGGER_THRESHOLD_KEY, 100);

Review comment:
   yes, this was supposed to be purge gap.

##
File path: 
hadoop-ozone/fault-injection-test/mini-chaos-tests/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
##
@@ -406,19 +280,71 @@ public MiniOzoneChaosCluster build() throws IOException {
   final List hddsDatanodes = createHddsDatanodes(
   scm, null);
 
-  MiniOzoneChaosCluster cluster;
-  if (failureService == FailureService.DATANODE) {
-cluster = new MiniOzoneDatanodeChaosCluster(conf, omList, scm,
-hddsDatanodes, omServiceId);
-  } else {
-cluster = new MiniOzoneOMChaosCluster(conf, omList, scm,
-hddsDatanodes, omServiceId);
-  }
+  MiniOzoneChaosCluster cluster =
+  new MiniOzoneChaosCluster(conf, omList, scm, hddsDatanodes,
+  omServiceId, clazzes);
 
   if (startDataNodes) {
 cluster.startHddsDatanodes();
   }
   return cluster;
 }
   }
+
+  // OzoneManager specific
+  public List omToFail() {
+int numNodesToFail = FailureManager.getNumberOfOmToFail();
+if (failedOmSet.size() >= numOzoneManagers/2) {
+  return Collections.emptyList();
+}
+
+int numOms = getOzoneManagersList().size();
+List oms = new ArrayList<>(numNodesToFail);
+for (int i = 0; i < numNodesToFail; i++) {
+  int failedNodeIndex = FailureManager.getBoundedRandomIndex(numOms);
+  oms.add(getOzoneManager(failedNodeIndex));

Review comment:
   Converted into a HashSet.








[jira] [Commented] (HDDS-1470) Implement a CLI tool to dump the contents of rocksdb metadata

2020-05-18 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110477#comment-17110477
 ] 

Marton Elek commented on HDDS-1470:
---

Are you planning to work on this [~hgadre]? It seems to be a good time to 
finish it, and as far as I know [~sadanand] has started working on a prototype. 
Are you planning to do more work on it, or can we reassign it?

\cc [~msingh]

> Implement a CLI tool to dump the contents of rocksdb metadata
> -
>
> Key: HDDS-1470
> URL: https://issues.apache.org/jira/browse/HDDS-1470
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Minor
>
> The DataNode plugin for Ozone stores the protobuf message as the value in the 
> rocksdb metadata store. Since the protobuf message contents are not human 
> readable, it is difficult to introspect (e.g. for debugging). This Jira is to 
> add a command-line tool to dump the contents of rocksdb database in human 
> readable format (e.g. json or yaml).






[jira] [Created] (HDDS-3611) Ozone client should not consider closed container error as failure

2020-05-18 Thread Lokesh Jain (Jira)
Lokesh Jain created HDDS-3611:
-

 Summary: Ozone client should not consider closed container error 
as failure
 Key: HDDS-3611
 URL: https://issues.apache.org/jira/browse/HDDS-3611
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Lokesh Jain


A ContainerNotOpen exception is thrown by the datanode when a client writes to 
a non-open container. Currently the Ozone client treats this as a failure and 
increments its retry count; if the client reaches the configured retry count, 
it fails the write. MapReduce jobs were seen failing due to this error with the 
default retry count of 5.

The idea is to not count errors caused by a closed container toward the retry 
limit. This would ensure that Ozone client writes do not fail due to 
closed-container exceptions.
{code:java}
2020-05-15 02:20:28,375 ERROR [main] 
org.apache.hadoop.ozone.client.io.KeyOutputStream: Retry request failed. 
retries get failed due to exceeded maximum allowed retries number: 5
java.io.IOException: Unexpected Storage Container Exception: 
java.util.concurrent.CompletionException: 
java.util.concurrent.CompletionException: 
org.apache.ratis.protocol.StateMachineException: 
org.apache.hadoop.hdds.scm.container.common.helpers.ContainerNotOpenException 
from Server e2eec12f-02c5-46e2-9c23-14d6445db219@group-A3BF3ABDC307: Container 
15 in CLOSED state
at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.setIoException(BlockOutputStream.java:551)
at 
org.apache.hadoop.hdds.scm.storage.BlockOutputStream.lambda$writeChunkToContainer$3(BlockOutputStream.java:638)
at 
java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:884)
at 
java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:866)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at 
org.apache.ratis.client.impl.OrderedAsync$PendingOrderedRequest.setReply(OrderedAsync.java:99)
at 
org.apache.ratis.client.impl.OrderedAsync$PendingOrderedRequest.setReply(OrderedAsync.java:60)
at 
org.apache.ratis.util.SlidingWindow$RequestMap.setReply(SlidingWindow.java:143)
at 
org.apache.ratis.util.SlidingWindow$Client.receiveReply(SlidingWindow.java:314)
at 
org.apache.ratis.client.impl.OrderedAsync.lambda$sendRequest$9(OrderedAsync.java:242)
at 
java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
at 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at 
java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at 
org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers$1.lambda$onNext$0(GrpcClientProtocolClient.java:284)
at java.util.Optional.ifPresent(Optional.java:159)
at 
org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers.handleReplyFuture(GrpcClientProtocolClient.java:340)
at 
org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers.access$100(GrpcClientProtocolClient.java:264)
at 
org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers$1.onNext(GrpcClientProtocolClient.java:284)
at 
org.apache.ratis.grpc.client.GrpcClientProtocolClient$AsyncStreamObservers$1.onNext(GrpcClientProtocolClient.java:267)
at 
org.apache.ratis.thirdparty.io.grpc.stub.ClientCalls$StreamObserverToCallListenerAdapter.onMessage(ClientCalls.java:436)
at 
org.apache.ratis.thirdparty.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$1MessagesAvailable.runInternal(ClientCallImpl.java:658)
...{code}
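The proposed behavior — retry on closed-container errors without counting them against the retry budget — can be sketched roughly as follows. This is a simplified illustration, not Ozone's actual KeyOutputStream logic; all names are hypothetical, and a real implementation would also exclude the closed container and allocate a new block before retrying:

```java
public class RetrySketch {
  // Hypothetical stand-in for the real ContainerNotOpenException.
  static class ContainerNotOpenException extends RuntimeException { }

  interface WriteAttempt {
    void run() throws Exception;
  }

  // Retries a write. Closed-container errors are expected during normal
  // operation (SCM closes containers), so they trigger a retry but do NOT
  // consume the bounded retry budget; all other failures do.
  static void writeWithRetries(WriteAttempt attempt, int maxRetries)
      throws Exception {
    int countedRetries = 0;
    while (true) {
      try {
        attempt.run();
        return;
      } catch (ContainerNotOpenException e) {
        // Not counted: retry on a fresh container. (A real client would
        // allocate a new block here; this sketch assumes a later attempt
        // eventually succeeds.)
        continue;
      } catch (Exception e) {
        if (++countedRetries > maxRetries) {
          throw e;  // budget exhausted: surface the failure
        }
      }
    }
  }
}
```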






[GitHub] [hadoop-ozone] smengcl commented on pull request #865: HDDS-2969. Implement ofs://: Add contract test

2020-05-18 Thread GitBox


smengcl commented on pull request #865:
URL: https://github.com/apache/hadoop-ozone/pull/865#issuecomment-630318136


   #906 is merged. Rebasing this patch in a moment






[jira] [Updated] (HDDS-3494) Implement ofs://: Support volume and bucket deletion

2020-05-18 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-3494:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Implement ofs://: Support volume and bucket deletion
> 
>
> Key: HDDS-3494
> URL: https://issues.apache.org/jira/browse/HDDS-3494
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>
> Question:
> - Do we need to support deleting volumes and buckets in OFS? In OFS the 
> first-level directories are volumes (with the exception of the tmp mount), 
> and the second-level directories are buckets.
>   - If the answer is yes, do we need to add this before merging to master 
> branch?
> Thanks!
> Related: HDDS-2969 (Add contract test)
> CC [~ste...@apache.org] [~arp]






[GitHub] [hadoop-ozone] smengcl merged pull request #906: HDDS-3494. Implement ofs://: Support volume and bucket deletion

2020-05-18 Thread GitBox


smengcl merged pull request #906:
URL: https://github.com/apache/hadoop-ozone/pull/906


   






[GitHub] [hadoop-ozone] smengcl commented on pull request #906: HDDS-3494. Implement ofs://: Support volume and bucket deletion

2020-05-18 Thread GitBox


smengcl commented on pull request #906:
URL: https://github.com/apache/hadoop-ozone/pull/906#issuecomment-630310469


   Will merge this soon. Union of retest 1&2 passes all test suites.






[GitHub] [hadoop-ozone] elek commented on pull request #927: HDDS-3597. using protobuf maven plugin instead of the legacy protoc executable file

2020-05-18 Thread GitBox


elek commented on pull request #927:
URL: https://github.com/apache/hadoop-ozone/pull/927#issuecomment-630309380


   >  Will test it soon.
   
   Tested and worked well on Linux. Bye bye `protoc 2.5`...
   
   I am +1 when the `CONTRIBUTION.md` is updated...






[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #924: HDDS-3595. Add a maven proto file backward compatibility checker in Ozone.

2020-05-18 Thread GitBox


adoroszlai commented on a change in pull request #924:
URL: https://github.com/apache/hadoop-ozone/pull/924#discussion_r426764724



##
File path: pom.xml
##
@@ -1596,8 +1602,29 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
   
 
   
+
+  
+kr.motd.maven
+os-maven-plugin
+${os-maven-plugin.version}
+  
+
 
   
+
+  com.salesforce.servicelibs
+  proto-backwards-compatibility

Review comment:
   ```suggestion
 com.salesforce.servicelibs
 proto-backwards-compatibility
 ${proto-backwards-compatibility.version}
   ```

##
File path: pom.xml
##
@@ -241,6 +242,11 @@ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xs
 
   
 
+  
+com.salesforce.servicelibs
+proto-backwards-compatibility
+${proto-backwards-compatibility.version}
+  

Review comment:
   ```suggestion
   ```








[jira] [Updated] (HDDS-3599) Implement ofs://: Add contract test for HA

2020-05-18 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-3599:
-
Status: Patch Available  (was: Open)

> Implement ofs://: Add contract test for HA
> --
>
> Key: HDDS-3599
> URL: https://issues.apache.org/jira/browse/HDDS-3599
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>
> Add contract tests for HA as well.
> Since adding HA contract tests will mean another ~10 new classes, [~xyao] and 
> I decided to put the HA OFS contract tests in another jira.






[jira] [Created] (HDDS-3610) Test Recon works with MySQL and Postgres

2020-05-18 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDDS-3610:
---

 Summary: Test Recon works with MySQL and Postgres
 Key: HDDS-3610
 URL: https://issues.apache.org/jira/browse/HDDS-3610
 Project: Hadoop Distributed Data Store
  Issue Type: Test
  Components: Ozone Recon
Affects Versions: 0.6.0
Reporter: Stephen O'Donnell


Until now, Recon has only been tested with embedded DBs, SQLite and Derby, 
with Derby being the default.

In theory it should work with MySQL and Postgres without any changes, but we 
would like to verify that.






[jira] [Updated] (HDDS-3574) Implement ofs://: Override getTrashRoot

2020-05-18 Thread Siyao Meng (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siyao Meng updated HDDS-3574:
-
Status: Patch Available  (was: In Progress)

> Implement ofs://: Override getTrashRoot
> ---
>
> Key: HDDS-3574
> URL: https://issues.apache.org/jira/browse/HDDS-3574
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> [~pifta] found that if we delete a file with the Hadoop shell, namely 
> {{hadoop fs -rm}} without the {{-skipTrash}} option, the operation fails in 
> OFS because the client renames the file to {{/user//.Trash/}}, and renaming 
> across different buckets is not allowed in Ozone. (Unless the file happens to 
> be under {{/user//}}, apparently.)
> We could override {{getTrashRoot()}} in {{BasicOzoneFileSystem}} to point to 
> a dir under the same bucket to mitigate the problem. Thanks 
> [~umamaheswararao] for the suggestion.
> This raises one more problem though: We need to implement trash clean up on 
> OM. Opened HDDS-3575 for this.
> CC [~arp] [~bharat]
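The mitigation can be sketched as plain path arithmetic (illustrative only; the real override would live in {{BasicOzoneFileSystem}} and return a Hadoop {{Path}}, and the `TrashRootSketch` name is hypothetical):

```java
public class TrashRootSketch {
  // Given an OFS path /<volume>/<bucket>/..., place the trash root inside
  // the SAME bucket, so that moving a file to trash is a same-bucket
  // rename, which Ozone allows.
  static String trashRootFor(String ofsPath, String username) {
    String[] parts = ofsPath.split("/");
    // parts[0] is empty (leading slash), parts[1]=volume, parts[2]=bucket
    if (parts.length < 3) {
      throw new IllegalArgumentException(
          "expected /<volume>/<bucket>/...: " + ofsPath);
    }
    return "/" + parts[1] + "/" + parts[2] + "/.Trash/" + username;
  }

  public static void main(String[] args) {
    System.out.println(trashRootFor("/vol1/bucket1/dir/key", "alice"));
    // → /vol1/bucket1/.Trash/alice
  }
}
```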






[GitHub] [hadoop-ozone] smengcl edited a comment on pull request #928: HDDS-3599. Implement ofs://: Add contract test for HA

2020-05-18 Thread GitBox


smengcl edited a comment on pull request #928:
URL: https://github.com/apache/hadoop-ozone/pull/928#issuecomment-630301984


   Just a note. There is an integration test timeout issue when running on my 
Mac. Every time a second HA cluster is run (after the first test finishes and 
shuts down the cluster), all the tests in the second test suite time out for 
me, even though the second cluster boots up. But in the pr-check in GitHub 
Actions it runs [just 
fine](https://github.com/apache/hadoop-ozone/pull/928/checks?check_run_id=680451326)
 (it fails for other, unrelated reasons).






[GitHub] [hadoop-ozone] smengcl commented on pull request #928: HDDS-3599. Implement ofs://: Add contract test for HA

2020-05-18 Thread GitBox


smengcl commented on pull request #928:
URL: https://github.com/apache/hadoop-ozone/pull/928#issuecomment-630301984


   Just a note. There is an integration test timeout issue when running on my 
Mac. Every time a second HA cluster is run, all the tests in that test suite 
time out for me. But in the pr-check in GitHub Actions it runs [just 
fine](https://github.com/apache/hadoop-ozone/pull/928/checks?check_run_id=680451326)
 (it fails for other, unrelated reasons).






[GitHub] [hadoop-ozone] mukul1987 commented on pull request #924: HDDS-3595. Add a maven proto file backward compatibility checker in Ozone.

2020-05-18 Thread GitBox


mukul1987 commented on pull request #924:
URL: https://github.com/apache/hadoop-ozone/pull/924#issuecomment-630301975


   @elek, can you please have another look at this change ?






[GitHub] [hadoop-ozone] mukul1987 commented on pull request #924: HDDS-3595. Add a maven proto file backward compatibility checker in Ozone.

2020-05-18 Thread GitBox


mukul1987 commented on pull request #924:
URL: https://github.com/apache/hadoop-ozone/pull/924#issuecomment-630301738


   Do we need to check backward compatibility for the RPC protos or just for 
the persisted data? If only for the persisted data, it would be more useful to 
separate the two RPC sets.
   >> Backward compatibility should be maintained for both sets of protos, for 
express as well as rolling upgrades.
   
   What is the suggested way to use this? I assume that we can make any 
incompatible change by upgrading the lock file together with the proto.
   >> This fails the compilation with an error, so that the user can look into 
the error and identify that the change is backward incompatible.
   
   What should we put into the lock file? Should the lock file from the last 
release be added and upgraded with each release? Or the first/(last?) version 
from the previous major line?
   >> The lock file should be committed along with the source code to identify 
backward-incompatibility issues.






[jira] [Updated] (HDDS-3599) Implement ofs://: Add contract test for HA

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3599:
-
Labels: pull-request-available  (was: )

> Implement ofs://: Add contract test for HA
> --
>
> Key: HDDS-3599
> URL: https://issues.apache.org/jira/browse/HDDS-3599
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
>
> Add contract tests for HA as well.
> Since adding HA contract tests will mean another ~10 new classes, [~xyao] and 
> I decided to put the HA OFS contract tests in another jira.






[GitHub] [hadoop-ozone] smengcl commented on pull request #928: HDDS-3599. Implement ofs://: Add contract test for HA

2020-05-18 Thread GitBox


smengcl commented on pull request #928:
URL: https://github.com/apache/hadoop-ozone/pull/928#issuecomment-630299167


   > Hey Siyao, not sure why, but it looks like 
https://issues.apache.org/jira/browse/HDDS-3559 is linked to this JIRA. I think 
you meant to link to 
[HDDS-3599](https://issues.apache.org/jira/browse/HDDS-3599). Could you 
manually correct it?
   
   Ah sorry about the typo. Thanks @elek for correcting it.






[GitHub] [hadoop-ozone] smengcl commented on pull request #813: HDDS-3309. Add TimedOutTestsListener to surefire

2020-05-18 Thread GitBox


smengcl commented on pull request #813:
URL: https://github.com/apache/hadoop-ozone/pull/813#issuecomment-630296429


   Rebased again since HDDS-3602 is merged.






[GitHub] [hadoop-ozone] mukul1987 commented on pull request #924: HDDS-3595. Add a maven proto file backward compatibility checker in Ozone.

2020-05-18 Thread GitBox


mukul1987 commented on pull request #924:
URL: https://github.com/apache/hadoop-ozone/pull/924#issuecomment-630295146


   @adoroszlai , Thanks for the review. Have addressed the comments in the 
followup patch.






[GitHub] [hadoop-ozone] smengcl commented on pull request #813: HDDS-3309. Add TimedOutTestsListener to surefire

2020-05-18 Thread GitBox


smengcl commented on pull request #813:
URL: https://github.com/apache/hadoop-ozone/pull/813#issuecomment-630282726


   @adoroszlai  I have rebased the jira and added commits similar to #750. I 
have also left all ignored integration test classes untouched by the timeout 
change.
   
   Please take a look. :)






[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #937: HDDS-3607. Lot of warnings at DN startup.

2020-05-18 Thread GitBox


sodonnel commented on a change in pull request #937:
URL: https://github.com/apache/hadoop-ozone/pull/937#discussion_r426732321



##
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java
##
@@ -108,9 +108,11 @@ public long putBlock(Container container, BlockData data) 
throws IOException {
 // transaction is reapplied in the ContainerStateMachine on restart.
 // It also implies that the given block must already exist in the db.
 // just log and return
-LOG.warn("blockCommitSequenceId {} in the Container Db is greater than"
-+ " the supplied value {}. Ignoring it",
-containerBCSId, bcsId);
+if (LOG.isDebugEnabled()) {

Review comment:
   http://www.slf4j.org/faq.html#logging_performance suggests the IF 
wrapper is not needed for performance reasons.








[GitHub] [hadoop-ozone] sodonnel commented on a change in pull request #937: HDDS-3607. Lot of warnings at DN startup.

2020-05-18 Thread GitBox


sodonnel commented on a change in pull request #937:
URL: https://github.com/apache/hadoop-ozone/pull/937#discussion_r426717257



##
File path: 
hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/impl/BlockManagerImpl.java
##
@@ -108,9 +108,11 @@ public long putBlock(Container container, BlockData data) 
throws IOException {
 // transaction is reapplied in the ContainerStateMachine on restart.
 // It also implies that the given block must already exist in the db.
 // just log and return
-LOG.warn("blockCommitSequenceId {} in the Container Db is greater than"
-+ " the supplied value {}. Ignoring it",
-containerBCSId, bcsId);
+if (LOG.isDebugEnabled()) {

Review comment:
   Is it necessary to wrap a debug log entry in the `if 
(LOG.isDebugEnabled())` statement? It is my understanding that, so long as you 
are not doing string interpolation in the log message and are passing simple 
variables as the parameters (i.e. not doing any computation to 
calculate the values being passed), LOG4J does the correct thing and there 
is no performance penalty to removing the IF wrapper.
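   A minimal self-contained sketch of that point, using a hypothetical `MiniLogger` stand-in rather than the real SLF4J API: with parameterized logging, the `{}` substitution only happens after the level check passes, so dropping the guard costs nothing when the arguments are plain variables.

```java
// LogGuardDemo: hypothetical MiniLogger stand-in for an SLF4J Logger,
// showing that deferred {} formatting makes the isDebugEnabled() guard
// redundant when the arguments are plain variables.
public class LogGuardDemo {

    /** Counts how many times a message was actually formatted. */
    static int formatCount = 0;

    /** Mimics SLF4J's deferred {} formatting. */
    static class MiniLogger {
        private final boolean debugEnabled;

        MiniLogger(boolean debugEnabled) {
            this.debugEnabled = debugEnabled;
        }

        boolean isDebugEnabled() {
            return debugEnabled;
        }

        /** Arguments are received, but formatting only happens when enabled. */
        void debug(String template, Object... args) {
            if (!debugEnabled) {
                return;                       // cheap: no string work at all
            }
            formatCount++;
            String msg = template;
            for (Object a : args) {           // naive {} substitution for the demo
                msg = msg.replaceFirst("\\{\\}", String.valueOf(a));
            }
            System.out.println(msg);
        }
    }

    public static void main(String[] args) {
        MiniLogger log = new MiniLogger(false);
        long containerBCSId = 42L;
        long bcsId = 7L;
        // No guard needed: the arguments are plain variables, and the
        // disabled level means no formatting work is done.
        log.debug("blockCommitSequenceId {} is greater than the supplied value {}",
            containerBCSId, bcsId);
        if (formatCount != 0) {
            throw new AssertionError("message was formatted while debug disabled");
        }
        System.out.println("no formatting happened while debug was disabled");
    }
}
```

   The guard only pays off when computing an argument is itself expensive (e.g. building a large string just for the log call).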








[GitHub] [hadoop-ozone] adoroszlai commented on a change in pull request #874: HDDS-3493. Refactor Failures in MiniOzoneChaosCluster into pluggable model.

2020-05-18 Thread GitBox


adoroszlai commented on a change in pull request #874:
URL: https://github.com/apache/hadoop-ozone/pull/874#discussion_r426571715



##
File path: 
hadoop-ozone/fault-injection-test/mini-chaos-tests/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
##
@@ -358,6 +233,11 @@ protected void initializeConfiguration() throws 
IOException {
   conf.setInt("hdds.scm.replication.event.timeout", 20 * 1000);
   conf.setInt(OzoneConfigKeys.DFS_RATIS_SNAPSHOT_THRESHOLD_KEY, 100);
   conf.setInt(OzoneConfigKeys.DFS_CONTAINER_RATIS_LOG_PURGE_GAP, 100);
+
+  conf.setInt(OMConfigKeys.
+  OZONE_OM_RATIS_SNAPSHOT_AUTO_TRIGGER_THRESHOLD_KEY, 100);
+  conf.setInt(OMConfigKeys.
+  OZONE_OM_RATIS_SNAPSHOT_AUTO_TRIGGER_THRESHOLD_KEY, 100);

Review comment:
   Duplicate config setting.  Did you intend to set some other property?

##
File path: 
hadoop-ozone/fault-injection-test/mini-chaos-tests/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
##
@@ -406,19 +280,71 @@ public MiniOzoneChaosCluster build() throws IOException {
   final List hddsDatanodes = createHddsDatanodes(
   scm, null);
 
-  MiniOzoneChaosCluster cluster;
-  if (failureService == FailureService.DATANODE) {
-cluster = new MiniOzoneDatanodeChaosCluster(conf, omList, scm,
-hddsDatanodes, omServiceId);
-  } else {
-cluster = new MiniOzoneOMChaosCluster(conf, omList, scm,
-hddsDatanodes, omServiceId);
-  }
+  MiniOzoneChaosCluster cluster =
+  new MiniOzoneChaosCluster(conf, omList, scm, hddsDatanodes,
+  omServiceId, clazzes);
 
   if (startDataNodes) {
 cluster.startHddsDatanodes();
   }
   return cluster;
 }
   }
+
+  // OzoneManager specifc
+  public List omToFail() {
+int numNodesToFail = FailureManager.getNumberOfOmToFail();
+if (failedOmSet.size() >= numOzoneManagers/2) {
+  return Collections.emptyList();
+}
+
+int numOms = getOzoneManagersList().size();
+List oms = new ArrayList<>(numNodesToFail);
+for (int i = 0; i < numNodesToFail; i++) {
+  int failedNodeIndex = FailureManager.getBoundedRandomIndex(numOms);
+  oms.add(getOzoneManager(failedNodeIndex));
+}
+return oms;
+  }
+
+  public void shutdownOzoneManager(OzoneManager om) {
+super.shutdownOzoneManager(om);
+failedOmSet.add(om);
+  }
+
+  public void restartOzoneManager(OzoneManager om, boolean waitForOM)
+  throws IOException, TimeoutException, InterruptedException {
+super.restartOzoneManager(om, waitForOM);
+failedOmSet.remove(om);
+  }
+
+  // Should the selected node be stopped or started.
+  public boolean shouldStop() {
+if (failedOmSet.size() >= numOzoneManagers/2) {
+  return false;
+}
+return RandomUtils.nextBoolean();
+  }
+
+  public List dnToFail() {
+int numNodesToFail = FailureManager.getNumberOfDnToFail();
+int numDns = getHddsDatanodes().size();
+List dns = new ArrayList<>(numNodesToFail);
+for (int i = 0; i < numNodesToFail; i++) {
+  int failedNodeIndex = FailureManager.getBoundedRandomIndex(numDns);
+  dns.add(getHddsDatanodes().get(failedNodeIndex).getDatanodeDetails());
+}
+return dns;
+  }
+  
+  @Override
+  public void restartHddsDatanode(DatanodeDetails dn, boolean waitForDatanode)
+  throws InterruptedException, TimeoutException, IOException {
+super.restartHddsDatanode(dn, waitForDatanode);
+  }
+
+  @Override
+  public void shutdownHddsDatanode(int i) {
+super.shutdownHddsDatanode(i);
+  }

Review comment:
   Do we need these overrides?

##
File path: 
hadoop-ozone/fault-injection-test/mini-chaos-tests/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
##
@@ -264,12 +121,30 @@ public void shutdown() {
 }
   }
 
+  /**
+   * Check if cluster is ready for a restart or shutdown of an OM node. If
+   * yes, then set isClusterReady to false so that another thread cannot
+   * restart/ shutdown OM till all OMs are up again.
+   */
+  public void waitForClusterToBeReady()

Review comment:
   ```suggestion
 @Override
 public void waitForClusterToBeReady()
   ```

##
File path: 
hadoop-ozone/fault-injection-test/mini-chaos-tests/src/test/java/org/apache/hadoop/ozone/MiniOzoneChaosCluster.java
##
@@ -406,19 +280,71 @@ public MiniOzoneChaosCluster build() throws IOException {
   final List hddsDatanodes = createHddsDatanodes(
   scm, null);
 
-  MiniOzoneChaosCluster cluster;
-  if (failureService == FailureService.DATANODE) {
-cluster = new MiniOzoneDatanodeChaosCluster(conf, omList, scm,
-hddsDatanodes, omServiceId);
-  } else {
-cluster = new MiniOzoneOMChaosCluster(conf, omList, scm,
-hddsDatanodes, omServiceId);
-  }
+  MiniOzoneChaosCluster cluster =
+  new MiniOzoneChaosCluster(conf, 

[GitHub] [hadoop-ozone] adoroszlai commented on pull request #931: HDDS-3602. Fix KeyInputStream by adding a timeout exception.

2020-05-18 Thread GitBox


adoroszlai commented on pull request #931:
URL: https://github.com/apache/hadoop-ozone/pull/931#issuecomment-630240967


   Sorry for being late: can you please explain how adding a timeout fixes the 
test?






[jira] [Commented] (HDDS-1470) Implement a CLI tool to dump the contents of rocksdb metadata

2020-05-18 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-1470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110343#comment-17110343
 ] 

Marton Elek commented on HDDS-1470:
---

I am not sure what the exact plan is here; we had a discussion a few days back:

 1. RocksDB already has an ldb tool which is very close to what we need (it 
provides a generic get / scan interface)
 2. But it doesn't have the logic to parse the key based on the available codec 
(SCMDBDefinition can help, and we can also define different definitions for OM 
and Datanode)
 3. The CLI interface (arguments, subcommands) can be very close to the ldb tool
 4. The old ozone sql tool can be removed
 5. We can support the original ozone sql use case: dump the database as SQL 
queries (it's just a different output format).
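The decoding step in point 2 can be sketched independently of RocksDB: a table-name to codec map turns raw stored bytes into readable output. All names below are illustrative, not the real SCMDBDefinition API.

```java
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// DbDumpSketch: table-driven decoding of raw values into readable lines,
// the core idea behind a human-readable rocksdb dump tool.
public class DbDumpSketch {

    // table name -> decoder from raw bytes to a human-readable line
    static final Map<String, Function<byte[], String>> CODECS = new HashMap<>();
    static {
        // real code would delegate to the protobuf codec registered in the
        // DB definition; here we simply pretend values are UTF-8 text
        CODECS.put("containers",
            b -> "{\"container\": \"" + new String(b, StandardCharsets.UTF_8) + "\"}");
    }

    /** Decode one value, falling back to a raw-bytes note for unknown tables. */
    static String dump(String table, byte[] rawValue) {
        Function<byte[], String> codec = CODECS.get(table);
        if (codec == null) {
            return "(no codec for table " + table
                + ", raw bytes: " + rawValue.length + ")";
        }
        return codec.apply(rawValue);
    }

    public static void main(String[] args) {
        byte[] value = "id=1,state=OPEN".getBytes(StandardCharsets.UTF_8);
        System.out.println(dump("containers", value));
    }
}
```

Swapping the output formatter is then all it takes to emit JSON, YAML, or the old SQL-dump format from the same scan loop.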

> Implement a CLI tool to dump the contents of rocksdb metadata
> -
>
> Key: HDDS-1470
> URL: https://issues.apache.org/jira/browse/HDDS-1470
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Hrishikesh Gadre
>Assignee: Hrishikesh Gadre
>Priority: Minor
>
> The DataNode plugin for Ozone stores the protobuf message as the value in the 
> rocksdb metadata store. Since the protobuf message contents are not human 
> readable, it is difficult to introspect (e.g. for debugging). This Jira is to 
> add a command-line tool to dump the contents of rocksdb database in human 
> readable format (e.g. json or yaml).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: ozone-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: ozone-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-3354) OM HA replay optimization

2020-05-18 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110342#comment-17110342
 ] 

Marton Elek commented on HDDS-3354:
---

Thanks for explaining, and also thanks for the offline updates about this plan. 
It was not clear to me why it was decided to solve the problem in this way 
(there are 1-2 other ways to do the same which were not mentioned as considered 
alternatives. For example: use RocksDB checkpoints and snapshot db together 
with the Ratis log. Or keep the list of the active keys always in memory and 
store only the values in the database.)

But I am fine with this approach, unless it causes significant performance 
degradation. And if I understood well, this is the case: it introduces some new 
IO pressure but also removes a lot of unnecessary queries, which can make the 
overall picture even better than before.

As I know, you have some initial numbers about the performance. I would propose 
to share them here.

Thanks again for explaining it to me offline, multiple times.

> OM HA replay optimization
> -
>
> Key: HDDS-3354
> URL: https://issues.apache.org/jira/browse/HDDS-3354
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
>Priority: Major
> Attachments: OM HA Replay.pdf
>
>
> This Jira is to improve the OM HA replay scenario.
> Attached the design document which discusses about the proposal and issue in 
> detail.






[jira] [Commented] (HDDS-3589) Support running HBase on Ozone.

2020-05-18 Thread Marton Elek (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110335#comment-17110335
 ] 

Marton Elek commented on HDDS-3589:
---

Thanks for working on this, [~sadanand_shenoy].

Can you please upload the patch as a GitHub pull request (when you think it's 
ready)? All of our CI scripts are executed for pull requests only.

> Support running HBase on Ozone.
> ---
>
> Key: HDDS-3589
> URL: https://issues.apache.org/jira/browse/HDDS-3589
> Project: Hadoop Distributed Data Store
>  Issue Type: New Feature
>Reporter: Sadanand Shenoy
>Assignee: Sadanand Shenoy
>Priority: Major
> Attachments: Hflush_impl.patch
>
>
> The aim of this Jira is to support HBase running on top of Ozone. In order to 
> achieve this, the Syncable interface was implemented, which contains the 
> hflush() API that basically commits an open key into OM.
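The Syncable contract described above can be illustrated stand-alone: hflush() makes the bytes written so far visible to readers. This is only a sketch of the contract, not the Ozone implementation; `CommittingStream` and `readCommitted` are hypothetical names.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// HflushSketch: a toy stream whose hflush() "commits" buffered bytes,
// mirroring the visibility guarantee of Hadoop's Syncable interface.
public class HflushSketch {

    /** Mirrors the relevant part of org.apache.hadoop.fs.Syncable. */
    interface Syncable {
        void hflush() throws IOException;
    }

    /** Buffers writes; hflush "commits" them so readers can see them. */
    static class CommittingStream extends ByteArrayOutputStream implements Syncable {
        private byte[] committed = new byte[0];

        @Override
        public void hflush() {
            // in Ozone this step would commit the open key to OM
            committed = toByteArray();
        }

        byte[] readCommitted() {
            return committed.clone();
        }
    }

    public static void main(String[] args) throws IOException {
        CommittingStream out = new CommittingStream();
        out.write("row-1".getBytes());
        System.out.println("visible before hflush: " + out.readCommitted().length); // 0
        out.hflush();
        System.out.println("visible after hflush: " + out.readCommitted().length);  // 5
    }
}
```

HBase's write-ahead log relies on exactly this guarantee, which is why hflush() support is the prerequisite for running HBase on Ozone.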






[GitHub] [hadoop-ozone] mukul1987 commented on pull request #911: HDDS-3564. Update Ozone to latest Ratis Snapshot (0.6.0-3596a58-SNAPSHOT).

2020-05-18 Thread GitBox


mukul1987 commented on pull request #911:
URL: https://github.com/apache/hadoop-ozone/pull/911#issuecomment-630212368


   The frequent test failures here are fixed via HDDS-3601 and HDDS-3602






[GitHub] [hadoop-ozone] elek commented on pull request #920: HDDS-3581. Make replicationfactor can be set as a number

2020-05-18 Thread GitBox


elek commented on pull request #920:
URL: https://github.com/apache/hadoop-ozone/pull/920#issuecomment-630201297


   Thanks for explaining it, @maobaolong.
   
   > not sure if we should define a message StorageClass in the proto file 
which also contains replicationType and replicationFactor, 
   
   Not exactly. My proposal is to include only one "enum/int" in the proto file 
and remove replicationFactor and replicationType. For example, use `1=STANDARD` 
instead of `RATIS/THREE` in the proto file.
   
   The exact meaning of STANDARD(1) can be resolved on server side (and can be 
changed over time). 
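   A sketch of what that single-field wire format could look like (hypothetical field names and numbers, not the actual Ozone proto):

```proto
// Hypothetical sketch: a single storage-class id replaces the
// replicationType/replicationFactor pair on the wire.
enum StorageClassProto {
  STANDARD = 1;   // server-side mapping today could be RATIS/THREE -> CLOSED/THREE
  REDUCED  = 2;   // e.g. RATIS/ONE while open -> CLOSED/TWO when closed
}

message KeyArgs {
  required string keyName = 1;
  optional StorageClassProto storageClass = 2 [default = STANDARD];
  // the replicationType / replicationFactor fields would be removed
}
```

   Because the server resolves the id, the mapping behind STANDARD can evolve without a protocol change.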
   
   > In fact, i have proved Ozone can write 2 replica directly through Ratis... 
   
   I agree with your analysis of the effectiveness of 
`Ratis/THREE->Closed/TWO` or `Ratis/ONE->Closed/TWO`. I was just not sure how 
it is possible to use TWO with Ratis. Do you use a two-node Ratis cluster? It has 
a totally different availability guarantee than Ratis/THREE, as there is no 
majority, just a full quorum. I guess it works only if you have two nodes all the 
time, and it couldn't work when losing either of the nodes.
   
   I am interested if you have more details to share...
   
   **In general**:
   
   I wouldn't like to block this effort. Based on the `storage-class` proposal 
this change is not required any more as it can be replaced with an even more 
generic approach (propagate only the ID of the storage-class instead of 
replication type / factor). 
   
   I am more interested in the consensus about the long-term divergence 
(long-term = until the next release...); if you need this change immediately, I 
am not against it.
   
   **Footnote**:
   
   Just some notes, not closely related. Yesterday I read the [paper of 
ChubaoFS](https://arxiv.org/pdf/1911.03001v1.pdf) and it has a very interesting 
approach. The motivation is different, but they use two different kinds of 
replication for different use cases:
   
   > The storage engine guarantees the strong consistency among the replicas 
through either primary-backup or Raft-based replication protocols. This design 
decision is based on the observations that the former one is not suitable 
for overwrite as the replication performance needs to be compromised, and the 
latter one has write amplification issue as it introduces extra IO of writing 
the log files.
   
   With introducing storage-class AND using specific containers / replication 
(like Erasure Coded containers), our approach can be somewhat similar.
   
   
   
   
   









[GitHub] [hadoop-ozone] maobaolong commented on pull request #920: HDDS-3581. Make replicationfactor can be set as a number

2020-05-18 Thread GitBox


maobaolong commented on pull request #920:
URL: https://github.com/apache/hadoop-ozone/pull/920#issuecomment-630185763


   @elek Thank you for looking at my PR. I have read the mailing list link you 
gave me before, and I agree this approach is one way to solve multi-replication-factor 
support; I think there is no conflict between my PR and the storage-class approach.
   
   This PR aims to remove the limitation on the replication factor in the 
protocol; instead, I use an integer type to transfer the replication factor 
from client to server. If we want to set up some limitation, we can use some 
new configuration. I think this PR is one step on the way to the storage-class 
approach.
   
   Let's talk about the storage-class. 
   - If we set the storage class like this (RATIS/THREE --> CLOSED/TWO), I am not 
sure if we should define a message StorageClass in the proto file which also 
contains replicationType and replicationFactor; if that design follows this PR, 
it can be (RATIS/3 --> CLOSED/2). 
   - BTW, I think (RATIS/THREE --> CLOSED/TWO) is a good way to follow the 
Ratis approach, but it brings extra replica-writing work during the put operation; 
for the write performance of replication factor TWO, Ozone can be slower than 
HDFS.
   - About (RATIS/ONE --> CLOSED/TWO): it is another idea, and it looks good for 
performance, but for data redundancy I think it really brings some risk of 
losing data between ONE->TWO.
   
   So, I think the storage-class approach is a good one, but we should have 
more discussion about its design; after that, it can be a pretty good approach. 
My proposal in this PR only wants to remove the limitation on the replication 
factor; the follow-up work is to support more replication factors when writing 
directly. In fact, I have proved Ozone can write 2 replicas directly through 
Ratis in my private branch, and I will push more PRs after this PR.






[GitHub] [hadoop-ozone] fapifta commented on pull request #937: HDDS-3607. Lot of warnings at DN startup.

2020-05-18 Thread GitBox


fapifta commented on pull request #937:
URL: https://github.com/apache/hadoop-ozone/pull/937#issuecomment-630181617


   Hmm... there was an integration test failure that was not related (the usual 
timeout we see in integration tests). I am unsure how to re-initiate testing, 
but it does not seem to work... :)






[GitHub] [hadoop-ozone] fapifta commented on pull request #937: HDDS-3607. Lot of warnings at DN startup.

2020-05-18 Thread GitBox


fapifta commented on pull request #937:
URL: https://github.com/apache/hadoop-ozone/pull/937#issuecomment-630179610


   /retest






[GitHub] [hadoop-ozone] ChenSammi commented on a change in pull request #935: HDDS-3606. Add datanode port into the printTopology command output

2020-05-18 Thread GitBox


ChenSammi commented on a change in pull request #935:
URL: https://github.com/apache/hadoop-ozone/pull/935#discussion_r426612777



##
File path: 
hadoop-hdds/tools/src/main/java/org/apache/hadoop/hdds/scm/cli/TopologySubcommand.java
##
@@ -116,15 +116,27 @@ private void printOrderedByLocation(List 
nodes) {
 });
   }
 
+  private String formatPortOutput(List ports) {
+StringBuilder sb = new StringBuilder();
+for (int i = 0; i < ports.size(); i++) {
+  HddsProtos.Port port = ports.get(i);
+  sb.append(port.getName() + "=" + port.getValue());
+  if (i < ports.size() -1) {

Review comment:
   Need an extra space between `-` and `1`.








[GitHub] [hadoop-ozone] fapifta commented on pull request #937: HDDS-3607. Lot of warnings at DN startup.

2020-05-18 Thread GitBox


fapifta commented on pull request #937:
URL: https://github.com/apache/hadoop-ozone/pull/937#issuecomment-630166960


   /retest






[jira] [Created] (HDDS-3609) Avoid to use Hadoop3.x IOUtils in Ozone Client

2020-05-18 Thread Marton Elek (Jira)
Marton Elek created HDDS-3609:
-

 Summary: Avoid to use Hadoop3.x IOUtils in Ozone Client 
 Key: HDDS-3609
 URL: https://issues.apache.org/jira/browse/HDDS-3609
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Marton Elek
Assignee: Marton Elek


To support Hadoop 2.x, it's better to avoid using Hadoop 3.x-specific utility 
methods from IOUtils.
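One commonly cited example is `IOUtils.cleanupWithLogger`, which (as an assumption here) is not available in older Hadoop 2.x releases; a client targeting both lines can carry a tiny quiet-close helper of its own. A minimal sketch, with an illustrative helper name:

```java
import java.io.Closeable;
import java.io.IOException;

// CloseQuietly: a dependency-free stand-in for the 3.x-only
// IOUtils.cleanupWithLogger, usable against Hadoop 2.x and 3.x alike.
public final class CloseQuietly {

    private CloseQuietly() { }

    /**
     * Close each resource without propagating IOException to the caller,
     * logging failures to stderr; returns how many resources closed cleanly.
     */
    public static int closeQuietly(Closeable... resources) {
        int closed = 0;
        for (Closeable c : resources) {
            if (c == null) {
                continue;                     // tolerate never-opened resources
            }
            try {
                c.close();
                closed++;
            } catch (IOException e) {
                System.err.println("close failed: " + e.getMessage());
            }
        }
        return closed;
    }

    public static void main(String[] args) {
        Closeable ok = () -> { };
        Closeable bad = () -> { throw new IOException("boom"); };
        int closed = closeQuietly(ok, null, bad);
        System.out.println("cleanly closed: " + closed); // only "ok" succeeds
    }
}
```

In real client code a logger would replace the stderr print, but the point is that nothing here depends on Hadoop 3.x classes.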






[GitHub] [hadoop-ozone] maobaolong commented on pull request #938: HDDS-3608. NPE while process a pipeline report when PipelineQuery absent in query2OpenPipelines

2020-05-18 Thread GitBox


maobaolong commented on pull request #938:
URL: https://github.com/apache/hadoop-ozone/pull/938#issuecomment-630156702


   Hey @linyiqun , would you like to take a look at this PR? Thank you.






[jira] [Updated] (HDDS-3608) NPE while process a pipeline report when PipelineQuery absent in query2OpenPipelines

2020-05-18 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-3608:
-
Labels: pull-request-available  (was: )

> NPE while process a pipeline report when PipelineQuery absent in 
> query2OpenPipelines
> 
>
> Key: HDDS-3608
> URL: https://issues.apache.org/jira/browse/HDDS-3608
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>  Components: SCM
>Affects Versions: 0.6.0
>Reporter: maobaolong
>Assignee: maobaolong
>Priority: Major
>  Labels: pull-request-available
>
> 2020-05-18 19:21:11,171 [EventQueue-PipelineReportForPipelineReportHandler] 
> ERROR org.apache.hadoop.hdds.server.events.SingleThreadExecutor: Error on 
> execution message 
> org.apache.hadoop.hdds.scm.server.SCMDatanodeHeartbeatDispatcher$PipelineReportFromDatanode@501c46e8
> java.lang.NullPointerException
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateMap.updatePipelineState(PipelineStateMap.java:380)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineStateManager.openPipeline(PipelineStateManager.java:132)
> at 
> org.apache.hadoop.hdds.scm.pipeline.SCMPipelineManager.openPipeline(SCMPipelineManager.java:375)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineReportHandler.processPipelineReport(PipelineReportHandler.java:115)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineReportHandler.onMessage(PipelineReportHandler.java:83)
> at 
> org.apache.hadoop.hdds.scm.pipeline.PipelineReportHandler.onMessage(PipelineReportHandler.java:46)
> at 
> org.apache.hadoop.hdds.server.events.SingleThreadExecutor.lambda$onMessage$1(SingleThreadExecutor.java:81)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> at java.lang.Thread.run(Thread.java:748)






[GitHub] [hadoop-ozone] maobaolong opened a new pull request #938: HDDS-3608. NPE while process a pipeline report when PipelineQuery absent in query2OpenPipelines

2020-05-18 Thread GitBox


maobaolong opened a new pull request #938:
URL: https://github.com/apache/hadoop-ozone/pull/938


   ## What changes were proposed in this pull request?
   
   Fix an NPE while SCM receives and handles a datanode pipeline report: 
PipelineStateMap gets the pipelineList from `query2OpenPipelines` and puts 
the `updatedPipeline` into that list, but if the list is null, an NPE occurs.
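   The pattern behind this kind of fix can be shown stand-alone: when a map of state to list may lack an entry, `computeIfAbsent` supplies the list instead of letting a null flow into `.add(...)`. All names here are illustrative, not the actual PipelineStateMap code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// NullSafeIndexDemo: avoid the NPE of map.get(key).add(value) when the
// key has no entry yet, by creating the list on first use.
public class NullSafeIndexDemo {

    // Stand-in for query2OpenPipelines: state name -> open pipeline ids.
    static final Map<String, List<String>> openByState = new ConcurrentHashMap<>();

    /** Add without risking an NPE when the key has no entry yet. */
    static void recordOpen(String state, String pipelineId) {
        openByState.computeIfAbsent(state, k -> new ArrayList<>()).add(pipelineId);
    }

    public static void main(String[] args) {
        recordOpen("RATIS/THREE", "p-1");   // key absent: list is created
        recordOpen("RATIS/THREE", "p-2");   // key present: list is reused
        System.out.println(openByState.get("RATIS/THREE")); // [p-1, p-2]
    }
}
```

   With `ConcurrentHashMap`, `computeIfAbsent` is also atomic, so concurrent report handlers cannot race to create two lists for the same key.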
   
   ## What is the link to the Apache JIRA
   
   https://issues.apache.org/jira/browse/HDDS-3608
   
   ## How was this patch tested?
   
   No need
   






[jira] [Updated] (HDDS-3598) Allow existing buckets to enable trash ability.

2020-05-18 Thread YiSheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-3598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

YiSheng Lien updated HDDS-3598:
---
Description: 
According to consideration 6 of the 
[design_doc|https://issues.apache.org/jira/secure/attachment/12985273/Ozone_Trash_Feature.docx],
this Jira aims to enable existing buckets to modify the trash-ability property.

  was:
According the consideration-6 of [^Ozone_Trash_Feature.docx], 
this Jira aims to enable the existing bucket to modify the property of trash 
ability.


> Allow existing buckets to enable trash ability.
> ---
>
> Key: HDDS-3598
> URL: https://issues.apache.org/jira/browse/HDDS-3598
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: YiSheng Lien
>Assignee: YiSheng Lien
>Priority: Major
>
> According to consideration 6 of the 
> [design_doc|https://issues.apache.org/jira/secure/attachment/12985273/Ozone_Trash_Feature.docx],
> this Jira aims to enable existing buckets to modify the trash-ability property.





