[jira] [Updated] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-19 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui updated HDFS-14998:
---
Component/s: documentation
   Priority: Minor  (was: Major)

> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Minor
>
> After HDFS-14130, we should update the Observer NameNode doc: the Observer 
> NameNode can now run with ZKFC running.






[jira] [Assigned] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-19 Thread Fei Hui (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fei Hui reassigned HDFS-14998:
--

Assignee: Fei Hui

> Update Observer Namenode doc for ZKFC after HDFS-14130
> --
>
> Key: HDFS-14998
> URL: https://issues.apache.org/jira/browse/HDFS-14998
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 3.3.0, 3.2.1, 3.1.3
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
>
> After HDFS-14130, we should update the Observer NameNode doc: the Observer 
> NameNode can now run with ZKFC running.






[jira] [Created] (HDFS-14998) Update Observer Namenode doc for ZKFC after HDFS-14130

2019-11-19 Thread Fei Hui (Jira)
Fei Hui created HDFS-14998:
--

 Summary: Update Observer Namenode doc for ZKFC after HDFS-14130
 Key: HDFS-14998
 URL: https://issues.apache.org/jira/browse/HDFS-14998
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 3.1.3, 3.2.1, 3.3.0
Reporter: Fei Hui


After HDFS-14130, we should update the Observer NameNode doc: the Observer 
NameNode can now run with ZKFC running.






[jira] [Commented] (HDFS-14961) TestDFSZKFailoverController fails consistently

2019-11-19 Thread Fei Hui (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978141#comment-16978141
 ] 

Fei Hui commented on HDFS-14961:


[~ayushtkn] Thanks for your explanation.
After HDFS-14130, the doc for the Observer NameNode should be updated. Will file 
a new jira to fix the doc.

> TestDFSZKFailoverController fails consistently
> --
>
> Key: HDFS-14961
> URL: https://issues.apache.org/jira/browse/HDFS-14961
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14961-01.patch, HDFS-14961-02.patch
>
>
> TestDFSZKFailoverController has been consistently failing with a time out 
> waiting in testManualFailoverWithDFSHAAdmin(). In particular 
> {{waitForHAState(1, HAServiceState.OBSERVER);}}.






[jira] [Commented] (HDFS-14994) Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()

2019-11-19 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978119#comment-16978119
 ] 

Lisheng Sun commented on HDFS-14994:


[~elgoiri] The v003 patch fixed the UT. Could you continue to review it? 
Thank you.

> Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()
> 
>
> Key: HDFS-14994
> URL: https://issues.apache.org/jira/browse/HDFS-14994
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14994.001.patch, HDFS-14994.002.patch, 
> HDFS-14994.003.patch
>
>
> When priority = QUEUE_WITH_CORRUPT_BLOCKS, it means no block in the 
> needed-replication queues needs a replica. 
> In the current code, using continue results in one more pointless check 
> (priority == QUEUE_WITH_CORRUPT_BLOCKS).
> I think it should use break instead of continue.
> {code:java}
>  */
> synchronized List<List<BlockInfo>> chooseLowRedundancyBlocks(
> int blocksToProcess) {
>   final List<List<BlockInfo>> blocksToReconstruct = new ArrayList<>(LEVEL);
>   int count = 0;
>   int priority = 0;
>   for (; count < blocksToProcess && priority < LEVEL; priority++) {
> if (priority == QUEUE_WITH_CORRUPT_BLOCKS) {
>   // do not choose corrupted blocks.
>   continue;
> }
> ...
>
> }
> {code}
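For readers skimming the thread, here is a minimal, self-contained sketch of the 
proposed change. It is only an illustration: the queue type below is a 
simplified stand-in (lists of block IDs rather than the real HDFS classes), and 
it relies on QUEUE_WITH_CORRUPT_BLOCKS being the last priority level, so 
replacing {{continue}} with {{break}} only skips a redundant loop check without 
changing the result.

{code:java}
import java.util.ArrayList;
import java.util.List;

class ChooseLowRedundancyBlocksSketch {
  static final int LEVEL = 5;
  // Assumption: the corrupt-block queue is the last priority level.
  static final int QUEUE_WITH_CORRUPT_BLOCKS = LEVEL - 1;

  // Block IDs (Long) stand in for BlockInfo to keep the sketch self-contained.
  List<List<Long>> chooseLowRedundancyBlocks(List<List<Long>> queues,
      int blocksToProcess) {
    final List<List<Long>> blocksToReconstruct = new ArrayList<>(LEVEL);
    int count = 0;
    for (int priority = 0; count < blocksToProcess && priority < LEVEL;
        priority++) {
      if (priority == QUEUE_WITH_CORRUPT_BLOCKS) {
        // Corrupt blocks are never chosen; since this is the last level,
        // 'break' (the proposal) ends the loop instead of running one more
        // no-op iteration the way 'continue' does.
        break;
      }
      List<Long> chosen = queues.get(priority);
      count += chosen.size();
      blocksToReconstruct.add(chosen);
    }
    return blocksToReconstruct;
  }
}
{code}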






[jira] [Updated] (HDFS-14994) Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()

2019-11-19 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HDFS-14994:
---
Attachment: HDFS-14994.003.patch

> Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()
> 
>
> Key: HDFS-14994
> URL: https://issues.apache.org/jira/browse/HDFS-14994
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14994.001.patch, HDFS-14994.002.patch, 
> HDFS-14994.003.patch
>
>
> When priority = QUEUE_WITH_CORRUPT_BLOCKS, it means no block in the 
> needed-replication queues needs a replica. 
> In the current code, using continue results in one more pointless check 
> (priority == QUEUE_WITH_CORRUPT_BLOCKS).
> I think it should use break instead of continue.
> {code:java}
>  */
> synchronized List<List<BlockInfo>> chooseLowRedundancyBlocks(
> int blocksToProcess) {
>   final List<List<BlockInfo>> blocksToReconstruct = new ArrayList<>(LEVEL);
>   int count = 0;
>   int priority = 0;
>   for (; count < blocksToProcess && priority < LEVEL; priority++) {
> if (priority == QUEUE_WITH_CORRUPT_BLOCKS) {
>   // do not choose corrupted blocks.
>   continue;
> }
> ...
>
> }
> {code}






[jira] [Work logged] (HDDS-2493) Sonar: Locking on a parameter in NetUtils.removeOutscope

2019-11-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2493?focusedWorklogId=346496=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-346496
 ]

ASF GitHub Bot logged work on HDDS-2493:


Author: ASF GitHub Bot
Created on: 20/Nov/19 06:48
Start Date: 20/Nov/19 06:48
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #174: 
HDDS-2493. Sonar: Locking on a parameter in NetUtils.removeOutscope.
URL: https://github.com/apache/hadoop-ozone/pull/174
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 346496)
Time Spent: 20m  (was: 10m)

> Sonar: Locking on a parameter in NetUtils.removeOutscope
> 
>
> Key: HDDS-2493
> URL: https://issues.apache.org/jira/browse/HDDS-2493
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2hKcVY8lQ4ZsNd=false=BUG
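For context, the Sonar rule flagged here is synchronization on a method 
parameter. A generic sketch of the usual fix, with illustrative names rather 
than the actual NetUtils code, is to lock on an object the class itself owns:

{code:java}
import java.util.ArrayList;
import java.util.List;

final class RemoveOutscopeSketch {
  // Lock owned by the class itself, so callers cannot swap or share the
  // monitor object the way a parameter could be.
  private final Object lock = new Object();
  private final List<String> nodes = new ArrayList<>();

  void removeOutscope(String scope) {
    synchronized (lock) {                        // not synchronized(scope)
      nodes.removeIf(n -> n.startsWith(scope));
    }
  }
}
{code}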






[jira] [Resolved] (HDDS-2493) Sonar: Locking on a parameter in NetUtils.removeOutscope

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia resolved HDDS-2493.
-
Resolution: Fixed

> Sonar: Locking on a parameter in NetUtils.removeOutscope
> 
>
> Key: HDDS-2493
> URL: https://issues.apache.org/jira/browse/HDDS-2493
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: SCM
>Affects Versions: 0.5.0
>Reporter: Siddharth Wagle
>Assignee: Siddharth Wagle
>Priority: Major
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2hKcVY8lQ4ZsNd=false=BUG






[jira] [Updated] (HDFS-14961) TestDFSZKFailoverController fails consistently

2019-11-19 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HDFS-14961:

Attachment: HDFS-14961-02.patch

> TestDFSZKFailoverController fails consistently
> --
>
> Key: HDFS-14961
> URL: https://issues.apache.org/jira/browse/HDFS-14961
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14961-01.patch, HDFS-14961-02.patch
>
>
> TestDFSZKFailoverController has been consistently failing with a time out 
> waiting in testManualFailoverWithDFSHAAdmin(). In particular 
> {{waitForHAState(1, HAServiceState.OBSERVER);}}.






[jira] [Updated] (HDDS-2581) Use Java Configs for OM HA

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2581:

Summary: Use Java Configs for OM HA  (was: Make OM Ha config to use Java 
Configs)

> Use Java Configs for OM HA
> --
>
> Key: HDDS-2581
> URL: https://issues.apache.org/jira/browse/HDDS-2581
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> This Jira is created based on the comments from [~aengineer] during HDDS-2536 
> review.
> Can we please use the Java Configs instead of this old-style config to add a 
> config?
>  
> This Jira is to move all OM HA config to the new style (Java config based 
> approach).






[jira] [Commented] (HDFS-14961) TestDFSZKFailoverController fails consistently

2019-11-19 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978106#comment-16978106
 ] 

Ayush Saxena commented on HDFS-14961:
-

Thanks [~ferhui] for taking a look. Initially it was indeed supposed to be like 
that: ZKFC shouldn't be running for the ONN. But post HDFS-14130 it is allowed, 
since that change made ZKFC Observer-aware, and it works in all cases. If you 
check, apart from this race condition every case seems handled: the ONN will not 
participate in the election, and so on.
Of course, stopping the third ZKFC would make the test pass, but I think it 
would break the intent for which it was added. After HDFS-14130, ZKFC is 
supposed to leave the ONN alone and not try converting it to an SNN. Check the 
description of HDFS-14130:

{noformat}
Need to fix automatic failover with ZKFC. Currently it does not know about 
ObserverNodes trying to convert them to SBNs.
{noformat}

If I just fix the test by closing the ZKFC for the third ONN, then it would mean 
ZKFC can run with an ONN only if it starts after the ONN is already up, so that 
ZKFC never sees the NN in a state prior to OBSERVER, which would allow it to 
participate in the election. 

The present fix just ensures the ONN doesn't get instructed by ZKFC, since the 
ONN isn't supposed to participate in the election. That seems safe enough.

[~elgoiri]

bq. If I understand correctly, this is not a flaky test but the logic is not 
correct.

Yes, there seems to be a problem with the logic itself.

bq. Here we are preventing ZKFC making an OBSERVER NN STANDBY, right?

Yes, we are preventing ZKFC from turning the ONN into an SNN, since the ONN 
isn't supposed to participate in the election. 

bq. Do we have any place where we explain the flow?

The flow as in the ZKFC election part? I don't think there is much detail on the 
process, and I have only limited knowledge of the flow myself. The ZKFC managing 
the NameNode states runs in parallel and is independent of the DFSAdmin commands 
instructing state changes. 

bq. We should change the title and adapt the description accordingly.

Sure, will change it accordingly.

> TestDFSZKFailoverController fails consistently
> --
>
> Key: HDFS-14961
> URL: https://issues.apache.org/jira/browse/HDFS-14961
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Íñigo Goiri
>Assignee: Ayush Saxena
>Priority: Major
> Attachments: HDFS-14961-01.patch
>
>
> TestDFSZKFailoverController has been consistently failing with a time out 
> waiting in testManualFailoverWithDFSHAAdmin(). In particular 
> {{waitForHAState(1, HAServiceState.OBSERVER);}}.






[jira] [Work logged] (HDDS-2247) Delete FileEncryptionInfo from KeyInfo when a Key is deleted

2019-11-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2247?focusedWorklogId=346478=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-346478
 ]

ASF GitHub Bot logged work on HDDS-2247:


Author: ASF GitHub Bot
Created on: 20/Nov/19 06:09
Start Date: 20/Nov/19 06:09
Worklog Time Spent: 10m 
  Work Description: bharatviswa504 commented on pull request #200: 
HDDS-2247. Delete FileEncryptionInfo from KeyInfo when a Key is deleted
URL: https://github.com/apache/hadoop-ozone/pull/200
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 346478)
Time Spent: 20m  (was: 10m)

> Delete FileEncryptionInfo from KeyInfo when a Key is deleted
> 
>
> Key: HDDS-2247
> URL: https://issues.apache.org/jira/browse/HDDS-2247
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As part of HDDS-2174 we are deleting the GDPR Encryption Key on the delete 
> file operation.
> However, if KMS is enabled, we skip the GDPR Encryption Key approach when 
> writing a file in a GDPR enforced Bucket.
> {code:java}
> final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
> if (feInfo != null) {
>   KeyProvider.KeyVersion decrypted = getDEK(feInfo);
>   final CryptoOutputStream cryptoOut =
>   new CryptoOutputStream(keyOutputStream,
>   OzoneKMSUtil.getCryptoCodec(conf, feInfo),
>   decrypted.getMaterial(), feInfo.getIV());
>   return new OzoneOutputStream(cryptoOut);
> } else {
>   try{
> GDPRSymmetricKey gk;
> Map<String, String> openKeyMetadata =
> openKey.getKeyInfo().getMetadata();
> if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
>   gk = new GDPRSymmetricKey(
>   openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
>   openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
>   );
>   gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
>   return new OzoneOutputStream(
>   new CipherOutputStream(keyOutputStream, gk.getCipher()));
> }
>   }catch (Exception ex){
> throw new IOException(ex);
>   }
> {code}
> In such a scenario, when KMS is enabled & GDPR is enforced on a bucket, if a 
> user deletes a file, we should delete the {{FileEncryptionInfo}} from KeyInfo 
> before moving it to deletedTable, else we cannot guarantee the Right to Erasure.
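A minimal sketch of that delete-path idea (drop the encryption info and GDPR 
metadata from the key record before it is moved to the deleted table). The 
KeyRecord type, field names and metadata keys below are hypothetical stand-ins 
for OmKeyInfo and OzoneConsts, not the actual OM code:

{code:java}
import java.util.HashMap;
import java.util.Map;

final class DeleteKeySketch {
  // Hypothetical stand-in for OmKeyInfo.
  static final class KeyRecord {
    Object fileEncryptionInfo;                 // stand-in for FileEncryptionInfo
    final Map<String, String> metadata = new HashMap<>();
  }

  // Called before the record is moved to the deleted table.
  static KeyRecord prepareForDeletedTable(KeyRecord key) {
    key.fileEncryptionInfo = null;             // forget the KMS encryption info
    key.metadata.remove("gdprSecret");         // hypothetical GDPR metadata keys
    key.metadata.remove("gdprAlgorithm");
    return key;
  }
}
{code}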






[jira] [Resolved] (HDDS-2247) Delete FileEncryptionInfo from KeyInfo when a Key is deleted

2019-11-19 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham resolved HDDS-2247.
--
Fix Version/s: 0.5.0
   Resolution: Fixed

> Delete FileEncryptionInfo from KeyInfo when a Key is deleted
> 
>
> Key: HDDS-2247
> URL: https://issues.apache.org/jira/browse/HDDS-2247
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Dinesh Chitlangia
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> As part of HDDS-2174 we are deleting the GDPR Encryption Key on the delete 
> file operation.
> However, if KMS is enabled, we skip the GDPR Encryption Key approach when 
> writing a file in a GDPR enforced Bucket.
> {code:java}
> final FileEncryptionInfo feInfo = keyOutputStream.getFileEncryptionInfo();
> if (feInfo != null) {
>   KeyProvider.KeyVersion decrypted = getDEK(feInfo);
>   final CryptoOutputStream cryptoOut =
>   new CryptoOutputStream(keyOutputStream,
>   OzoneKMSUtil.getCryptoCodec(conf, feInfo),
>   decrypted.getMaterial(), feInfo.getIV());
>   return new OzoneOutputStream(cryptoOut);
> } else {
>   try{
> GDPRSymmetricKey gk;
> Map<String, String> openKeyMetadata =
> openKey.getKeyInfo().getMetadata();
> if(Boolean.valueOf(openKeyMetadata.get(OzoneConsts.GDPR_FLAG))){
>   gk = new GDPRSymmetricKey(
>   openKeyMetadata.get(OzoneConsts.GDPR_SECRET),
>   openKeyMetadata.get(OzoneConsts.GDPR_ALGORITHM)
>   );
>   gk.getCipher().init(Cipher.ENCRYPT_MODE, gk.getSecretKey());
>   return new OzoneOutputStream(
>   new CipherOutputStream(keyOutputStream, gk.getCipher()));
> }
>   }catch (Exception ex){
> throw new IOException(ex);
>   }
> {code}
> In such a scenario, when KMS is enabled & GDPR is enforced on a bucket, if a 
> user deletes a file, we should delete the {{FileEncryptionInfo}} from KeyInfo 
> before moving it to deletedTable, else we cannot guarantee the Right to Erasure.






[jira] [Commented] (HDFS-14994) Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()

2019-11-19 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978095#comment-16978095
 ] 

Lisheng Sun commented on HDFS-14994:


[~surendrasingh] 

I mean someone is only going to add a priority less than 
QUEUE_WITH_CORRUPT_BLOCKS.

> Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()
> 
>
> Key: HDFS-14994
> URL: https://issues.apache.org/jira/browse/HDFS-14994
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14994.001.patch, HDFS-14994.002.patch
>
>
> When priority = QUEUE_WITH_CORRUPT_BLOCKS, it means no block in the 
> needed-replication queues needs a replica. 
> In the current code, using continue results in one more pointless check 
> (priority == QUEUE_WITH_CORRUPT_BLOCKS).
> I think it should use break instead of continue.
> {code:java}
>  */
> synchronized List<List<BlockInfo>> chooseLowRedundancyBlocks(
> int blocksToProcess) {
>   final List<List<BlockInfo>> blocksToReconstruct = new ArrayList<>(LEVEL);
>   int count = 0;
>   int priority = 0;
>   for (; count < blocksToProcess && priority < LEVEL; priority++) {
> if (priority == QUEUE_WITH_CORRUPT_BLOCKS) {
>   // do not choose corrupted blocks.
>   continue;
> }
> ...
>
> }
> {code}






[jira] [Created] (HDDS-2581) Make OM Ha config to use Java Configs

2019-11-19 Thread Bharat Viswanadham (Jira)
Bharat Viswanadham created HDDS-2581:


 Summary: Make OM Ha config to use Java Configs
 Key: HDDS-2581
 URL: https://issues.apache.org/jira/browse/HDDS-2581
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Bharat Viswanadham


This Jira is created based on the comments from [~aengineer] during HDDS-2536 
review.

Can we please use the Java Configs instead of this old-style config to add a 
config?

 

This Jira is to move all OM HA config to the new style (Java config based 
approach).
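
For anyone picking this up, here is a minimal illustrative sketch of the 
annotation-based (Java config) style, assuming the @Config/@ConfigGroup 
annotations from the hadoop-hdds-config framework; the class name and keys are 
made-up examples, not the actual OM HA settings:

{code:java}
import org.apache.hadoop.hdds.conf.Config;
import org.apache.hadoop.hdds.conf.ConfigGroup;
import org.apache.hadoop.hdds.conf.ConfigTag;
import org.apache.hadoop.hdds.conf.ConfigType;

// Hypothetical example class; real OM HA keys and names will differ.
@ConfigGroup(prefix = "ozone.om.ha")
public class OmHAConfigExample {

  @Config(key = "example.enabled",
      defaultValue = "false",
      type = ConfigType.BOOLEAN,
      tags = {ConfigTag.OZONE},
      description = "Hypothetical flag, shown only to illustrate the "
          + "annotation-based config style.")
  private boolean exampleEnabled;

  public boolean isExampleEnabled() {
    return exampleEnabled;
  }

  public void setExampleEnabled(boolean exampleEnabled) {
    this.exampleEnabled = exampleEnabled;
  }
}
{code}

If I remember correctly, such a class is then loaded with 
OzoneConfiguration#getObject, which keeps keys, defaults and documentation next 
to the code instead of in ozone-default.xml.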






[jira] [Updated] (HDDS-2581) Make OM Ha config to use Java Configs

2019-11-19 Thread Bharat Viswanadham (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HDDS-2581:
-
Labels: newbie  (was: )

> Make OM Ha config to use Java Configs
> -
>
> Key: HDDS-2581
> URL: https://issues.apache.org/jira/browse/HDDS-2581
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Bharat Viswanadham
>Priority: Major
>  Labels: newbie
>
> This Jira is created based on the comments from [~aengineer] during HDDS-2536 
> review.
> Can we please use the Java Configs instead of this old-style config to add a 
> config?
>  
> This Jira is to move all OM HA config to the new style (Java config based 
> approach).






[jira] [Commented] (HDFS-2542) Transparent compression storage in HDFS

2019-11-19 Thread Xinli Shang (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-2542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978088#comment-16978088
 ] 

Xinli Shang commented on HDFS-2542:
---

Any update on this? 

> Transparent compression storage in HDFS
> ---
>
> Key: HDFS-2542
> URL: https://issues.apache.org/jira/browse/HDFS-2542
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: jinglong.liujl
>Priority: Major
> Attachments: tranparent compress storage.docx
>
>
> As with HDFS-2115, we want to provide a mechanism to improve storage usage in 
> hdfs by compression. Different from HDFS-2115, this issue focuses on compressed 
> storage. The idea is roughly as below:
> To do:
> 1. Compress cold data.
>Cold data: after writing (or the last read), the data has not been touched 
> by anyone for a long time.
>Hot data: after writing, many clients will read it; maybe it'll be deleted 
> soon.
>
>Because hot data compression is not cost-effective, we only compress cold 
> data. 
>In some cases, some data in a file is accessed with high frequency, while in 
> the same file other data may be cold. 
> To distinguish them, we compress at the block level.
> 2. Compress data which has a high compression ratio.
>To decide between high and low compression ratios, we try to compress the 
> data; if the compression ratio is too low, we never compress it.
> 3. Forward compatibility.
> After compression, the data format on the datanode has changed, and old 
> clients cannot access it. To solve this issue, we provide a mechanism which 
> decompresses on the datanode.
> 4. Support random access and append.
>As in HDFS-2115, random access can be supported by an index. We split the 
> data before compression into fixed-length pieces (we call these fixed-length 
> pieces "chunks"), and every chunk has its index.
> For random access, we can seek to the nearest index and read that chunk to 
> reach the precise position.
> 5. Async compression, to avoid compression slowing down running jobs.
>In practice, we found the cluster CPU usage is not uniform. Some clusters 
> are idle at night, and others are idle in the afternoon. We should make the 
> compression task run at full speed when the cluster is idle, and at low speed 
> when it is busy.
> Will do:
> 1. Client-specified codec and support for compressed transmission.
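A minimal sketch of the chunk-index arithmetic behind point 4 (fixed-length 
uncompressed chunks plus an index from chunk number to compressed offset). The 
chunk size and types are assumptions for illustration, not part of the design 
doc:

{code:java}
final class ChunkIndexSketch {
  // Assumed fixed uncompressed chunk length; the real value would be a config.
  static final int CHUNK_SIZE = 64 * 1024;

  // index[i] = byte offset of compressed chunk i within the compressed block.
  static long compressedOffsetFor(long uncompressedPos, long[] index) {
    int chunk = (int) (uncompressedPos / CHUNK_SIZE);
    return index[chunk];
  }

  // Bytes to discard inside the decompressed chunk to land on the exact position.
  static int offsetWithinChunk(long uncompressedPos) {
    return (int) (uncompressedPos % CHUNK_SIZE);
  }
}
{code}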






[jira] [Commented] (HDFS-14940) HDFS Balancer : Do not allow to set balancer maximum network bandwidth more than 1TB

2019-11-19 Thread hemanthboyina (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978086#comment-16978086
 ] 

hemanthboyina commented on HDFS-14940:
--

Thanks for the review, [~surendrasingh] [~ayushtkn].

Updated the patch, please review.

> HDFS Balancer : Do not allow to set balancer maximum network bandwidth more 
> than 1TB
> 
>
> Key: HDFS-14940
> URL: https://issues.apache.org/jira/browse/HDFS-14940
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.1.1
> Environment: 3 Node HA Setup
>Reporter: Souryakanta Dwivedy
>Assignee: hemanthboyina
>Priority: Minor
> Attachments: BalancerBW.PNG, HDFS-14940.001.patch, 
> HDFS-14940.002.patch, HDFS-14940.003.patch, HDFS-14940.004.patch
>
>
> HDFS Balancer : getBalancerBandwidth displays wrong values for the maximum 
> network bandwidth used by the datanode
>  when the network bandwidth is set with values such as 1048576000g/1048p/1e
> Steps :-
>  * Set the balancer bandwidth with the setBalancerBandwidth command and values 
> such as [1048576000g/1048p/1e]
>  * Check the bandwidth used by the datanode during HDFS block balancing with 
> the command "hdfs dfsadmin -getBalancerBandwidth"; it will display different 
> values, not the same value as set
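A minimal sketch of the kind of bound check this issue asks for (reject values 
above 1 TB/s). The constant, method name and placement (client vs. server side, 
see the later comments) are assumptions, not the actual patch:

{code:java}
final class BalancerBandwidthCheck {
  // Assumed upper bound: 1 TB/s expressed in bytes per second.
  static final long MAX_BANDWIDTH_PER_SEC = 1L << 40;

  static void validate(long bandwidthPerSec) {
    if (bandwidthPerSec < 0 || bandwidthPerSec > MAX_BANDWIDTH_PER_SEC) {
      throw new IllegalArgumentException("Balancer bandwidth "
          + bandwidthPerSec + " bytes/s is outside the allowed range [0, "
          + MAX_BANDWIDTH_PER_SEC + "]");
    }
  }
}
{code}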






[jira] [Assigned] (HDFS-14940) HDFS Balancer : Do not allow to set balancer maximum network bandwidth more than 1TB

2019-11-19 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina reassigned HDFS-14940:


Assignee: hemanthboyina

> HDFS Balancer : Do not allow to set balancer maximum network bandwidth more 
> than 1TB
> 
>
> Key: HDFS-14940
> URL: https://issues.apache.org/jira/browse/HDFS-14940
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.1.1
> Environment: 3 Node HA Setup
>Reporter: Souryakanta Dwivedy
>Assignee: hemanthboyina
>Priority: Minor
> Attachments: BalancerBW.PNG, HDFS-14940.001.patch, 
> HDFS-14940.002.patch, HDFS-14940.003.patch, HDFS-14940.004.patch
>
>
> HDFS Balancer : getBalancerBandwidth displays wrong values for the maximum 
> network bandwidth used by the datanode
>  when the network bandwidth is set with values such as 1048576000g/1048p/1e
> Steps :-
>  * Set the balancer bandwidth with the setBalancerBandwidth command and values 
> such as [1048576000g/1048p/1e]
>  * Check the bandwidth used by the datanode during HDFS block balancing with 
> the command "hdfs dfsadmin -getBalancerBandwidth"; it will display different 
> values, not the same value as set






[jira] [Updated] (HDDS-2579) Ozone client should refresh pipeline info if reads from all Datanodes fail.

2019-11-19 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan updated HDDS-2579:

Description: 
Currently, if the client's reads from all Datanodes in the pipeline fail, the 
read fails altogether. There may be a case where the container has been moved to 
a new pipeline by the time the client reads. In this case, the client should 
request a pipeline refresh from OM, and read again if the new pipeline returned 
from OM is different. 

This behavior is consistent with that of HDFS.
cc [~msingh] / [~shashikant] / [~hanishakoneru]

  was:Currently, if the client reads from all Datanodes in the pipleine fail, 
the read fails altogether. There may be a case when the container is moved to a 
new pipeline by the time client reads. In this case, the client should request 
for a refresh pipeline from OM, and read it again if the new pipeline returned 
from OM is different. 


> Ozone client should refresh pipeline info if reads from all Datanodes fail.
> ---
>
> Key: HDDS-2579
> URL: https://issues.apache.org/jira/browse/HDDS-2579
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Client
>Reporter: Aravindan Vijayan
>Assignee: Aravindan Vijayan
>Priority: Major
> Fix For: 0.5.0
>
>
> Currently, if the client's reads from all Datanodes in the pipeline fail, the 
> read fails altogether. There may be a case where the container has been moved 
> to a new pipeline by the time the client reads. In this case, the client 
> should request a pipeline refresh from OM, and read again if the new pipeline 
> returned from OM is different. 
> This behavior is consistent with that of HDFS.
> cc [~msingh] / [~shashikant] / [~hanishakoneru]
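A minimal sketch of that retry flow, with hypothetical callbacks 
(readFromPipeline, refreshPipelineFromOm are illustrative names, not real Ozone 
client APIs) standing in for the client internals:

{code:java}
import java.io.IOException;
import java.util.function.Function;
import java.util.function.Supplier;

final class RefreshPipelineOnReadFailure {
  static byte[] read(String currentPipelineId,
      Function<String, byte[]> readFromPipeline,
      Supplier<String> refreshPipelineFromOm) throws IOException {
    try {
      // First attempt: try every datanode in the pipeline we already know.
      return readFromPipeline.apply(currentPipelineId);
    } catch (RuntimeException allDatanodesFailed) {
      // All datanodes failed: ask OM where the container lives now.
      String refreshedId = refreshPipelineFromOm.get();
      if (!refreshedId.equals(currentPipelineId)) {
        return readFromPipeline.apply(refreshedId); // retry once on new pipeline
      }
      throw new IOException("Read failed and pipeline unchanged",
          allDatanodesFailed);
    }
  }
}
{code}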






[jira] [Updated] (HDFS-14940) HDFS Balancer : Do not allow to set balancer maximum network bandwidth more than 1TB

2019-11-19 Thread hemanthboyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hemanthboyina updated HDFS-14940:
-
Attachment: HDFS-14940.004.patch

> HDFS Balancer : Do not allow to set balancer maximum network bandwidth more 
> than 1TB
> 
>
> Key: HDFS-14940
> URL: https://issues.apache.org/jira/browse/HDFS-14940
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.1.1
> Environment: 3 Node HA Setup
>Reporter: Souryakanta Dwivedy
>Priority: Minor
> Attachments: BalancerBW.PNG, HDFS-14940.001.patch, 
> HDFS-14940.002.patch, HDFS-14940.003.patch, HDFS-14940.004.patch
>
>
> HDFS Balancer : getBalancerBandwidth displays wrong values for the maximum 
> network bandwidth used by the datanode
>  when the network bandwidth is set with values such as 1048576000g/1048p/1e
> Steps :-
>  * Set the balancer bandwidth with the setBalancerBandwidth command and values 
> such as [1048576000g/1048p/1e]
>  * Check the bandwidth used by the datanode during HDFS block balancing with 
> the command "hdfs dfsadmin -getBalancerBandwidth"; it will display different 
> values, not the same value as set






[jira] [Commented] (HDFS-14994) Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()

2019-11-19 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978083#comment-16978083
 ] 

Surendra Singh Lilhore commented on HDFS-14994:
---

{quote}i think that to add one more new block priority that is less than 
QUEUE_WITH_CORRUPT_BLOCKS.
{quote}
[~leosun08], you mean no one is going to add a priority less than 
QUEUE_WITH_CORRUPT_BLOCKS?

> Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()
> 
>
> Key: HDFS-14994
> URL: https://issues.apache.org/jira/browse/HDFS-14994
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HDFS-14994.001.patch, HDFS-14994.002.patch
>
>
> When priority = QUEUE_WITH_CORRUPT_BLOCKS, it means no block in the 
> needed-replication queues needs a replica. 
> In the current code, using continue results in one more pointless check 
> (priority == QUEUE_WITH_CORRUPT_BLOCKS).
> I think it should use break instead of continue.
> {code:java}
>  */
> synchronized List<List<BlockInfo>> chooseLowRedundancyBlocks(
> int blocksToProcess) {
>   final List<List<BlockInfo>> blocksToReconstruct = new ArrayList<>(LEVEL);
>   int count = 0;
>   int priority = 0;
>   for (; count < blocksToProcess && priority < LEVEL; priority++) {
> if (priority == QUEUE_WITH_CORRUPT_BLOCKS) {
>   // do not choose corrupted blocks.
>   continue;
> }
> ...
>
> }
> {code}






[jira] [Work logged] (HDDS-2467) Allow running Freon validators with limited memory

2019-11-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2467?focusedWorklogId=346469=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-346469
 ]

ASF GitHub Bot logged work on HDDS-2467:


Author: ASF GitHub Bot
Created on: 20/Nov/19 05:35
Start Date: 20/Nov/19 05:35
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #152: 
HDDS-2467. Allow running Freon validators with limited memory
URL: https://github.com/apache/hadoop-ozone/pull/152
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 346469)
Time Spent: 20m  (was: 10m)

> Allow running Freon validators with limited memory
> --
>
> Key: HDDS-2467
> URL: https://issues.apache.org/jira/browse/HDDS-2467
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: freon
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Freon validators read each item to be validated completely into a {{byte[]}} 
> buffer.  This allows timing only the read (and buffer allocation), but not 
> the subsequent digest calculation.  However, it also means that memory 
> required for running the validators is proportional to key size.
> I propose to add a command-line flag to allow calculating the digest while 
> reading the input stream.  This changes timing results a bit, since values 
> will include the time required for digest calculation.  On the other hand, 
> Freon will be able to validate huge keys with limited memory.
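The proposal is the standard streaming-digest pattern. A minimal, self-contained 
sketch using only the JDK (not the actual Freon code; the digest algorithm here 
is just an example):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.security.DigestInputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

final class StreamingDigestSketch {
  static byte[] digestWhileReading(InputStream in)
      throws IOException, NoSuchAlgorithmException {
    MessageDigest md = MessageDigest.getInstance("MD5");  // example algorithm
    byte[] chunk = new byte[8192];                         // small, fixed buffer
    try (DigestInputStream dis = new DigestInputStream(in, md)) {
      // Each read() updates the digest as a side effect, so memory stays
      // O(chunk size) no matter how large the key is.
      while (dis.read(chunk) != -1) {
        // nothing else to do; reading drives the digest
      }
    }
    return md.digest();
  }
}
{code}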






[jira] [Updated] (HDDS-2467) Allow running Freon validators with limited memory

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2467:

Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Allow running Freon validators with limited memory
> --
>
> Key: HDDS-2467
> URL: https://issues.apache.org/jira/browse/HDDS-2467
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: freon
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Freon validators read each item to be validated completely into a {{byte[]}} 
> buffer.  This allows timing only the read (and buffer allocation), but not 
> the subsequent digest calculation.  However, it also means that memory 
> required for running the validators is proportional to key size.
> I propose to add a command-line flag to allow calculating the digest while 
> reading the input stream.  This changes timing results a bit, since values 
> will include the time required for digest calculation.  On the other hand, 
> Freon will be able to validate huge keys with limited memory.






[jira] [Updated] (HDDS-2516) Code cleanup in EventQueue

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2516:

Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Code cleanup in EventQueue
> --
>
> Key: HDDS-2516
> URL: https://issues.apache.org/jira/browse/HDDS-2516
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available, sonar
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?fileUuids=AW5md-HgKcVY8lQ4ZrfB=hadoop-ozone=false






[jira] [Work logged] (HDDS-2516) Code cleanup in EventQueue

2019-11-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2516?focusedWorklogId=346468=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-346468
 ]

ASF GitHub Bot logged work on HDDS-2516:


Author: ASF GitHub Bot
Created on: 20/Nov/19 05:31
Start Date: 20/Nov/19 05:31
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #196: 
HDDS-2516. Code cleanup in EventQueue
URL: https://github.com/apache/hadoop-ozone/pull/196
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 346468)
Time Spent: 20m  (was: 10m)

> Code cleanup in EventQueue
> --
>
> Key: HDDS-2516
> URL: https://issues.apache.org/jira/browse/HDDS-2516
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Attila Doroszlai
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available, sonar
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> https://sonarcloud.io/project/issues?fileUuids=AW5md-HgKcVY8lQ4ZrfB=hadoop-ozone=false






[jira] [Commented] (HDFS-13522) Support observer node from Router-Based Federation

2019-11-19 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978082#comment-16978082
 ] 

Surendra Singh Lilhore commented on HDFS-13522:
---

{quote}anyone interested taking this ahead?
{quote}
Thanks [~ayushtkn] for the ping.
{quote}I started reading but got an initial doubt, regarding the need to split 
read and write routers. I think we can use only one kind of routers itself.
{quote}
I am also thinking of using the same router for observer calls instead of adding 
a new role for the router. It would increase the complexity of the cluster; HDFS 
is already overloaded with different process roles.

> Support observer node from Router-Based Federation
> --
>
> Key: HDFS-13522
> URL: https://issues.apache.org/jira/browse/HDFS-13522
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: federation, namenode
>Reporter: Erik Krogen
>Assignee: Chao Sun
>Priority: Major
> Attachments: HDFS-13522.001.patch, RBF_ Observer support.pdf, 
> Router+Observer RPC clogging.png, ShortTerm-Routers+Observer.png
>
>
> Changes will need to occur to the router to support the new observer node.
> One such change will be to make the router understand the observer state, 
> e.g. {{FederationNamenodeServiceState}}.






[jira] [Updated] (HDDS-2535) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2535:

Fix Version/s: 0.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> TestOzoneManagerDoubleBufferWithOMResponse is flaky
> ---
>
> Key: HDDS-2535
> URL: https://issues.apache.org/jira/browse/HDDS-2535
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Marton Elek
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Flakiness can be reproduced locally. Usually it passes, but when I started to 
> run it 100 times in parallel with high cpu load it failed on the 3rd attempt 
> (timed out).
> {code:java}
> ---
> Test set: 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse
> ---
> Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 503.297 s <<< 
> FAILURE! - in 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse
> testDoubleBuffer(org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse)
>   Time elapsed: 500.122 s  <<< ERROR!
> java.lang.Exception: test timed out after 50 milliseconds
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:382)
> at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:385)
> at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:129)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
>  {code}
> Independent of the flakiness, I think a test with an 8-minute timeout that 
> starts 1000 threads to insert 500 buckets each (500_000 buckets all together) 
> is more like an integration test, and it would be better to move the slowest 
> part to the integration-test project.






[jira] [Commented] (HDDS-2535) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-11-19 Thread Dinesh Chitlangia (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978078#comment-16978078
 ] 

Dinesh Chitlangia commented on HDDS-2535:
-

[~elek] Thanks for reporting the flaky test, [~bharat] Thanks for the 
contribution.

> TestOzoneManagerDoubleBufferWithOMResponse is flaky
> ---
>
> Key: HDDS-2535
> URL: https://issues.apache.org/jira/browse/HDDS-2535
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Marton Elek
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Flakiness can be reproduced locally. Usually it passes, but when I started to 
> run it 100 times in parallel with high cpu load it failed on the 3rd attempt 
> (timed out).
> {code:java}
> ---
> Test set: 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse
> ---
> Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 503.297 s <<< 
> FAILURE! - in 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse
> testDoubleBuffer(org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse)
>   Time elapsed: 500.122 s  <<< ERROR!
> java.lang.Exception: test timed out after 50 milliseconds
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:382)
> at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:385)
> at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:129)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
>  {code}
> Independent of the flakiness, I think a test with an 8-minute timeout that 
> starts 1000 threads to insert 500 buckets each (500_000 buckets all together) 
> is more like an integration test, and it would be better to move the slowest 
> part to the integration-test project.






[jira] [Work logged] (HDDS-2535) TestOzoneManagerDoubleBufferWithOMResponse is flaky

2019-11-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2535?focusedWorklogId=346467=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-346467
 ]

ASF GitHub Bot logged work on HDDS-2535:


Author: ASF GitHub Bot
Created on: 20/Nov/19 05:25
Start Date: 20/Nov/19 05:25
Worklog Time Spent: 10m 
  Work Description: dineshchitlangia commented on pull request #216: 
HDDS-2535. TestOzoneManagerDoubleBufferWithOMResponse is flaky.
URL: https://github.com/apache/hadoop-ozone/pull/216
 
 
   
 



Issue Time Tracking
---

Worklog Id: (was: 346467)
Time Spent: 20m  (was: 10m)

> TestOzoneManagerDoubleBufferWithOMResponse is flaky
> ---
>
> Key: HDDS-2535
> URL: https://issues.apache.org/jira/browse/HDDS-2535
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Reporter: Marton Elek
>Assignee: Bharat Viswanadham
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Flakiness can be reproduced locally. Usually it passes, but when I started to 
> run it 100 times in parallel with high cpu load it failed on the 3rd attempt 
> (timed out).
> {code:java}
> ---
> Test set: 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse
> ---
> Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 503.297 s <<< 
> FAILURE! - in 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse
> testDoubleBuffer(org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse)
>   Time elapsed: 500.122 s  <<< ERROR!
> java.lang.Exception: test timed out after 50 milliseconds
> at java.lang.Thread.sleep(Native Method)
> at 
> org.apache.hadoop.test.GenericTestUtils.waitFor(GenericTestUtils.java:382)
> at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:385)
> at 
> org.apache.hadoop.ozone.om.ratis.TestOzoneManagerDoubleBufferWithOMResponse.testDoubleBuffer(TestOzoneManagerDoubleBufferWithOMResponse.java:129)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at 
> org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
>  {code}
> Independent of the flakiness, I think a test with an 8-minute timeout that 
> starts 1000 threads to insert 500 buckets each (500_000 buckets all together) 
> is more like an integration test, and it would be better to move the slowest 
> part to the integration-test project.






[jira] [Commented] (HDFS-14940) HDFS Balancer : Do not allow to set balancer maximum network bandwidth more than 1TB

2019-11-19 Thread Surendra Singh Lilhore (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978071#comment-16978071
 ] 

Surendra Singh Lilhore commented on HDFS-14940:
---

Let's add it on the server side. [~hemanthboyina], please update the patch.

> HDFS Balancer : Do not allow to set balancer maximum network bandwidth more 
> than 1TB
> 
>
> Key: HDFS-14940
> URL: https://issues.apache.org/jira/browse/HDFS-14940
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: balancer & mover
>Affects Versions: 3.1.1
> Environment: 3 Node HA Setup
>Reporter: Souryakanta Dwivedy
>Priority: Minor
> Attachments: BalancerBW.PNG, HDFS-14940.001.patch, 
> HDFS-14940.002.patch, HDFS-14940.003.patch
>
>
> HDFS Balancer : getBalancerBandwidth displays wrong values for the maximum 
> network bandwidth used by the datanode
>  when the network bandwidth is set with values such as 1048576000g/1048p/1e
> Steps :-
>  * Set the balancer bandwidth with the setBalancerBandwidth command and values 
> such as [1048576000g/1048p/1e]
>  * Check the bandwidth used by the datanode during HDFS block balancing with 
> the command "hdfs dfsadmin -getBalancerBandwidth"; it will display different 
> values, not the same value as set






[jira] [Updated] (HDFS-14997) BPServiceActor process command from NameNode asynchronously

2019-11-19 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-14997:
---
Attachment: HDFS-14997.001.patch

> BPServiceActor process command from NameNode asynchronously
> ---
>
> Key: HDFS-14997
> URL: https://issues.apache.org/jira/browse/HDFS-14997
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-14997.001.patch
>
>
> There are two core functions, report (#sendHeartbeat, #blockReport, 
> #cacheReport) and #processCommand, in the #BPServiceActor main process flow. If 
> processCommand takes a long time it blocks the report flow. Meanwhile, 
> processCommand can take a long time (over 1000s in the worst case I have met) 
> when the IO load of the DataNode is very high. Since some IO operations are 
> under #datasetLock, it has to wait a long time to acquire #datasetLock when 
> processing some commands (such as #DNA_INVALIDATE). In such a case, the 
> #heartbeat will not be sent to the NameNode in time, and triggers other 
> disasters.
> I propose to process #processCommand asynchronously so that #BPServiceActor is 
> not blocked from sending heartbeats back to the NameNode under high IO load.
> Notes:
> 1. Lifeline could be one effective solution; however, some old branches do 
> not support this feature.
> 2. IO operations under #datasetLock are another issue; I think we should solve 
> that in another JIRA.
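A minimal sketch of the asynchronous hand-off being proposed: a single-threaded 
executor keeps commands in order while the heartbeat loop returns immediately. 
The class and method names are illustrative, not the actual BPServiceActor 
patch:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

final class AsyncCommandProcessorSketch {
  // One worker thread keeps command ordering while decoupling the heartbeat loop.
  private final ExecutorService commandExecutor =
      Executors.newSingleThreadExecutor();

  // Called from the heartbeat loop: enqueue and return immediately, so a slow
  // command (e.g. one stuck behind the dataset lock) cannot delay heartbeats.
  void processCommandAsync(Runnable command) {
    commandExecutor.execute(command);
  }

  void shutdown() {
    commandExecutor.shutdown();
  }
}
{code}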






[jira] [Updated] (HDFS-14997) BPServiceActor process command from NameNode asynchronously

2019-11-19 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-14997:
---
Attachment: (was: HDFS-14997.001.patch)

> BPServiceActor process command from NameNode asynchronously
> ---
>
> Key: HDFS-14997
> URL: https://issues.apache.org/jira/browse/HDFS-14997
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
>
> There are two core functions, report (#sendHeartbeat, #blockReport, 
> #cacheReport) and #processCommand, in the #BPServiceActor main process flow. If 
> processCommand takes a long time it blocks the report flow. Meanwhile, 
> processCommand can take a long time (over 1000s in the worst case I have met) 
> when the IO load of the DataNode is very high. Since some IO operations are 
> under #datasetLock, it has to wait a long time to acquire #datasetLock when 
> processing some commands (such as #DNA_INVALIDATE). In such a case, the 
> #heartbeat will not be sent to the NameNode in time, and triggers other 
> disasters.
> I propose to process #processCommand asynchronously so that #BPServiceActor is 
> not blocked from sending heartbeats back to the NameNode under high IO load.
> Notes:
> 1. Lifeline could be one effective solution; however, some old branches do 
> not support this feature.
> 2. IO operations under #datasetLock are another issue; I think we should solve 
> that in another JIRA.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14997) BPServiceActor process command from NameNode asynchronously

2019-11-19 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-14997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-14997:
---
Attachment: HDFS-14997.001.patch
  Assignee: Xiaoqiao He  (was: Aiphago)
Status: Patch Available  (was: Open)

Submitted a demo patch that changes command processing to be asynchronous.

> BPServiceActor process command from NameNode asynchronously
> ---
>
> Key: HDFS-14997
> URL: https://issues.apache.org/jira/browse/HDFS-14997
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: datanode
>Reporter: Xiaoqiao He
>Assignee: Xiaoqiao He
>Priority: Major
> Attachments: HDFS-14997.001.patch
>
>
> There are two core functions, report (#sendHeartbeat, #blockReport, 
> #cacheReport) and #processCommand, in the #BPServiceActor main processing flow. If 
> processCommand takes a long time it blocks the report flow, and 
> processCommand can take a long time (over 1000s in the worst case I have seen) when the IO 
> load of the DataNode is very high. Since some IO operations are performed under 
> #datasetLock, it has to wait a long time to acquire #datasetLock when 
> processing some commands (such as #DNA_INVALIDATE). In such cases the #heartbeat 
> will not be sent to the NameNode in time, which triggers other disasters.
> I propose to process #processCommand asynchronously so that #BPServiceActor is not 
> blocked from sending heartbeats back to the NameNode under high IO load.
> Notes:
> 1. Lifeline could be one effective solution; however, some old branches do 
> not support this feature.
> 2. IO operations under #datasetLock are another issue; I think we should solve 
> that in another JIRA.
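For illustration only, and not the attached HDFS-14997 patch: a minimal sketch of how command processing could be moved off the heartbeat path by queueing commands for a single background worker. The class and method names here (AsyncCommandProcessor, Command, enqueue) are assumptions made for the example, not the actual BPServiceActor implementation.

{code:java}
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/**
 * Hypothetical sketch: commands received in a heartbeat response are queued
 * and handled by a separate worker thread, so slow command processing
 * (e.g. deletions waiting on a dataset lock) cannot delay the next heartbeat.
 */
public class AsyncCommandProcessor implements Runnable {

  /** Placeholder for the real DatanodeCommand type. */
  public interface Command {
    void execute() throws Exception;
  }

  private final BlockingQueue<Command> queue = new LinkedBlockingQueue<>();
  private volatile boolean running = true;

  /** Called from the heartbeat loop; returns immediately. */
  public void enqueue(List<Command> commands) {
    queue.addAll(commands);
  }

  @Override
  public void run() {
    while (running) {
      try {
        Command cmd = queue.take();   // blocks until a command is available
        cmd.execute();                // slow work happens off the heartbeat path
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();  // restore interrupt status and exit
        break;
      } catch (Exception e) {
        // Log and continue; one bad command should not stop the worker.
        System.err.println("Command failed: " + e);
      }
    }
  }

  /** Stop the worker; the owning thread should also be interrupted. */
  public void shutdown() {
    running = false;
  }
}
{code}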



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14651) DeadNodeDetector checks dead node periodically

2019-11-19 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978063#comment-16978063
 ] 

Hadoop QA commented on HDFS-14651:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m  
8s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 48s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 
30 unchanged - 0 fixed = 31 total (was 30) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 27s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m  9s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}174m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDecommissionWithStriped |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.5 Server=19.03.5 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HDFS-14651 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12986291/HDFS-14651.005.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 508e2f055d61 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 
05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Created] (HDDS-2580) Sonar: Close resources in xxxKeyHandler

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2580:
---

 Summary: Sonar: Close resources in xxxKeyHandler
 Key: HDDS-2580
 URL: https://issues.apache.org/jira/browse/HDDS-2580
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
Reporter: Dinesh Chitlangia


Use try-with-resources or close this "FileOutputStream" in a "finally" clause.

GetKeyHandler: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6HHKTfdBVcJdcVFsvC=AW6HHKTfdBVcJdcVFsvC]

 

Use try-with-resources or close this "OzoneOutputStream" in a "finally" clause.

PutKeyHandler: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6HHKRodBVcJdcVFsvB=AW6HHKRodBVcJdcVFsvB]
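For reference, a generic example of the try-with-resources pattern the Sonar rule asks for; the file name and stream usage below are made up and are not the actual GetKeyHandler/PutKeyHandler code.

{code:java}
import java.io.FileOutputStream;
import java.io.IOException;

public class TryWithResourcesExample {
  public static void main(String[] args) throws IOException {
    // The stream is closed automatically, even if write() throws, which is
    // what the rule expects instead of a stream that may never be closed.
    try (FileOutputStream out = new FileOutputStream("example.bin")) {
      out.write(new byte[]{1, 2, 3});
    }
  }
}
{code}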

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-2504) Handle InterruptedException properly

2019-11-19 Thread Aravindan Vijayan (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16978061#comment-16978061
 ] 

Aravindan Vijayan commented on HDDS-2504:
-

Thank you [~adoroszlai] for filing this umbrella task. 

> Handle InterruptedException properly
> 
>
> Key: HDDS-2504
> URL: https://issues.apache.org/jira/browse/HDDS-2504
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Attila Doroszlai
>Priority: Major
>  Labels: newbie, sonar
>
> {quote}Either re-interrupt or rethrow the {{InterruptedException}}
> {quote}
> in several files (42 issues)
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG]
>  
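As a general illustration of the rule (squid:S2142), not of any specific Ozone class: either restore the interrupt flag or propagate the exception, so callers can still observe the interruption.

{code:java}
import java.util.concurrent.TimeUnit;

public class InterruptHandlingExample {

  // Option 1: re-interrupt, then stop the local work promptly.
  static void pollOnce() {
    try {
      TimeUnit.SECONDS.sleep(1);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // restore the flag for callers
      return;
    }
  }

  // Option 2: declare and rethrow, letting the caller decide.
  static void waitForWork() throws InterruptedException {
    TimeUnit.SECONDS.sleep(1);
  }
}
{code}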



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2573) Handle InterruptedException in KeyOutputStream

2019-11-19 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan reassigned HDDS-2573:
---

Assignee: Aravindan Vijayan

> Handle InterruptedException in KeyOutputStream
> --
>
> Key: HDDS-2573
> URL: https://issues.apache.org/jira/browse/HDDS-2573
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: newbie, sonar
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-m5KcVY8lQ4ZsAc=AW5md-m5KcVY8lQ4ZsAc



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2577) Handle InterruptedException in OzoneManagerProtocolServerSideTranslatorPB

2019-11-19 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan reassigned HDDS-2577:
---

Assignee: Aravindan Vijayan

> Handle InterruptedException in OzoneManagerProtocolServerSideTranslatorPB
> -
>
> Key: HDDS-2577
> URL: https://issues.apache.org/jira/browse/HDDS-2577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: newbie, sonar
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-Z7KcVY8lQ4Zr1l=AW5md-Z7KcVY8lQ4Zr1l



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2571) Handle InterruptedException in SCMPipelineManager

2019-11-19 Thread Aravindan Vijayan (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aravindan Vijayan reassigned HDDS-2571:
---

Assignee: Aravindan Vijayan

> Handle InterruptedException in SCMPipelineManager
> -
>
> Key: HDDS-2571
> URL: https://issues.apache.org/jira/browse/HDDS-2571
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Assignee: Aravindan Vijayan
>Priority: Major
>  Labels: newbie, sonar
>
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMuREm2E_7tGaNiTh=AW6BMuREm2E_7tGaNiTh]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2579) Ozone client should refresh pipeline info if reads from all Datanodes fail.

2019-11-19 Thread Aravindan Vijayan (Jira)
Aravindan Vijayan created HDDS-2579:
---

 Summary: Ozone client should refresh pipeline info if reads from 
all Datanodes fail.
 Key: HDDS-2579
 URL: https://issues.apache.org/jira/browse/HDDS-2579
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Client
Reporter: Aravindan Vijayan
Assignee: Aravindan Vijayan
 Fix For: 0.5.0


Currently, if reads from all Datanodes in the pipeline fail, the read fails 
altogether. There may be a case where the container has been moved to a new 
pipeline by the time the client reads. In this case, the client should request a 
pipeline refresh from OM and retry the read if the new pipeline returned from 
OM is different. 
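A rough sketch of the proposed retry behaviour, under assumed placeholder names (OmClient, refreshPipeline, readFromPipeline); these are not the actual Ozone client APIs.

{code:java}
/**
 * Hypothetical outline: if every Datanode in the cached pipeline fails,
 * ask OM for fresh pipeline info and retry once on the new pipeline.
 */
public class RefreshOnReadFailureSketch {

  interface Pipeline { }

  interface OmClient {
    Pipeline refreshPipeline(String containerId);
  }

  interface BlockReader {
    byte[] readFromPipeline(Pipeline pipeline) throws java.io.IOException;
  }

  static byte[] readWithRefresh(String containerId, Pipeline cached,
                                OmClient om, BlockReader reader)
      throws java.io.IOException {
    try {
      return reader.readFromPipeline(cached);
    } catch (java.io.IOException allNodesFailed) {
      Pipeline fresh = om.refreshPipeline(containerId);
      if (fresh != null && !fresh.equals(cached)) {
        return reader.readFromPipeline(fresh); // retry on the new pipeline
      }
      throw allNodesFailed; // same pipeline: surface the original failure
    }
  }
}
{code}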



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDDS-2554) Sonar: Null pointers should not be dereferenced

2019-11-19 Thread Shweta (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta reassigned HDDS-2554:


Assignee: Shweta

> Sonar: Null pointers should not be dereferenced
> ---
>
> Key: HDDS-2554
> URL: https://issues.apache.org/jira/browse/HDDS-2554
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>Reporter: Dinesh Chitlangia
>Assignee: Shweta
>Priority: Major
>  Labels: newbie
>
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMuP1m2E_7tGaNiTf=AW6BMuP1m2E_7tGaNiTf]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2578) Handle InterruptedException in Freon package

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2578:

Description: 
BaseFreonGenerator: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cgKcVY8lQ4Zr3D=AW5md-cgKcVY8lQ4Zr3D]

 

RandomKeyGenerator: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cqKcVY8lQ4Zr3f=AW5md-cqKcVY8lQ4Zr3f]

 

ProgressBar: 3 instances listed below

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3n=AW5md-c6KcVY8lQ4Zr3n]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3o=AW5md-c6KcVY8lQ4Zr3o]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3p=AW5md-c6KcVY8lQ4Zr3p]

 

  was:
BaseFreonGenerator: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cgKcVY8lQ4Zr3D=AW5md-cgKcVY8lQ4Zr3D]

 

ProgressBar: 3 instances listed below

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3n=AW5md-c6KcVY8lQ4Zr3n]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3o=AW5md-c6KcVY8lQ4Zr3o]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3p=AW5md-c6KcVY8lQ4Zr3p]

 


> Handle InterruptedException in Freon package
> 
>
> Key: HDDS-2578
> URL: https://issues.apache.org/jira/browse/HDDS-2578
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> BaseFreonGenerator: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cgKcVY8lQ4Zr3D=AW5md-cgKcVY8lQ4Zr3D]
>  
> RandomKeyGenerator: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cqKcVY8lQ4Zr3f=AW5md-cqKcVY8lQ4Zr3f]
>  
> ProgressBar: 3 instances listed below
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3n=AW5md-c6KcVY8lQ4Zr3n]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3o=AW5md-c6KcVY8lQ4Zr3o]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3p=AW5md-c6KcVY8lQ4Zr3p]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2504) Handle InterruptedException properly

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2504:

Description: 
{quote}Either re-interrupt or rethrow the {{InterruptedException}}
{quote}
in several files (42 issues)

[https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG]

 

  was:
{quote}Either re-interrupt or rethrow the {{InterruptedException}}
{quote}
in several files (42 issues)

[https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG]

Feel free to create sub-tasks if needed.


> Handle InterruptedException properly
> 
>
> Key: HDDS-2504
> URL: https://issues.apache.org/jira/browse/HDDS-2504
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Attila Doroszlai
>Priority: Major
>  Labels: newbie, sonar
>
> {quote}Either re-interrupt or rethrow the {{InterruptedException}}
> {quote}
> in several files (42 issues)
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2578) Handle InterruptedException in Freon package

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2578:

Description: 
BaseFreonGenerator: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cgKcVY8lQ4Zr3D=AW5md-cgKcVY8lQ4Zr3D]

 

ProgressBar: 3 instances listed below

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3n=AW5md-c6KcVY8lQ4Zr3n]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3o=AW5md-c6KcVY8lQ4Zr3o]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3p=AW5md-c6KcVY8lQ4Zr3p]

 

  
was:https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-Z7KcVY8lQ4Zr1l=AW5md-Z7KcVY8lQ4Zr1l


> Handle InterruptedException in Freon package
> 
>
> Key: HDDS-2578
> URL: https://issues.apache.org/jira/browse/HDDS-2578
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> BaseFreonGenerator: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cgKcVY8lQ4Zr3D=AW5md-cgKcVY8lQ4Zr3D]
>  
> ProgressBar: 3 instances listed below
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3n=AW5md-c6KcVY8lQ4Zr3n]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3o=AW5md-c6KcVY8lQ4Zr3o]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-c6KcVY8lQ4Zr3p=AW5md-c6KcVY8lQ4Zr3p]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2578) Handle InterruptedException in Freon package

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2578:
---

 Summary: Handle InterruptedException in Freon package
 Key: HDDS-2578
 URL: https://issues.apache.org/jira/browse/HDDS-2578
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-Z7KcVY8lQ4Zr1l=AW5md-Z7KcVY8lQ4Zr1l



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2577) Handle InterruptedException in OzoneManagerProtocolServerSideTranslatorPB

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2577:
---

 Summary: Handle InterruptedException in 
OzoneManagerProtocolServerSideTranslatorPB
 Key: HDDS-2577
 URL: https://issues.apache.org/jira/browse/HDDS-2577
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


OzoneManagerDoubleBuffer: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VxKcVY8lQ4Zrtu=AW5md-VxKcVY8lQ4Zrtu]

OzoneManagerRatisClient: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VsKcVY8lQ4Zrtf=AW5md-VsKcVY8lQ4Zrtf]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2577) Handle InterruptedException in OzoneManagerProtocolServerSideTranslatorPB

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2577:

Description: 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-Z7KcVY8lQ4Zr1l=AW5md-Z7KcVY8lQ4Zr1l
  (was: OzoneManagerDoubleBuffer: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VxKcVY8lQ4Zrtu=AW5md-VxKcVY8lQ4Zrtu]

OzoneManagerRatisClient: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VsKcVY8lQ4Zrtf=AW5md-VsKcVY8lQ4Zrtf]

 )

> Handle InterruptedException in OzoneManagerProtocolServerSideTranslatorPB
> -
>
> Key: HDDS-2577
> URL: https://issues.apache.org/jira/browse/HDDS-2577
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-Z7KcVY8lQ4Zr1l=AW5md-Z7KcVY8lQ4Zr1l



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2576) Handle InterruptedException in ratis related files

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2576:

Description: 
OzoneManagerDoubleBuffer: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VxKcVY8lQ4Zrtu=AW5md-VxKcVY8lQ4Zrtu]

OzoneManagerRatisClient: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VsKcVY8lQ4Zrtf=AW5md-VsKcVY8lQ4Zrtf]

 

  
was:https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-mpKcVY8lQ4ZsAH=AW5md-mpKcVY8lQ4ZsAH


> Handle InterruptedException in ratis related files
> --
>
> Key: HDDS-2576
> URL: https://issues.apache.org/jira/browse/HDDS-2576
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> OzoneManagerDoubleBuffer: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VxKcVY8lQ4Zrtu=AW5md-VxKcVY8lQ4Zrtu]
> OzoneManagerRatisClient: 
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-VsKcVY8lQ4Zrtf=AW5md-VsKcVY8lQ4Zrtf]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2576) Handle InterruptedException in ratis related files

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2576:
---

 Summary: Handle InterruptedException in ratis related files
 Key: HDDS-2576
 URL: https://issues.apache.org/jira/browse/HDDS-2576
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-mpKcVY8lQ4ZsAH=AW5md-mpKcVY8lQ4ZsAH



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2575) Handle InterruptedException in LogSubcommand

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2575?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2575:

Description: 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-mpKcVY8lQ4ZsAH=AW5md-mpKcVY8lQ4ZsAH
  (was: Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr64=AW5md-gpKcVY8lQ4Zr64]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr67=AW5md-gpKcVY8lQ4Zr67]

 )

> Handle InterruptedException in LogSubcommand
> 
>
> Key: HDDS-2575
> URL: https://issues.apache.org/jira/browse/HDDS-2575
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-mpKcVY8lQ4ZsAH=AW5md-mpKcVY8lQ4ZsAH



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2575) Handle InterruptedException in LogSubcommand

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2575:
---

 Summary: Handle InterruptedException in LogSubcommand
 Key: HDDS-2575
 URL: https://issues.apache.org/jira/browse/HDDS-2575
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr64=AW5md-gpKcVY8lQ4Zr64]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr67=AW5md-gpKcVY8lQ4Zr67]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2574) Handle InterruptedException in OzoneDelegationTokenSecretManager

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2574:

Description: 
Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr64=AW5md-gpKcVY8lQ4Zr64]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr67=AW5md-gpKcVY8lQ4Zr67]

 

  
was:https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-m5KcVY8lQ4ZsAc=AW5md-m5KcVY8lQ4ZsAc


> Handle InterruptedException in OzoneDelegationTokenSecretManager
> 
>
> Key: HDDS-2574
> URL: https://issues.apache.org/jira/browse/HDDS-2574
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> Fix 2 instances:
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr64=AW5md-gpKcVY8lQ4Zr64]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-gpKcVY8lQ4Zr67=AW5md-gpKcVY8lQ4Zr67]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2574) Handle InterruptedException in OzoneDelegationTokenSecretManager

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2574:
---

 Summary: Handle InterruptedException in 
OzoneDelegationTokenSecretManager
 Key: HDDS-2574
 URL: https://issues.apache.org/jira/browse/HDDS-2574
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-m5KcVY8lQ4ZsAc=AW5md-m5KcVY8lQ4ZsAc



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2573) Handle InterruptedException in KeyOutputStream

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2573:

Description: 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-m5KcVY8lQ4ZsAc=AW5md-m5KcVY8lQ4ZsAc
  (was: Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEg=AW5md-tDKcVY8lQ4ZsEg]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEi=AW5md-tDKcVY8lQ4ZsEi]

 

 )

> Handle InterruptedException in KeyOutputStream
> --
>
> Key: HDDS-2573
> URL: https://issues.apache.org/jira/browse/HDDS-2573
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-m5KcVY8lQ4ZsAc=AW5md-m5KcVY8lQ4ZsAc



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2573) Handle InterruptedException in KeyOutputStream

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2573:
---

 Summary: Handle InterruptedException in KeyOutputStream
 Key: HDDS-2573
 URL: https://issues.apache.org/jira/browse/HDDS-2573
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEg=AW5md-tDKcVY8lQ4ZsEg]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEi=AW5md-tDKcVY8lQ4ZsEi]

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2572) Handle InterruptedException in SCMSecurityProtocolServer

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2572:

Description: 
Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEg=AW5md-tDKcVY8lQ4ZsEg]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEi=AW5md-tDKcVY8lQ4ZsEi]

 

 

  was:
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMuREm2E_7tGaNiTh=AW6BMuREm2E_7tGaNiTh]

 


> Handle InterruptedException in SCMSecurityProtocolServer
> 
>
> Key: HDDS-2572
> URL: https://issues.apache.org/jira/browse/HDDS-2572
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> Fix 2 instances:
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEg=AW5md-tDKcVY8lQ4ZsEg]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-tDKcVY8lQ4ZsEi=AW5md-tDKcVY8lQ4ZsEi]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2572) Handle InterruptedException in SCMSecurityProtocolServer

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2572:
---

 Summary: Handle InterruptedException in SCMSecurityProtocolServer
 Key: HDDS-2572
 URL: https://issues.apache.org/jira/browse/HDDS-2572
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMuREm2E_7tGaNiTh=AW6BMuREm2E_7tGaNiTh]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2571) Handle InterruptedException in SCMPipelineManager

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2571:
---

 Summary: Handle InterruptedException in SCMPipelineManager
 Key: HDDS-2571
 URL: https://issues.apache.org/jira/browse/HDDS-2571
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-x8KcVY8lQ4ZsIJ=AW5md-x8KcVY8lQ4ZsIJ



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2571) Handle InterruptedException in SCMPipelineManager

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2571:

Description: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMuREm2E_7tGaNiTh=AW6BMuREm2E_7tGaNiTh]

 

  
was:https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-x8KcVY8lQ4ZsIJ=AW5md-x8KcVY8lQ4ZsIJ


> Handle InterruptedException in SCMPipelineManager
> -
>
> Key: HDDS-2571
> URL: https://issues.apache.org/jira/browse/HDDS-2571
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMuREm2E_7tGaNiTh=AW6BMuREm2E_7tGaNiTh]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2569) Handle InterruptedException in LogStreamServlet

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2569:

Description: 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-yJKcVY8lQ4ZsIf=AW5md-yJKcVY8lQ4ZsIf
  (was: 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9sKcVY8lQ4ZsUh=AW5md-9sKcVY8lQ4ZsUh

 )

> Handle InterruptedException in LogStreamServlet
> ---
>
> Key: HDDS-2569
> URL: https://issues.apache.org/jira/browse/HDDS-2569
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-yJKcVY8lQ4ZsIf=AW5md-yJKcVY8lQ4ZsIf



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2570) Handle InterruptedException in ProfileServlet

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2570:
---

 Summary: Handle InterruptedException in ProfileServlet
 Key: HDDS-2570
 URL: https://issues.apache.org/jira/browse/HDDS-2570
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-yJKcVY8lQ4ZsIf=AW5md-yJKcVY8lQ4ZsIf



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2570) Handle InterruptedException in ProfileServlet

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2570:

Description: 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-x8KcVY8lQ4ZsIJ=AW5md-x8KcVY8lQ4ZsIJ
  (was: 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-yJKcVY8lQ4ZsIf=AW5md-yJKcVY8lQ4ZsIf)

> Handle InterruptedException in ProfileServlet
> -
>
> Key: HDDS-2570
> URL: https://issues.apache.org/jira/browse/HDDS-2570
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-x8KcVY8lQ4ZsIJ=AW5md-x8KcVY8lQ4ZsIJ



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2568) Handle InterruptedException in OzoneContainer

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2568:
---

 Summary: Handle InterruptedException in OzoneContainer
 Key: HDDS-2568
 URL: https://issues.apache.org/jira/browse/HDDS-2568
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


Fix 2 instances:

https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUj=AW5md-9vKcVY8lQ4ZsUj

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUk=AW5md-9vKcVY8lQ4ZsUk]

 

 

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2569) Handle InterruptedException in LogStreamServlet

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2569:
---

 Summary: Handle InterruptedException in LogStreamServlet
 Key: HDDS-2569
 URL: https://issues.apache.org/jira/browse/HDDS-2569
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9sKcVY8lQ4ZsUh=AW5md-9sKcVY8lQ4ZsUh

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2568) Handle InterruptedException in OzoneContainer

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2568:

Description: 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9sKcVY8lQ4ZsUh=AW5md-9sKcVY8lQ4ZsUh

 

  was:
Fix 2 instances:

https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUj=AW5md-9vKcVY8lQ4ZsUj

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUk=AW5md-9vKcVY8lQ4ZsUk]

 

 

 

 


> Handle InterruptedException in OzoneContainer
> -
>
> Key: HDDS-2568
> URL: https://issues.apache.org/jira/browse/HDDS-2568
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9sKcVY8lQ4ZsUh=AW5md-9sKcVY8lQ4ZsUh
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2567) Handle InterruptedException in ContainerMetadataScanner

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2567:

Description: 
Fix 2 instances:

https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUj=AW5md-9vKcVY8lQ4ZsUj

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUk=AW5md-9vKcVY8lQ4ZsUk]

 

 

 

 

  was:
Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUb=AW5md-9kKcVY8lQ4ZsUb]

 

 

 


> Handle InterruptedException in ContainerMetadataScanner
> ---
>
> Key: HDDS-2567
> URL: https://issues.apache.org/jira/browse/HDDS-2567
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> Fix 2 instances:
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUj=AW5md-9vKcVY8lQ4ZsUj
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9vKcVY8lQ4ZsUk=AW5md-9vKcVY8lQ4ZsUk]
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2566) Handle InterruptedException in ContainerDataScanner

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2566:

Description: 
Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ]

 

 

  
was:https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7yKcVY8lQ4ZsR9=AW5md-7yKcVY8lQ4ZsR9


> Handle InterruptedException in ContainerDataScanner
> ---
>
> Key: HDDS-2566
> URL: https://issues.apache.org/jira/browse/HDDS-2566
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> Fix 2 instances:
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2567) Handle InterruptedException in ContainerMetadataScanner

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2567:
---

 Summary: Handle InterruptedException in ContainerMetadataScanner
 Key: HDDS-2567
 URL: https://issues.apache.org/jira/browse/HDDS-2567
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUb=AW5md-9kKcVY8lQ4ZsUb]

 

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2566) Handle InterruptedException in ContainerDataScanner

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2566:

Description: 
Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUb=AW5md-9kKcVY8lQ4ZsUb]

 

 

 

  was:
Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ]

 

 


> Handle InterruptedException in ContainerDataScanner
> ---
>
> Key: HDDS-2566
> URL: https://issues.apache.org/jira/browse/HDDS-2566
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> Fix 2 instances:
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUZ=AW5md-9kKcVY8lQ4ZsUZ]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-9kKcVY8lQ4ZsUb=AW5md-9kKcVY8lQ4ZsUb]
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2566) Handle InterruptedException in ContainerDataScanner

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2566:
---

 Summary: Handle InterruptedException in ContainerDataScanner
 Key: HDDS-2566
 URL: https://issues.apache.org/jira/browse/HDDS-2566
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7yKcVY8lQ4ZsR9=AW5md-7yKcVY8lQ4ZsR9



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2564) Handle InterruptedException in ContainerStateMachine

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2564:

Description: 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-65KcVY8lQ4ZsRV=AW5md-65KcVY8lQ4ZsRV

 

 

  was:
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-6pKcVY8lQ4ZsRC=AW5md-6pKcVY8lQ4ZsRC]

 


> Handle InterruptedException in ContainerStateMachine
> 
>
> Key: HDDS-2564
> URL: https://issues.apache.org/jira/browse/HDDS-2564
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-65KcVY8lQ4ZsRV=AW5md-65KcVY8lQ4ZsRV
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2565) Handle InterruptedException in VolumeSet

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2565:
---

 Summary: Handle InterruptedException in VolumeSet
 Key: HDDS-2565
 URL: https://issues.apache.org/jira/browse/HDDS-2565
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-6pKcVY8lQ4ZsRC=AW5md-6pKcVY8lQ4ZsRC]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2565) Handle InterruptedException in VolumeSet

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2565:

Description: 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7yKcVY8lQ4ZsR9=AW5md-7yKcVY8lQ4ZsR9
  (was: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-6pKcVY8lQ4ZsRC=AW5md-6pKcVY8lQ4ZsRC]

 )

> Handle InterruptedException in VolumeSet
> 
>
> Key: HDDS-2565
> URL: https://issues.apache.org/jira/browse/HDDS-2565
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7yKcVY8lQ4ZsR9=AW5md-7yKcVY8lQ4ZsR9



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2564) Handle InterruptedException in ContainerStateMachine

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2564:
---

 Summary: Handle InterruptedException in ContainerStateMachine
 Key: HDDS-2564
 URL: https://issues.apache.org/jira/browse/HDDS-2564
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-6pKcVY8lQ4ZsRC=AW5md-6pKcVY8lQ4ZsRC]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2563) Handle InterruptedException in RunningDatanodeState

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2563:

Description: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-6pKcVY8lQ4ZsRC=AW5md-6pKcVY8lQ4ZsRC]

 

  was:
Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRv=AW5md-7fKcVY8lQ4ZsRv]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRx=AW5md-7fKcVY8lQ4ZsRx]

 


> Handle InterruptedException in RunningDatanodeState
> ---
>
> Key: HDDS-2563
> URL: https://issues.apache.org/jira/browse/HDDS-2563
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-6pKcVY8lQ4ZsRC=AW5md-6pKcVY8lQ4ZsRC]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2563) Handle InterruptedException in RunningDatanodeState

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2563:
---

 Summary: Handle InterruptedException in RunningDatanodeState
 Key: HDDS-2563
 URL: https://issues.apache.org/jira/browse/HDDS-2563
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRv=AW5md-7fKcVY8lQ4ZsRv]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRx=AW5md-7fKcVY8lQ4ZsRx]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14997) BPServiceActor process command from NameNode asynchronously

2019-11-19 Thread Xiaoqiao He (Jira)
Xiaoqiao He created HDFS-14997:
--

 Summary: BPServiceActor process command from NameNode 
asynchronously
 Key: HDFS-14997
 URL: https://issues.apache.org/jira/browse/HDFS-14997
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: datanode
Reporter: Xiaoqiao He
Assignee: Aiphago


There are two core functions, report (#sendHeartbeat, #blockReport, 
#cacheReport) and #processCommand, in the #BPServiceActor main processing flow. If 
processCommand takes a long time it blocks the report flow, and 
processCommand can take a long time (over 1000s in the worst case I have seen) when the IO 
load of the DataNode is very high. Since some IO operations are performed under 
#datasetLock, it has to wait a long time to acquire #datasetLock when 
processing some commands (such as #DNA_INVALIDATE). In such cases the #heartbeat 
will not be sent to the NameNode in time, which triggers other disasters.
I propose to process #processCommand asynchronously so that #BPServiceActor is not 
blocked from sending heartbeats back to the NameNode under high IO load.
Notes:
1. Lifeline could be one effective solution; however, some old branches do 
not support this feature.
2. IO operations under #datasetLock are another issue; I think we should solve 
that in another JIRA.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2562) Handle InterruptedException in DatanodeStateMachine

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2562:

Description: 
Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRv=AW5md-7fKcVY8lQ4ZsRv]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRx=AW5md-7fKcVY8lQ4ZsRx]

 

  
was:https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-zSKcVY8lQ4ZsJj=AW5md-zSKcVY8lQ4ZsJj


> Handle InterruptedException in DatanodeStateMachine
> ---
>
> Key: HDDS-2562
> URL: https://issues.apache.org/jira/browse/HDDS-2562
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> Fix 2 instances:
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRv=AW5md-7fKcVY8lQ4ZsRv]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-7fKcVY8lQ4ZsRx=AW5md-7fKcVY8lQ4ZsRx]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2562) Handle InterruptedException in DatanodeStateMachine

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2562:
---

 Summary: Handle InterruptedException in DatanodeStateMachine
 Key: HDDS-2562
 URL: https://issues.apache.org/jira/browse/HDDS-2562
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-zSKcVY8lQ4ZsJj=AW5md-zSKcVY8lQ4ZsJj



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2561) Handle InterruptedException in LeaseManager

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2561:

Description: 
https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-zSKcVY8lQ4ZsJj=AW5md-zSKcVY8lQ4ZsJj
  (was: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-0nKcVY8lQ4ZsLH=AW5md-0nKcVY8lQ4ZsLH]

 )

> Handle InterruptedException in LeaseManager
> ---
>
> Key: HDDS-2561
> URL: https://issues.apache.org/jira/browse/HDDS-2561
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-zSKcVY8lQ4ZsJj=AW5md-zSKcVY8lQ4ZsJj



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2561) Handle InterruptedException in LeaseManager

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2561:
---

 Summary: Handle InterruptedException in LeaseManager
 Key: HDDS-2561
 URL: https://issues.apache.org/jira/browse/HDDS-2561
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-0nKcVY8lQ4ZsLH=AW5md-0nKcVY8lQ4ZsLH]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2560) Handle InterruptedException in Scheduler

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2560:
---

 Summary: Handle InterruptedException in Scheduler
 Key: HDDS-2560
 URL: https://issues.apache.org/jira/browse/HDDS-2560
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLU=AW5md-02KcVY8lQ4ZsLU]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLV=AW5md-02KcVY8lQ4ZsLV]

 

 

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2560) Handle InterruptedException in Scheduler

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2560:

Description: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-0nKcVY8lQ4ZsLH=AW5md-0nKcVY8lQ4ZsLH]

 

  was:
Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLU=AW5md-02KcVY8lQ4ZsLU]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLV=AW5md-02KcVY8lQ4ZsLV]


> Handle InterruptedException in Scheduler
> 
>
> Key: HDDS-2560
> URL: https://issues.apache.org/jira/browse/HDDS-2560
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-0nKcVY8lQ4ZsLH=AW5md-0nKcVY8lQ4ZsLH]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2559) Handle InterruptedException in BackgroundService

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2559:
---

 Summary: Handle InterruptedException in BackgroundService
 Key: HDDS-2559
 URL: https://issues.apache.org/jira/browse/HDDS-2559
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


Fix 2 instances:

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNW=AW5md-2aKcVY8lQ4ZsNW]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNX=AW5md-2aKcVY8lQ4ZsNX]

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2559) Handle InterruptedException in BackgroundService

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2559?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2559:

Description: 
Fix 2 instances:

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLU=AW5md-02KcVY8lQ4ZsLU]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLV=AW5md-02KcVY8lQ4ZsLV]

  was:
Fix 2 instances:

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNW=AW5md-2aKcVY8lQ4ZsNW]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNX=AW5md-2aKcVY8lQ4ZsNX]

 

 


> Handle InterruptedException in BackgroundService
> 
>
> Key: HDDS-2559
> URL: https://issues.apache.org/jira/browse/HDDS-2559
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> Fix 2 instances:
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLU=AW5md-02KcVY8lQ4ZsLU]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-02KcVY8lQ4ZsLV=AW5md-02KcVY8lQ4ZsLV]
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2558) Handle InterruptedException in XceiverClientSpi

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2558:

Description: 
Fix 2 instances:

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNW=AW5md-2aKcVY8lQ4ZsNW]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNX=AW5md-2aKcVY8lQ4ZsNX]

 

 

  was:
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_8KcVY8lQ4ZsVw=AW5md-_8KcVY8lQ4ZsVw]

 


> Handle InterruptedException in XceiverClientSpi
> ---
>
> Key: HDDS-2558
> URL: https://issues.apache.org/jira/browse/HDDS-2558
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> Fix 2 instances:
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNW=AW5md-2aKcVY8lQ4ZsNW]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-2aKcVY8lQ4ZsNX=AW5md-2aKcVY8lQ4ZsNX]
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2558) Handle InterruptedException in XceiverClientSpi

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2558:
---

 Summary: Handle InterruptedException in XceiverClientSpi
 Key: HDDS-2558
 URL: https://issues.apache.org/jira/browse/HDDS-2558
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_8KcVY8lQ4ZsVw=AW5md-_8KcVY8lQ4ZsVw]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2557) Handle InterruptedException in CommitWatcher

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2557:

Description: 
[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_8KcVY8lQ4ZsVw=AW5md-_8KcVY8lQ4ZsVw]

 

  was:
Fix these 5 instances

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVe=AW5md-_2KcVY8lQ4ZsVe]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVf=AW5md-_2KcVY8lQ4ZsVf]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVh=AW5md-_2KcVY8lQ4ZsVh|https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVi=AW5md-_2KcVY8lQ4ZsVi]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVl=AW5md-_2KcVY8lQ4ZsVl]

 


> Handle InterruptedException in CommitWatcher
> 
>
> Key: HDDS-2557
> URL: https://issues.apache.org/jira/browse/HDDS-2557
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_8KcVY8lQ4ZsVw=AW5md-_8KcVY8lQ4ZsVw]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2557) Handle InterruptedException in CommitWatcher

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2557:
---

 Summary: Handle InterruptedException in CommitWatcher
 Key: HDDS-2557
 URL: https://issues.apache.org/jira/browse/HDDS-2557
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


Fix these 5 instances

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVe=AW5md-_2KcVY8lQ4ZsVe]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVf=AW5md-_2KcVY8lQ4ZsVf]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVh=AW5md-_2KcVY8lQ4ZsVh|https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVi=AW5md-_2KcVY8lQ4ZsVi]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVl=AW5md-_2KcVY8lQ4ZsVl]

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2556) Handle InterruptedException in BlockOutputStream

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2556:

Description: 
Fix these 5 instances

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVe=AW5md-_2KcVY8lQ4ZsVe]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVf=AW5md-_2KcVY8lQ4ZsVf]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVh=AW5md-_2KcVY8lQ4ZsVh|https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVi=AW5md-_2KcVY8lQ4ZsVi]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVl=AW5md-_2KcVY8lQ4ZsVl]

 

  was:
Fix these 3 instances

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV5=AW5md_AGKcVY8lQ4ZsV5]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV6=AW5md_AGKcVY8lQ4ZsV6]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9]


> Handle InterruptedException in BlockOutputStream
> 
>
> Key: HDDS-2556
> URL: https://issues.apache.org/jira/browse/HDDS-2556
> Project: Hadoop Distributed Data Store
>  Issue Type: Sub-task
>Reporter: Dinesh Chitlangia
>Priority: Major
>  Labels: newbie, sonar
>
> Fix these 5 instances
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVe=AW5md-_2KcVY8lQ4ZsVe]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVf=AW5md-_2KcVY8lQ4ZsVf]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVh=AW5md-_2KcVY8lQ4ZsVh|https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVi=AW5md-_2KcVY8lQ4ZsVi]
>  
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVl=AW5md-_2KcVY8lQ4ZsVl]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2556) Handle InterruptedException in BlockOutputStream

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2556:
---

 Summary: Handle InterruptedException in BlockOutputStream
 Key: HDDS-2556
 URL: https://issues.apache.org/jira/browse/HDDS-2556
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


Fix these 3 instances

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV5=AW5md_AGKcVY8lQ4ZsV5]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV6=AW5md_AGKcVY8lQ4ZsV6]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2555) Handle InterruptedException in XceiverClientGrpc

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2555:
---

 Summary: Handle InterruptedException in XceiverClientGrpc
 Key: HDDS-2555
 URL: https://issues.apache.org/jira/browse/HDDS-2555
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: Dinesh Chitlangia


Fix these 3 instances

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV5=AW5md_AGKcVY8lQ4ZsV5]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV6=AW5md_AGKcVY8lQ4ZsV6]

 

[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md_AGKcVY8lQ4ZsV9=AW5md_AGKcVY8lQ4ZsV9]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2504) Handle InterruptedException properly

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2504:

Labels: newbie sonar  (was: sonar)

> Handle InterruptedException properly
> 
>
> Key: HDDS-2504
> URL: https://issues.apache.org/jira/browse/HDDS-2504
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Attila Doroszlai
>Priority: Major
>  Labels: newbie, sonar
>
> {quote}Either re-interrupt or rethrow the {{InterruptedException}}
> {quote}
> in several files (42 issues)
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG]
> Feel free to create sub-tasks if needed.
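For anyone picking up these sub-tasks, here is a minimal sketch of the two options the quoted rule (squid:S2142) accepts, either restore the interrupt flag or propagate the exception. The RetryUtil class and its method names below are illustrative only, not code from the Ozone tree.

{code:java}
// Hypothetical sketch of the pattern Sonar rule squid:S2142 expects:
// either re-interrupt the current thread or rethrow InterruptedException.
public final class RetryUtil {

  private RetryUtil() {
  }

  // Option 1: re-interrupt so callers can still observe the interruption.
  public static void sleepQuietly(long millis) {
    try {
      Thread.sleep(millis);
    } catch (InterruptedException e) {
      // Restore the interrupt status instead of swallowing the exception.
      Thread.currentThread().interrupt();
    }
  }

  // Option 2: rethrow and let the caller decide how to handle it.
  public static void sleepOrPropagate(long millis) throws InterruptedException {
    Thread.sleep(millis);
  }
}
{code}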



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2504) Handle InterruptedException properly

2019-11-19 Thread Dinesh Chitlangia (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dinesh Chitlangia updated HDDS-2504:

Description: 
{quote}Either re-interrupt or rethrow the {{InterruptedException}}
{quote}
in several files (42 issues)

[https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG]

Feel free to create sub-tasks if needed.

  was:
bq. Either re-interrupt or rethrow the {{InterruptedException}}

in several files (39 issues)

https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG

Feel free to create sub-tasks if needed.


> Handle InterruptedException properly
> 
>
> Key: HDDS-2504
> URL: https://issues.apache.org/jira/browse/HDDS-2504
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>Reporter: Attila Doroszlai
>Priority: Major
>  Labels: sonar
>
> {quote}Either re-interrupt or rethrow the {{InterruptedException}}
> {quote}
> in several files (42 issues)
> [https://sonarcloud.io/project/issues?id=hadoop-ozone=false=squid%3AS2142=OPEN=BUG]
> Feel free to create sub-tasks if needed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2554) Sonar: Null pointers should not be dereferenced

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2554:
---

 Summary: Sonar: Null pointers should not be dereferenced
 Key: HDDS-2554
 URL: https://issues.apache.org/jira/browse/HDDS-2554
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Dinesh Chitlangia


[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMuP1m2E_7tGaNiTf=AW6BMuP1m2E_7tGaNiTf]
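For context, a minimal illustration of the pattern this rule asks for: check a possibly-null value before dereferencing it. ContainerLookup and its members are hypothetical, not the actual code the Sonar link points at.

{code:java}
import java.util.Map;

// Hypothetical sketch: guard a possibly-null lookup result before use.
public class ContainerLookup {

  private final Map<Long, String> containerOwners;

  public ContainerLookup(Map<Long, String> containerOwners) {
    this.containerOwners = containerOwners;
  }

  public String describeOwner(long containerId) {
    String owner = containerOwners.get(containerId); // may be null
    if (owner == null) {
      // Guard the dereference instead of calling owner.trim() unconditionally.
      return "unknown";
    }
    return owner.trim();
  }
}
{code}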



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2553) Sonar: Iterator.next() methods should throw NoSuchElementException

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2553:
---

 Summary: Sonar: Iterator.next() methods should throw 
NoSuchElementException
 Key: HDDS-2553
 URL: https://issues.apache.org/jira/browse/HDDS-2553
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Dinesh Chitlangia


[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW6BMujFm2E_7tGaNiTl=AW6BMujFm2E_7tGaNiTl]
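As a reference for whoever picks this up, a small sketch of the Iterator contract the rule enforces: next() must throw NoSuchElementException once the iteration is exhausted. RangeIterator below is illustrative only, not one of the flagged Ozone classes.

{code:java}
import java.util.Iterator;
import java.util.NoSuchElementException;

// Hypothetical iterator: next() throws NoSuchElementException when exhausted.
public class RangeIterator implements Iterator<Integer> {

  private final int end;
  private int current;

  public RangeIterator(int start, int end) {
    this.current = start;
    this.end = end;
  }

  @Override
  public boolean hasNext() {
    return current < end;
  }

  @Override
  public Integer next() {
    if (!hasNext()) {
      throw new NoSuchElementException("Iterator exhausted at " + current);
    }
    return current++;
  }
}
{code}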



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDDS-2531) Sonar : remove duplicate string literals in BlockOutputStream

2019-11-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2531?focusedWorklogId=346443=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-346443
 ]

ASF GitHub Bot logged work on HDDS-2531:


Author: ASF GitHub Bot
Created on: 20/Nov/19 03:36
Start Date: 20/Nov/19 03:36
Worklog Time Spent: 10m 
  Work Description: shwetayakkali commented on pull request #234: 
HDDS-2531. Sonar : remove duplicate string literals in BlockOutputStream
URL: https://github.com/apache/hadoop-ozone/pull/234
 
 
   ## What changes were proposed in this pull request?
   Refactored code to fix issues flagged by Sonar.
   
   ## What is the link to the Apache JIRA
   https://issues.apache.org/jira/browse/HDDS-2531
   ## How was this patch tested?
   Syntax changes and logging format changes. 
 

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 346443)
Remaining Estimate: 0h
Time Spent: 10m

> Sonar : remove duplicate string literals in BlockOutputStream
> -
>
> Key: HDDS-2531
> URL: https://issues.apache.org/jira/browse/HDDS-2531
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Supratim Deka
>Assignee: Shweta
>Priority: Minor
>  Labels: pull-request-available, sonar
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Sonar issue in executePutBlock, duplicate string literal "blockID":
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_1KcVY8lQ4ZsVa=AW5md-_1KcVY8lQ4ZsVa
> Use format specifiers in the log statement:
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVg=AW5md-_2KcVY8lQ4ZsVg
> Define a string constant instead of duplicating string literals:
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVb=AW5md-_2KcVY8lQ4ZsVb
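For reference, a minimal sketch of the two fixes described above: hoist the repeated "blockID" literal into a constant and use SLF4J format specifiers instead of string concatenation in the log call. PutBlockLogger and its method are illustrative, not the actual BlockOutputStream code.

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch of the constant + parameterized-logging fix.
public class PutBlockLogger {

  private static final Logger LOG =
      LoggerFactory.getLogger(PutBlockLogger.class);

  // Single constant replaces the duplicated "blockID" literal.
  private static final String BLOCK_ID = "blockID";

  public void logPutBlock(long blockId, long length) {
    // Parameterized logging; no eager string concatenation.
    LOG.debug("executePutBlock for {} {} with length {}",
        BLOCK_ID, blockId, length);
  }
}
{code}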



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDDS-2531) Sonar : remove duplicate string literals in BlockOutputStream

2019-11-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDDS-2531:
-
Labels: pull-request-available sonar  (was: sonar)

> Sonar : remove duplicate string literals in BlockOutputStream
> -
>
> Key: HDDS-2531
> URL: https://issues.apache.org/jira/browse/HDDS-2531
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Client
>Reporter: Supratim Deka
>Assignee: Shweta
>Priority: Minor
>  Labels: pull-request-available, sonar
>
> Sonar issue in executePutBlock, duplicate string literal "blockID":
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_1KcVY8lQ4ZsVa=AW5md-_1KcVY8lQ4ZsVa
> Use format specifiers in the log statement:
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVg=AW5md-_2KcVY8lQ4ZsVg
> Define a string constant instead of duplicating string literals:
> https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-_2KcVY8lQ4ZsVb=AW5md-_2KcVY8lQ4ZsVb



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-2552) Sonar: Save and reuse Random object

2019-11-19 Thread Dinesh Chitlangia (Jira)
Dinesh Chitlangia created HDDS-2552:
---

 Summary: Sonar: Save and reuse Random object
 Key: HDDS-2552
 URL: https://issues.apache.org/jira/browse/HDDS-2552
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Dinesh Chitlangia
Assignee: Shweta


[https://sonarcloud.io/project/issues?id=hadoop-ozone=AW5md-cLKcVY8lQ4Zr2o=AW5md-cLKcVY8lQ4Zr2o]
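For reference, a minimal sketch of the fix this rule suggests: keep one Random instance in a field and reuse it, rather than constructing a new Random on every call. KeyNameGenerator below is illustrative, not the flagged Ozone class.

{code:java}
import java.util.Random;

// Hypothetical sketch: reuse a single Random instance.
public class KeyNameGenerator {

  // Reused instance; creating a new Random per call is what Sonar flags.
  private final Random random = new Random();

  public String nextKeyName(String prefix) {
    return prefix + "-" + random.nextInt(Integer.MAX_VALUE);
  }
}
{code}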



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org


