[jira] [Commented] (HADOOP-17439) No shade guava in trunk

2020-12-19 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17252343#comment-17252343
 ] 

Lisheng Sun commented on HADOOP-17439:
--

Hi [~weichiu], I learned from my colleague in charge of Hive that the Hive 
classpath includes the Hadoop classpath.

So it causes conflicts between third-party dependency packages.

I understand this is a general problem.

> No shade guava in trunk
> ---
>
> Key: HADOOP-17439
> URL: https://issues.apache.org/jira/browse/HADOOP-17439
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: image-2020-12-18-22-01-45-424.png
>
>
> !image-2020-12-18-22-01-45-424.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17440) Downgrade guava version in trunk

2020-12-18 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-17440:
-
Description: 
See details in the comments on HADOOP-17439.

  was:
See details in the comments on HADOOP-17439.


> Downgrade guava version in trunk
> 
>
> Key: HADOOP-17440
> URL: https://issues.apache.org/jira/browse/HADOOP-17440
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-17440.001.patch
>
>
> See details in the comments on HADOOP-17439.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17439) No shade guava in trunk

2020-12-18 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17252091#comment-17252091
 ] 

Lisheng Sun commented on HADOOP-17439:
--

Thanks [~ste...@apache.org] for your attention to this issue.

The problem I have is that Hive's lib includes guava-11.jar, Hadoop's lib 
includes guava-27.0-jre.jar, and Hive's classpath includes Hadoop's classpath.
When Hive calls a Guava method, the method cannot be found at runtime because 
the Guava versions are incompatible.

Other dependent components will also encounter similar problems.
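
A minimal sketch of the failure mode (hypothetical class; the only assumption 
is Guava's own history: Objects.toStringHelper exists in Guava 11 but was 
removed in Guava 21+):
{code:java}
import com.google.common.base.Objects;

// Compiles fine against guava-11.jar, where Objects.toStringHelper exists.
public class GuavaConflictDemo {
  public static void main(String[] args) {
    // If guava-27.0-jre.jar is picked up first at runtime, this line throws
    // java.lang.NoSuchMethodError: toStringHelper was removed from
    // com.google.common.base.Objects in Guava 21 (moved to MoreObjects).
    System.out.println(Objects.toStringHelper(new Object())
        .add("key", "value")
        .toString());
  }
}
{code}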

> No shade guava in trunk
> ---
>
> Key: HADOOP-17439
> URL: https://issues.apache.org/jira/browse/HADOOP-17439
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: image-2020-12-18-22-01-45-424.png
>
>
> !image-2020-12-18-22-01-45-424.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17440) Downgrade guava version in trunk

2020-12-18 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-17440:
-
Attachment: HADOOP-17440.001.patch
Status: Patch Available  (was: Open)

> Downgrade guava version in trunk
> 
>
> Key: HADOOP-17440
> URL: https://issues.apache.org/jira/browse/HADOOP-17440
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-17440.001.patch
>
>
> See details in the comments on HADOOP-17439.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17440) Downgrade guava version in trunk

2020-12-18 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-17440:
-
Description: 
See details in the comments on HADOOP-17439.

 

> Downgrade guava version in trunk
> 
>
> Key: HADOOP-17440
> URL: https://issues.apache.org/jira/browse/HADOOP-17440
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-17440.001.patch
>
>
> See details in the comments on HADOOP-17439.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17440) Downgrade guava version in trunk

2020-12-18 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-17440:


 Summary: Downgrade guava version in trunk
 Key: HADOOP-17440
 URL: https://issues.apache.org/jira/browse/HADOOP-17440
 Project: Hadoop Common
  Issue Type: Task
Reporter: Lisheng Sun






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17439) No shade guava in trunk

2020-12-18 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17251823#comment-17251823
 ] 

Lisheng Sun commented on HADOOP-17439:
--

I very much agree with you.

But our company has dozens of components like Hive whose classpaths blindly 
take the entire Hadoop classpath.

The cost of modifying every dependent component the way you suggest is too 
high, so I want to fix the Guava problem in Hadoop itself.

> No shade guava in trunk
> ---
>
> Key: HADOOP-17439
> URL: https://issues.apache.org/jira/browse/HADOOP-17439
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: image-2020-12-18-22-01-45-424.png
>
>
> !image-2020-12-18-22-01-45-424.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17439) No shade guava in trunk

2020-12-18 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17251801#comment-17251801
 ] 

Lisheng Sun edited comment on HADOOP-17439 at 12/18/20, 3:03 PM:
-

I don’t understand what the current patch solves. Could you give me an example? 
Thank you [~ayushtkn]

Do we have plans to downgrade Guava to 11 or another version in trunk?


was (Author: leosun08):
I don’t understand what the current patch solves. Could you give me an example? 
Thank you [~ayushtkn]

Do we have plans to downgrade guava to version 11 in trunk?

> No shade guava in trunk
> ---
>
> Key: HADOOP-17439
> URL: https://issues.apache.org/jira/browse/HADOOP-17439
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: image-2020-12-18-22-01-45-424.png
>
>
> !image-2020-12-18-22-01-45-424.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17439) No shade guava in trunk

2020-12-18 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17251801#comment-17251801
 ] 

Lisheng Sun edited comment on HADOOP-17439 at 12/18/20, 3:01 PM:
-

I don’t understand what the current patch solves. Could you give me an example? 
Thank you [~ayushtkn]

Do we have plans to downgrade guava to version 11 in trunk?


was (Author: leosun08):
I don’t understand what the current patch solves. Could you give me an example? 
Thank you [~ayushtkn]

> No shade guava in trunk
> ---
>
> Key: HADOOP-17439
> URL: https://issues.apache.org/jira/browse/HADOOP-17439
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: image-2020-12-18-22-01-45-424.png
>
>
> !image-2020-12-18-22-01-45-424.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17439) No shade guava in trunk

2020-12-18 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17251801#comment-17251801
 ] 

Lisheng Sun commented on HADOOP-17439:
--

I don’t understand what the current patch solves. Could you give me an example? 
Thank you [~ayushtkn]

> No shade guava in trunk
> ---
>
> Key: HADOOP-17439
> URL: https://issues.apache.org/jira/browse/HADOOP-17439
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: image-2020-12-18-22-01-45-424.png
>
>
> !image-2020-12-18-22-01-45-424.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17439) No shade guava in trunk

2020-12-18 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17251788#comment-17251788
 ] 

Lisheng Sun commented on HADOOP-17439:
--

The problem I have is that Hive's lib includes guava-11.jar, Hadoop's lib 
includes guava-27.0-jre.jar, and Hive's classpath includes Hadoop's classpath. 
When Hive calls a Guava method, the method cannot be found at runtime because 
the Guava versions are incompatible.

Other dependent components will also encounter similar problems.

So I think we can remove the original Guava and keep only the shaded Guava.
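
A sketch of what keeping only the shaded Guava looks like in Hadoop code (the 
relocated package prefix below is the one hadoop-thirdparty publishes; see 
HADOOP-17288):
{code:java}
// Before: plain Guava, which drags guava-27.0-jre.jar onto the classpath of
// every component that inherits the Hadoop classpath.
// import com.google.common.base.Preconditions;

// After: Guava relocated by hadoop-thirdparty. Hadoop's jars then no longer
// reference com.google.common at all, so Hive can keep its own guava-11.jar.
import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;

public class ShadedGuavaSketch {
  static void setReplication(int replication) {
    Preconditions.checkArgument(replication > 0,
        "replication must be positive, got %s", replication);
  }

  public static void main(String[] args) {
    setReplication(3);
  }
}
{code}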

> No shade guava in trunk
> ---
>
> Key: HADOOP-17439
> URL: https://issues.apache.org/jira/browse/HADOOP-17439
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: image-2020-12-18-22-01-45-424.png
>
>
> !image-2020-12-18-22-01-45-424.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17439) No shade guava in trunk

2020-12-18 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17251783#comment-17251783
 ] 

Lisheng Sun edited comment on HADOOP-17439 at 12/18/20, 2:21 PM:
-

Hi [~ayushtkn],

currently, if a component that relies on Hadoop has a different version of 
Guava, there will still be Guava version conflicts, right?


was (Author: leosun08):
Currently, if a component that relies on Hadoop has a different version of 
Guava, there will still be Guava version conflicts, right?

> No shade guava in trunk
> ---
>
> Key: HADOOP-17439
> URL: https://issues.apache.org/jira/browse/HADOOP-17439
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: image-2020-12-18-22-01-45-424.png
>
>
> !image-2020-12-18-22-01-45-424.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17439) No shade guava in trunk

2020-12-18 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17251783#comment-17251783
 ] 

Lisheng Sun commented on HADOOP-17439:
--

Currently, if a component that relies on Hadoop has a different version of 
Guava, there will still be Guava version conflicts, right?

> No shade guava in trunk
> ---
>
> Key: HADOOP-17439
> URL: https://issues.apache.org/jira/browse/HADOOP-17439
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: image-2020-12-18-22-01-45-424.png
>
>
> !image-2020-12-18-22-01-45-424.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17288) Use shaded guava from thirdparty

2020-12-18 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17251780#comment-17251780
 ] 

Lisheng Sun commented on HADOOP-17288:
--

Hi [~ayushtkn],

I found no shaded Guava in my local trunk checkout.

I don't know whether it's my environment or something else.

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17439) No shade guava in trunk

2020-12-18 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-17439:
-
Summary: No shade guava in trunk  (was: No shade guava in branch)

> No shade guava in trunk
> ---
>
> Key: HADOOP-17439
> URL: https://issues.apache.org/jira/browse/HADOOP-17439
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: image-2020-12-18-22-01-45-424.png
>
>
> !image-2020-12-18-22-01-45-424.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17439) No shade guava in branch

2020-12-18 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-17439:


 Summary: No shade guava in branch
 Key: HADOOP-17439
 URL: https://issues.apache.org/jira/browse/HADOOP-17439
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Lisheng Sun
 Attachments: image-2020-12-18-22-01-45-424.png

!image-2020-12-18-22-01-45-424.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17294) Fix typo in comment

2020-10-02 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-17294:
-
Priority: Trivial  (was: Major)

> Fix typo in comment
> ---
>
> Key: HADOOP-17294
> URL: https://issues.apache.org/jira/browse/HADOOP-17294
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ikko Ashimine
>Priority: Trivial
> Attachments: fix-patch.diff
>
>
> existance -> existence



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17155) DF implementation of CachingGetSpaceUsed makes DFS Used size not correct

2020-07-27 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165477#comment-17165477
 ] 

Lisheng Sun commented on HADOOP-17155:
--

It indeed exists, and it's recommended to use HDFS-14313.
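
To make the double counting described below concrete, a sketch with 
hypothetical numbers (not code from the attached patch):
{code:java}
public class DfDoubleCountSketch {
  public static void main(String[] args) {
    // One volume (disk) hosting two block pools under federation.
    long dfDiskUsed = 100L * 1024 * 1024 * 1024; // df: 100 GiB used on disk

    // DF-based CachingGetSpaceUsed: each BP directory reports the usage of
    // the whole disk it lives on, not the size of its own directory.
    long bp1Used = dfDiskUsed; // 100 GiB
    long bp2Used = dfDiskUsed; // 100 GiB, the same disk counted again

    // The DN sums per-BP usage, so it reports 200 GiB of a 100 GiB real usage.
    System.out.println("reported=" + (bp1Used + bp2Used)
        + " real=" + dfDiskUsed);
  }
}
{code}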

> DF implementation of CachingGetSpaceUsed makes DFS Used size not correct
> 
>
> Key: HADOOP-17155
> URL: https://issues.apache.org/jira/browse/HADOOP-17155
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: angerszhu
>Priority: Minor
> Attachments: HADOOP-17155.1.patch
>
>
> When we calculate a DN's storage used, we add each volume's used size 
> together, and each volume's size comes from its BPs' sizes. 
> When we use DF instead of DU, DF checks disk space usage (not the size of 
> a directory), so when checking a BP dir path, what is actually checked is 
> the space of the corresponding disk. 
>  
> When we use this with federation, each volume may have more than one BP, 
> and each BP returns the space of its corresponding disk. 
>  
> If we have two BPs under one volume, we will make the DN's storage info's 
> Used size double the real size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17030) Remove unused joda-time

2020-05-05 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-17030:
-
Attachment: HADOOP-17030.001.patch
Status: Patch Available  (was: Open)

> Remove unused joda-time
> ---
>
> Key: HADOOP-17030
> URL: https://issues.apache.org/jira/browse/HADOOP-17030
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-17030.001.patch
>
>
> Joda-time is defined in the hadoop-project/pom.xml but it's not used 
> anywhere. It should be easy to remove it without problems.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16886) Add hadoop.http.idle_timeout.ms to core-default.xml

2020-04-25 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17092248#comment-17092248
 ] 

Lisheng Sun commented on HADOOP-16886:
--

I added the branch-3.1 and branch-3.2 patches.

> Add hadoop.http.idle_timeout.ms to core-default.xml
> ---
>
> Key: HADOOP-16886
> URL: https://issues.apache.org/jira/browse/HADOOP-16886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0, 3.0.4, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16886-001.patch, 
> HADOOP-16886-branch-3.1.001.patch, HADOOP-16886-branch-3.2.001.patch, 
> HADOOP-16886.002.patch
>
>
> HADOOP-15696 made the http server connection idle time configurable  
> (hadoop.http.idle_timeout.ms).
> This configuration key is added to kms-default.xml and httpfs-default.xml but 
> we missed it in core-default.xml. We should add it there because NNs/JNs/DNs 
> use it too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16886) Add hadoop.http.idle_timeout.ms to core-default.xml

2020-04-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16886:
-
Attachment: HADOOP-16886-branch-3.2.001.patch

> Add hadoop.http.idle_timeout.ms to core-default.xml
> ---
>
> Key: HADOOP-16886
> URL: https://issues.apache.org/jira/browse/HADOOP-16886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0, 3.0.4, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16886-001.patch, 
> HADOOP-16886-branch-3.1.001.patch, HADOOP-16886-branch-3.2.001.patch, 
> HADOOP-16886.002.patch
>
>
> HADOOP-15696 made the http server connection idle time configurable  
> (hadoop.http.idle_timeout.ms).
> This configuration key is added to kms-default.xml and httpfs-default.xml but 
> we missed it in core-default.xml. We should add it there because NNs/JNs/DNs 
> use it too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16886) Add hadoop.http.idle_timeout.ms to core-default.xml

2020-04-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16886:
-
Attachment: HADOOP-16886-branch-3.1.001.patch

> Add hadoop.http.idle_timeout.ms to core-default.xml
> ---
>
> Key: HADOOP-16886
> URL: https://issues.apache.org/jira/browse/HADOOP-16886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0, 3.0.4, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16886-001.patch, 
> HADOOP-16886-branch-3.1.001.patch, HADOOP-16886.002.patch
>
>
> HADOOP-15696 made the http server connection idle time configurable  
> (hadoop.http.idle_timeout.ms).
> This configuration key is added to kms-default.xml and httpfs-default.xml but 
> we missed it in core-default.xml. We should add it there because NNs/JNs/DNs 
> use it too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16671) Optimize InnerNodeImpl#getLeaf

2020-04-21 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16671:
-
Status: Patch Available  (was: Reopened)

> Optimize InnerNodeImpl#getLeaf
> --
>
> Key: HADOOP-16671
> URL: https://issues.apache.org/jira/browse/HADOOP-16671
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16671.001.patch
>
>
> {code:java}
> @Override
> public Node getLeaf(int leafIndex, Node excludedNode) {
>   int count=0;
>   // check if the excluded node a leaf
>   boolean isLeaf = !(excludedNode instanceof InnerNode);
>   // calculate the total number of excluded leaf nodes
>   int numOfExcludedLeaves =
>   isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
>   if (isLeafParent()) { // children are leaves
> if (isLeaf) { // excluded node is a leaf node
>   if (excludedNode != null &&
>   childrenMap.containsKey(excludedNode.getName())) {
> int excludedIndex = children.indexOf(excludedNode);
> if (excludedIndex != -1 && leafIndex >= 0) {
>   // excluded node is one of the children so adjust the leaf index
>   leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> }
>   }
> }
> // range check
> if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
>   return null;
> }
> return children.get(leafIndex);
>   } else {
> {code}
> The code of InnerNodeImpl#getLeaf() is shown above.
> I think it has two problems:
> 1. If childrenMap.containsKey(excludedNode.getName()) returns true, 
> children.indexOf(excludedNode) must return > -1, so is the 
> if (excludedIndex != -1) check necessary?
> 2. If excludedIndex == children.size() - 1, then with the current code
> leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> leafIndex goes out of range and null is returned, even though there are 
> nodes that could be returned.
> I think a check for excludedIndex == children.size() - 1 should be added.
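
A sketch of the first simplification suggested above (illustrative only, not 
the attached patch; Node is replaced by String to keep it self-contained, and 
the second point depends on the intended valid range of leafIndex):
{code:java}
import java.util.Arrays;
import java.util.List;

public class GetLeafIndexSketch {
  // Once membership of the excluded child is established, indexOf() cannot
  // return -1, so the original excludedIndex != -1 check is redundant.
  static Integer adjust(int leafIndex, String excluded, List<String> children) {
    if (excluded != null && children.contains(excluded)) {
      int excludedIndex = children.indexOf(excluded);
      if (leafIndex >= 0 && leafIndex >= excludedIndex) {
        leafIndex++; // skip over the excluded child
      }
    }
    // range check, as in the original code
    return (leafIndex < 0 || leafIndex >= children.size()) ? null : leafIndex;
  }

  public static void main(String[] args) {
    // leaf 1 with "d1" excluded maps to child index 2, i.e. "d3"
    System.out.println(adjust(1, "d1", Arrays.asList("d1", "d2", "d3")));
  }
}
{code}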



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16886) Add hadoop.http.idle_timeout.ms to core-default.xml

2020-04-19 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17087312#comment-17087312
 ] 

Lisheng Sun commented on HADOOP-16886:
--

I ran TestFixKerberosTicketOrder successfully in my local environment.

The failed UT is not related to this JIRA.

> Add hadoop.http.idle_timeout.ms to core-default.xml
> ---
>
> Key: HADOOP-16886
> URL: https://issues.apache.org/jira/browse/HADOOP-16886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0, 3.0.4, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16886-001.patch, HADOOP-16886.002.patch
>
>
> HADOOP-15696 made the http server connection idle time configurable  
> (hadoop.http.idle_timeout.ms).
> This configuration key is added to kms-default.xml and httpfs-default.xml but 
> we missed it in core-default.xml. We should add it there because NNs/JNs/DNs 
> use it too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16982) Update Netty to 4.1.48.Final

2020-04-14 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083353#comment-17083353
 ] 

Lisheng Sun commented on HADOOP-16982:
--

Thanks [~iwasakims] for your report. I updated the patch to v003.


> Update Netty to 4.1.48.Final
> 
>
> Key: HADOOP-16982
> URL: https://issues.apache.org/jira/browse/HADOOP-16982
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16982.001.patch, HADOOP-16982.002.patch, 
> HADOOP-16982.003.patch
>
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16982) Update Netty to 4.1.48.Final

2020-04-14 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16982:
-
Attachment: HADOOP-16982.003.patch

> Update Netty to 4.1.48.Final
> 
>
> Key: HADOOP-16982
> URL: https://issues.apache.org/jira/browse/HADOOP-16982
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16982.001.patch, HADOOP-16982.002.patch, 
> HADOOP-16982.003.patch
>
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16982) Update Netty to 4.1.48.Final

2020-04-14 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16982:
-
Attachment: HADOOP-16982.002.patch

> Update Netty to 4.1.48.Final
> 
>
> Key: HADOOP-16982
> URL: https://issues.apache.org/jira/browse/HADOOP-16982
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16982.001.patch, HADOOP-16982.002.patch
>
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16982) Update Netty to 4.1.48.Final

2020-04-14 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17083212#comment-17083212
 ] 

Lisheng Sun commented on HADOOP-16982:
--

Sorry, I was away for a long time.
I really did not pay attention to the problem of jar conflicts.
Per the comments above, I will update the patch later.

> Update Netty to 4.1.48.Final
> 
>
> Key: HADOOP-16982
> URL: https://issues.apache.org/jira/browse/HADOOP-16982
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16982.001.patch
>
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16982) Update Netty to 4.1.48.Final

2020-04-14 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082911#comment-17082911
 ] 

Lisheng Sun edited comment on HADOOP-16982 at 4/14/20, 6:45 AM:


Hi [~iwasakims],
after reverting this patch, I ran the UT 
TestDatanodeHttpXFrame#testDataNodeXFrameOptionsEnabled and it still failed 
locally, so the failed UT is unrelated to this patch.
I think we should create a new JIRA for that problem.


was (Author: leosun08):
Hi [~iwasakims],
after reverting this patch, I ran the UT 
TestDatanodeHttpXFrame#testDataNodeXFrameOptionsEnabled and it still failed 
locally, so the failed UT is unrelated to this patch.

> Update Netty to 4.1.48.Final
> 
>
> Key: HADOOP-16982
> URL: https://issues.apache.org/jira/browse/HADOOP-16982
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16982.001.patch
>
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16982) Update Netty to 4.1.48.Final

2020-04-14 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082911#comment-17082911
 ] 

Lisheng Sun commented on HADOOP-16982:
--

[~iwasakims],
after reverting this patch, I ran the UT 
TestDatanodeHttpXFrame#testDataNodeXFrameOptionsEnabled and it still failed 
locally, so the failed UT is unrelated to this patch.

> Update Netty to 4.1.48.Final
> 
>
> Key: HADOOP-16982
> URL: https://issues.apache.org/jira/browse/HADOOP-16982
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16982.001.patch
>
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16982) Update Netty to 4.1.48.Final

2020-04-14 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082911#comment-17082911
 ] 

Lisheng Sun edited comment on HADOOP-16982 at 4/14/20, 6:42 AM:


Hi [~iwasakims],
after reverting this patch, I ran the UT 
TestDatanodeHttpXFrame#testDataNodeXFrameOptionsEnabled and it still failed 
locally, so the failed UT is unrelated to this patch.


was (Author: leosun08):
[~iwasakims],
after reverting this patch, I ran the UT 
TestDatanodeHttpXFrame#testDataNodeXFrameOptionsEnabled and it still failed 
locally, so the failed UT is unrelated to this patch.

> Update Netty to 4.1.48.Final
> 
>
> Key: HADOOP-16982
> URL: https://issues.apache.org/jira/browse/HADOOP-16982
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16982.001.patch
>
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16982) Update Netty to 4.1.48.Final

2020-04-13 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16982:
-
Status: Patch Available  (was: Open)

> Update Netty to 4.1.48.Final
> 
>
> Key: HADOOP-16982
> URL: https://issues.apache.org/jira/browse/HADOOP-16982
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16982.001.patch
>
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16886) Add hadoop.http.idle_timeout.ms to core-default.xml

2020-04-13 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17082793#comment-17082793
 ] 

Lisheng Sun commented on HADOOP-16886:
--

[~weichiu] Could you help review it? Thank you.

> Add hadoop.http.idle_timeout.ms to core-default.xml
> ---
>
> Key: HADOOP-16886
> URL: https://issues.apache.org/jira/browse/HADOOP-16886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0, 3.0.4, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16886-001.patch, HADOOP-16886.002.patch
>
>
> HADOOP-15696 made the http server connection idle time configurable  
> (hadoop.http.idle_timeout.ms).
> This configuration key is added to kms-default.xml and httpfs-default.xml but 
> we missed it in core-default.xml. We should add it there because NNs/JNs/DNs 
> use it too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16982) Update Netty to 4.1.48.Final

2020-04-13 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16982:
-
Attachment: HADOOP-16982.001.patch

> Update Netty to 4.1.48.Final
> 
>
> Key: HADOOP-16982
> URL: https://issues.apache.org/jira/browse/HADOOP-16982
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.0
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
> Fix For: 3.3.0
>
> Attachments: HADOOP-16982.001.patch
>
>
> We are currently on Netty 4.1.45.Final. We should update to the latest 
> 4.1.48.Final



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10

2020-02-27 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16882:
-
Attachment: HADOOP-16882.branch-2.10.v1.patch

> Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10
> --
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.10.v1.patch, 
> HADOOP-16882.branch-2.9.v1.patch, HADOOP-16882.branch-3.1.v1.patch, 
> HADOOP-16882.branch-3.1.v2.patch, HADOOP-16882.branch-3.1.v3.patch, 
> HADOOP-16882.branch-3.1.v4.patch
>
>
> We updated jackson-databind multiple times, but those changes only made it 
> into trunk and branch-3.2.
> Unless a dependency update is backward incompatible (which is not the case 
> here), we should update it in all active branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10

2020-02-27 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16882:
-
Attachment: HADOOP-16882.branch-3.1.v4.patch

> Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10
> --
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.9.v1.patch, 
> HADOOP-16882.branch-3.1.v1.patch, HADOOP-16882.branch-3.1.v2.patch, 
> HADOOP-16882.branch-3.1.v3.patch, HADOOP-16882.branch-3.1.v4.patch
>
>
> We updated jackson-databind multiple times, but those changes only made it 
> into trunk and branch-3.2.
> Unless a dependency update is backward incompatible (which is not the case 
> here), we should update it in all active branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16882) Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10

2020-02-26 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reassigned HADOOP-16882:


Assignee: Lisheng Sun

> Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10
> --
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.9.v1.patch, 
> HADOOP-16882.branch-3.1.v1.patch, HADOOP-16882.branch-3.1.v2.patch, 
> HADOOP-16882.branch-3.1.v3.patch
>
>
> We updated jackson-databind multiple times, but those changes only made it 
> into trunk and branch-3.2.
> Unless a dependency update is backward incompatible (which is not the case 
> here), we should update it in all active branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10

2020-02-26 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16882:
-
Attachment: HADOOP-16882.branch-3.1.v3.patch

> Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10
> --
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.9.v1.patch, 
> HADOOP-16882.branch-3.1.v1.patch, HADOOP-16882.branch-3.1.v2.patch, 
> HADOOP-16882.branch-3.1.v3.patch
>
>
> We updated jackson-databind multiple times, but those changes only made it 
> into trunk and branch-3.2.
> Unless a dependency update is backward incompatible (which is not the case 
> here), we should update it in all active branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16886) Add hadoop.http.idle_timeout.ms to core-default.xml

2020-02-26 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reassigned HADOOP-16886:


Assignee: Lisheng Sun

> Add hadoop.http.idle_timeout.ms to core-default.xml
> ---
>
> Key: HADOOP-16886
> URL: https://issues.apache.org/jira/browse/HADOOP-16886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0, 3.0.4, 3.1.2
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16886-001.patch, HADOOP-16886.002.patch
>
>
> HADOOP-15696 made the http server connection idle time configurable  
> (hadoop.http.idle_timeout.ms).
> This configuration key is added to kms-default.xml and httpfs-default.xml but 
> we missed it in core-default.xml. We should add it there because NNs/JNs/DNs 
> use it too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16886) Add hadoop.http.idle_timeout.ms to core-default.xml

2020-02-26 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16886:
-
Attachment: HADOOP-16886.002.patch

> Add hadoop.http.idle_timeout.ms to core-default.xml
> ---
>
> Key: HADOOP-16886
> URL: https://issues.apache.org/jira/browse/HADOOP-16886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.0, 3.0.4, 3.1.2
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-16886-001.patch, HADOOP-16886.002.patch
>
>
> HADOOP-15696 made the http server connection idle time configurable  
> (hadoop.http.idle_timeout.ms).
> This configuration key is added to kms-default.xml and httpfs-default.xml but 
> we missed it in core-default.xml. We should add it there because NNs/JNs/DNs 
> use it too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10

2020-02-26 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16882:
-
Attachment: HADOOP-16882.branch-3.1.v2.patch

> Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10
> --
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.9.v1.patch, 
> HADOOP-16882.branch-3.1.v1.patch, HADOOP-16882.branch-3.1.v2.patch
>
>
> We updated jackson-databind multiple times, but those changes only made it 
> into trunk and branch-3.2.
> Unless a dependency update is backward incompatible (which is not the case 
> here), we should update it in all active branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16882) Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10

2020-02-26 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046061#comment-17046061
 ] 

Lisheng Sun edited comment on HADOOP-16882 at 2/27/20 2:16 AM:
---


{code:java}
2.9.10     // jackson2 version in trunk
2.9.10.2   // jackson-databind version in trunk
{code}
The trunk versions are as above, so I think branch-3.1 and branch-2.9 should 
match trunk.
I upgraded jackson-databind to 2.9.10.2 and jackson2 to 2.9.10 in branch-3.1 
and uploaded the branch-3.1.v2 patch.


was (Author: leosun08):
I upgraded the jackson-databind and jackson2 versions to 2.9.10.2 in 
branch-3.1 and uploaded the branch-3.1.v2 patch.

> Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10
> --
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.9.v1.patch, 
> HADOOP-16882.branch-3.1.v1.patch
>
>
> We updated jackson-databind multiple times but those changes only made into 
> trunk and branch-3.2.
> Unless the dependency update is backward incompatible (which is not in this 
> case), we should update them in all active branches



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10

2020-02-26 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16882:
-
Attachment: (was: HADOOP-16882.branch-3.1.v2.patch)

> Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10
> --
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.9.v1.patch, 
> HADOOP-16882.branch-3.1.v1.patch
>
>
> We updated jackson-databind multiple times but those changes only made into 
> trunk and branch-3.2.
> Unless the dependency update is backward incompatible (which is not in this 
> case), we should update them in all active branches



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16882) Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10

2020-02-26 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17046061#comment-17046061
 ] 

Lisheng Sun commented on HADOOP-16882:
--

I upgraded the jackson-databind and jackson2 versions to 2.9.10.2 in 
branch-3.1 and uploaded the branch-3.1.v2 patch.

> Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10
> --
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.9.v1.patch, 
> HADOOP-16882.branch-3.1.v1.patch, HADOOP-16882.branch-3.1.v2.patch
>
>
> We updated jackson-databind multiple times, but those changes only made it 
> into trunk and branch-3.2.
> Unless a dependency update is backward incompatible (which is not the case 
> here), we should update it in all active branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10

2020-02-26 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16882:
-
Attachment: HADOOP-16882.branch-3.1.v2.patch

> Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10
> --
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.9.v1.patch, 
> HADOOP-16882.branch-3.1.v1.patch, HADOOP-16882.branch-3.1.v2.patch
>
>
> We updated jackson-databind multiple times, but those changes only made it 
> into trunk and branch-3.2.
> Unless a dependency update is backward incompatible (which is not the case 
> here), we should update it in all active branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10

2020-02-26 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16882:
-
Summary: Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10  
(was: Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10)

> Update jackson-databind to 2.9.10.2 in branch-3.1, branch-2.10
> --
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.9.v1.patch, 
> HADOOP-16882.branch-3.1.v1.patch
>
>
> We updated jackson-databind multiple times, but those changes only made it 
> into trunk and branch-3.2.
> Unless a dependency update is backward incompatible (which is not the case 
> here), we should update it in all active branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16886) Add hadoop.http.idle_timeout.ms to core-default.xml

2020-02-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16886:
-
Attachment: HADOOP-16886-001.patch
Status: Patch Available  (was: Open)

> Add hadoop.http.idle_timeout.ms to core-default.xml
> ---
>
> Key: HADOOP-16886
> URL: https://issues.apache.org/jira/browse/HADOOP-16886
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.2, 3.2.0, 3.0.4
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-16886-001.patch
>
>
> HADOOP-15696 made the http server connection idle time configurable  
> (hadoop.http.idle_timeout.ms).
> This configuration key is added to kms-default.xml and httpfs-default.xml but 
> we missed it in core-default.xml. We should add it there because NNs/JNs/DNs 
> use it too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10

2020-02-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16882:
-
Attachment: HADOOP-16882.branch-3.1.v1.patch

> Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
> 
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.9.v1.patch, 
> HADOOP-16882.branch-3.1.v1.patch
>
>
> We updated jackson-databind multiple times, but those changes only made it 
> into trunk and branch-3.2.
> Unless a dependency update is backward incompatible (which is not the case 
> here), we should update it in all active branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10

2020-02-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16882:
-
Attachment: HADOOP-16882.branch-2.9.v1.patch

> Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
> 
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-16882.branch-2.9.v1.patch, 
> HADOOP-16882.branch-3.1.v1.patch
>
>
> We updated jackson-databind multiple times, but those changes only made it 
> into trunk and branch-3.2.
> Unless a dependency update is backward incompatible (which is not the case 
> here), we should update it in all active branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10

2020-02-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16882:
-
Attachment: (was: HADOOP-1688.branch-3.1.v2.patch)

> Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
> 
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
>
> We updated jackson-databind multiple times, but those changes only made it 
> into trunk and branch-3.2.
> Unless a dependency update is backward incompatible (which is not the case 
> here), we should update it in all active branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16882) Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10

2020-02-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16882:
-
Attachment: HADOOP-1688.branch-3.1.v2.patch
Status: Patch Available  (was: Open)

> Update jackson-databind to 2.10.2 in branch-3.1, branch-2.10
> 
>
> Key: HADOOP-16882
> URL: https://issues.apache.org/jira/browse/HADOOP-16882
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: release-blocker
> Attachments: HADOOP-1688.branch-3.1.v2.patch
>
>
> We updated jackson-databind multiple times, but those changes only made it 
> into trunk and branch-3.2.
> Unless a dependency update is backward incompatible (which is not the case 
> here), we should update it in all active branches.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16793) Redefine log level when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-19 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17019175#comment-17019175
 ] 

Lisheng Sun commented on HADOOP-16793:
--

Hi [~elgoiri],

Should we commit this patch to trunk? Thank you.

> Redefine log level when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> --
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16793.001.patch, HADOOP-16793.002.patch, 
> HADOOP-16793.003.patch
>
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. It's not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16793) Redefine log level when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-15 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17016570#comment-17016570
 ] 

Lisheng Sun commented on HADOOP-16793:
--

[~elgoiri]

I fixed the checkstyle issue and uploaded the v003 patch.

> Redefine log level when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> --
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16793.001.patch, HADOOP-16793.002.patch, 
> HADOOP-16793.003.patch
>
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. It's not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16793) Redefine log level when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-15 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16793:
-
Attachment: HADOOP-16793.003.patch

> Redefine log level when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> --
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16793.001.patch, HADOOP-16793.002.patch, 
> HADOOP-16793.003.patch
>
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. It's not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16793) Redefine log level when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-15 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16793:
-
Attachment: (was: HADOOP-16793.003.patch)

> Redefine log level when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> --
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16793.001.patch, HADOOP-16793.002.patch, 
> HADOOP-16793.003.patch
>
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. It's not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16793) Redefine log level when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-15 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reassigned HADOOP-16793:


Assignee: Lisheng Sun

> Redefine log level when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> --
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16793.001.patch, HADOOP-16793.002.patch, 
> HADOOP-16793.003.patch
>
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. It's not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16793) Redefine log level when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-15 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16793:
-
Attachment: HADOOP-16793.003.patch

> Redefine log level when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> --
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16793.001.patch, HADOOP-16793.002.patch, 
> HADOOP-16793.003.patch
>
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. It's not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16793) Redefine log level when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-14 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16793:
-
Summary: Redefine log level when ipc connection interrupted in 
Client#handleSaslConnectionFailure()  (was: Remove WARN log when ipc connection 
interrupted in Client#handleSaslConnectionFailure())

> Redefine log level when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> --
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16793.001.patch, HADOOP-16793.002.patch
>
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. It's not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16793) Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-14 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17015542#comment-17015542
 ] 

Lisheng Sun commented on HADOOP-16793:
--

Thanks [~elgoiri] for your suggestion. 

I updated the patch and uploaded the v002 patch.

> Remove WARN log when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> ---
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16793.001.patch, HADOOP-16793.002.patch
>
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. It's not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16793) Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-14 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16793:
-
Attachment: HADOOP-16793.002.patch

> Remove WARN log when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> ---
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16793.001.patch, HADOOP-16793.002.patch
>
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. It's not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16793) Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-12 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17013988#comment-17013988
 ] 

Lisheng Sun commented on HADOOP-16793:
--

Hi [~ayushtkn] [~elgoiri], could you help review this patch? Thank you.

> Remove WARN log when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> ---
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16793.001.patch
>
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. It's not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16793) Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-08 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16793:
-
Status: Patch Available  (was: Open)

> Remove WARN log when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> ---
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16793.001.patch
>
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. It's not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16793) Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-07 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16793:
-
Attachment: HADOOP-16793.001.patch

> Remove WARN log when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> ---
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16793.001.patch
>
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. It's not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16793) Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-07 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16793:
-
Description: 
log info:
{code:java}
// Some comments here
2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
encountered while connecting to the server : java.io.InterruptedIOException: 
Interrupted while waiting for IO on channel 
java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout left
{code}

With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
all namenodes. After one request returns successfully, all other requests will 
be interrupted. It's not a big problem and should not print a warning log.
{code:java}
private synchronized void handleSaslConnectionFailure(

LOG.warn("Exception encountered while connecting to "
+ "the server : " + ex);
}


{code}
 

  was:
{code:java}
private synchronized void handleSaslConnectionFailure(

LOG.warn("Exception encountered while connecting to "
+ "the server : " + ex);
}


{code}
With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
all namenodes. After one request returns successfully, all other requests will 
be interrupted. It's not a big problem and should not print a warning log.


> Remove WARN log when ipc connection interrupted in 
> Client#handleSaslConnectionFailure()
> ---
>
> Key: HADOOP-16793
> URL: https://issues.apache.org/jira/browse/HADOOP-16793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Priority: Minor
>
> log info:
> {code:java}
> // Some comments here
> 2020-01-07,15:01:17,816 WARN org.apache.hadoop.ipc.Client: Exception 
> encountered while connecting to the server : java.io.InterruptedIOException: 
> Interrupted while waiting for IO on channel 
> java.nio.channels.SocketChannel[connected local=/10.75.13.227:50415 
> remote=mb2-hadoop-prc-ct06.awsind/10.75.15.99:11230]. 6 millis timeout 
> left
> {code}
> With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
> all namenodes. After one request returns successfully, all other requests 
> will be interrupted. It's not a big problem and should not print a warning 
> log.
> {code:java}
> private synchronized void handleSaslConnectionFailure(
> 
> LOG.warn("Exception encountered while connecting to "
> + "the server : " + ex);
> }
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16793) Remove WARN log when ipc connection interrupted in Client#handleSaslConnectionFailure()

2020-01-07 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-16793:


 Summary: Remove WARN log when ipc connection interrupted in 
Client#handleSaslConnectionFailure()
 Key: HADOOP-16793
 URL: https://issues.apache.org/jira/browse/HADOOP-16793
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Lisheng Sun


{code:java}
private synchronized void handleSaslConnectionFailure(

LOG.warn("Exception encountered while connecting to "
+ "the server : " + ex);
}


{code}
With RequestHedgingProxyProvider, one rpc call will send multiple requests to 
all namenodes. After one request returns successfully, all other requests will 
be interrupted. It's not a big problem and should not print a warning log.
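A minimal sketch of the direction proposed here, assuming slf4j logging; the 
class and method names (SaslFailureLogging, logConnectionFailure) are 
hypothetical stand-ins rather than the real Client internals:
{code:java}
import java.io.IOException;
import java.io.InterruptedIOException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical, simplified stand-in for the ipc Client path above; only
// the log-level decision is modeled, not the SASL retry handling.
public class SaslFailureLogging {
  private static final Logger LOG =
      LoggerFactory.getLogger(SaslFailureLogging.class);

  static void logConnectionFailure(String server, IOException ex) {
    if (ex instanceof InterruptedIOException) {
      // With hedged requests the losing calls are interrupted on
      // purpose, so this case is routine and only worth DEBUG.
      LOG.debug("Exception encountered while connecting to the server {}",
          server, ex);
    } else {
      LOG.warn("Exception encountered while connecting to the server {}",
          server, ex);
    }
  }
}
{code}
The point is only that an interrupt-driven failure is expected under hedging, 
while any other connection failure still deserves a WARN.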



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16720) Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()

2019-11-19 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun resolved HADOOP-16720.
--
Resolution: Duplicate

> Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()
> 
>
> Key: HADOOP-16720
> URL: https://issues.apache.org/jira/browse/HADOOP-16720
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Priority: Major
>
> When priority == QUEUE_WITH_CORRUPT_BLOCKS, it means no block in 
> neededReplication needs a replica. 
> In the current code, using continue costs one more pointless check (priority 
> == QUEUE_WITH_CORRUPT_BLOCKS).
> I think it should use break instead of continue.
> {code:java}
>  */
> synchronized List<List<BlockInfo>> chooseLowRedundancyBlocks(
>     int blocksToProcess) {
>   final List<List<BlockInfo>> blocksToReconstruct = new ArrayList<>(LEVEL);
>   int count = 0;
>   int priority = 0;
>   for (; count < blocksToProcess && priority < LEVEL; priority++) {
> if (priority == QUEUE_WITH_CORRUPT_BLOCKS) {
>   // do not choose corrupted blocks.
>   continue;
> }
> ...
>
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16720) Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()

2019-11-19 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-16720:


 Summary: Optimize LowRedundancyBlocks#chooseLowRedundancyBlocks()
 Key: HADOOP-16720
 URL: https://issues.apache.org/jira/browse/HADOOP-16720
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Lisheng Sun


When priority == QUEUE_WITH_CORRUPT_BLOCKS, it means no block in 
neededReplication needs a replica. 

In the current code, using continue costs one more pointless check (priority 
== QUEUE_WITH_CORRUPT_BLOCKS).

I think it should use break instead of continue.
{code:java}
 */
synchronized List<List<BlockInfo>> chooseLowRedundancyBlocks(
    int blocksToProcess) {
  final List<List<BlockInfo>> blocksToReconstruct = new ArrayList<>(LEVEL);

  int count = 0;
  int priority = 0;
  for (; count < blocksToProcess && priority < LEVEL; priority++) {
if (priority == QUEUE_WITH_CORRUPT_BLOCKS) {
  // do not choose corrupted blocks.
  continue;
}
...
   
}
{code}
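As a self-contained illustration of the break-vs-continue point (the loop shape 
mirrors the snippet above; the constant values are assumptions matching 
LowRedundancyBlocks, where the corrupt queue is the last level):
{code:java}
// Small model of the priority scan: break ends the scan as soon as the
// corrupt queue is reached, while continue would re-evaluate the loop
// condition once more and then fall out anyway.
public class PriorityScanSketch {
  static final int LEVEL = 5;                              // assumed
  static final int QUEUE_WITH_CORRUPT_BLOCKS = LEVEL - 1;  // assumed last

  static int countProcessedLevels(int blocksToProcess) {
    int count = 0;
    for (int priority = 0; count < blocksToProcess && priority < LEVEL;
        priority++) {
      if (priority == QUEUE_WITH_CORRUPT_BLOCKS) {
        break; // do not choose corrupted blocks; no later level exists
      }
      count++; // stand-in for collecting blocks at this priority
    }
    return count;
  }
}
{code}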



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16671) Optimize InnerNodeImpl#getLeaf

2019-10-29 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16961958#comment-16961958
 ] 

Lisheng Sun commented on HADOOP-16671:
--

Thank you [~hexiaoqiao] for your comments.

It is necessary to keep strict argument checks, considering that 
InnerNodeImpl#getLeaf is a public method.

And since InnerNodeImpl#getLeaf is a public method,
{code:java}
if (excludedIndex != -1 && leafIndex >= 0) {
  // excluded node is one of the children so adjust the leaf index
  leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
}
{code}
excludedIndex could be equal to children.size() - 1; when leafIndex >= 
excludedIndex, the adjusted leafIndex is out of range.

This situation can be handled specially, leafIndex = excludedIndex - 1.

Please correct me if I am wrong. Thank you.
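A toy model of the adjustment under discussion, using a hypothetical helper 
(getLeafSkipping) instead of the real InnerNodeImpl state:
{code:java}
import java.util.List;

// Excluding index e shifts every logical index i >= e to physical index
// i + 1, which overflows when the excluded child is the last one.
public class LeafIndexSketch {
  static <T> T getLeafSkipping(List<T> children, int leafIndex,
      int excludedIndex) {
    if (excludedIndex != -1 && leafIndex >= excludedIndex) {
      leafIndex++; // adjust past the excluded child
    }
    // range check, mirroring InnerNodeImpl#getLeaf
    if (leafIndex < 0 || leafIndex >= children.size()) {
      return null;
    }
    return children.get(leafIndex);
  }
}
{code}
For example, with children = [a, b, c], excludedIndex = 2 and leafIndex = 2, 
the adjusted index becomes 3 and the method returns null, which is exactly the 
boundary case raised above.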

> Optimize InnerNodeImpl#getLeaf
> --
>
> Key: HADOOP-16671
> URL: https://issues.apache.org/jira/browse/HADOOP-16671
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16671.001.patch
>
>
> {code:java}
> @Override
> public Node getLeaf(int leafIndex, Node excludedNode) {
>   int count=0;
>   // check if the excluded node a leaf
>   boolean isLeaf = !(excludedNode instanceof InnerNode);
>   // calculate the total number of excluded leaf nodes
>   int numOfExcludedLeaves =
>   isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
>   if (isLeafParent()) { // children are leaves
> if (isLeaf) { // excluded node is a leaf node
>   if (excludedNode != null &&
>   childrenMap.containsKey(excludedNode.getName())) {
> int excludedIndex = children.indexOf(excludedNode);
> if (excludedIndex != -1 && leafIndex >= 0) {
>   // excluded node is one of the children so adjust the leaf index
>   leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> }
>   }
> }
> // range check
> if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
>   return null;
> }
> return children.get(leafIndex);
>   } else {
> {code}
> the code of InnerNodeImpl#getLeaf() is shown above.
> I think it has two problems:
> 1. If childrenMap.containsKey(excludedNode.getName()) returns true, 
> children.indexOf(excludedNode) must return > -1, so is the check 
> (excludedIndex != -1) necessary?
> 2. If excludedIndex == children.size() - 1, then under the current code:
> leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> leafIndex will be out of range and getLeaf returns null, even though there 
> are still nodes that could be returned.
> I think it should add a check for excludedIndex == children.size() - 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16675) Upgrade jackson-databind to 2.9.10.1

2019-10-29 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16675:
-
Attachment: HADOOP-16675.001.patch
Status: Patch Available  (was: Open)

> Upgrade jackson-databind to 2.9.10.1
> 
>
> Key: HADOOP-16675
> URL: https://issues.apache.org/jira/browse/HADOOP-16675
> Project: Hadoop Common
>  Issue Type: Task
>  Components: security
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
> Attachments: HADOOP-16675.001.patch
>
>
> Several net new CVEs were raised against jackson-databind 2.9.10.
> CVE-2019-16942
> CVE-2019-16943
> 2.9.10.1 is released, which I believe addresses these two CVEs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16671) Optimize InnerNodeImpl#getLeaf

2019-10-27 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960550#comment-16960550
 ] 

Lisheng Sun commented on HADOOP-16671:
--

[~ayushtkn] 

I verified that, as the current logic stands, the leafIndex coming in from 
NetworkTopology#chooseRandom must be valid (0 <= leafIndex < 
this.getNumOfChildren()), so for the range check
{code:java}
// range check
if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
  return null;
}
{code}
I think we can remove it. Thank you.
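For reference, a rough model of why the incoming index stays in range; this is 
an assumption about how the caller picks the index, not a quote of the Hadoop 
source:
{code:java}
import java.util.Random;

// Random#nextInt(n) only returns values in [0, n), so an index produced
// this way can never trip the range check quoted above.
public class ChooseRandomSketch {
  private final Random r = new Random();

  int pickLeafIndex(int numOfLeaves) {
    return r.nextInt(numOfLeaves); // always a valid, in-range index
  }
}
{code}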

 

> Optimize InnerNodeImpl#getLeaf
> --
>
> Key: HADOOP-16671
> URL: https://issues.apache.org/jira/browse/HADOOP-16671
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16671.001.patch
>
>
> {code:java}
> @Override
> public Node getLeaf(int leafIndex, Node excludedNode) {
>   int count=0;
>   // check if the excluded node a leaf
>   boolean isLeaf = !(excludedNode instanceof InnerNode);
>   // calculate the total number of excluded leaf nodes
>   int numOfExcludedLeaves =
>   isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
>   if (isLeafParent()) { // children are leaves
> if (isLeaf) { // excluded node is a leaf node
>   if (excludedNode != null &&
>   childrenMap.containsKey(excludedNode.getName())) {
> int excludedIndex = children.indexOf(excludedNode);
> if (excludedIndex != -1 && leafIndex >= 0) {
>   // excluded node is one of the children so adjust the leaf index
>   leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> }
>   }
> }
> // range check
> if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
>   return null;
> }
> return children.get(leafIndex);
>   } else {
> {code}
> the code of InnerNodeImpl#getLeaf() is shown above.
> I think it has two problems:
> 1. If childrenMap.containsKey(excludedNode.getName()) returns true, 
> children.indexOf(excludedNode) must return > -1, so is the check 
> (excludedIndex != -1) necessary?
> 2. If excludedIndex == children.size() - 1, then under the current code:
> leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> leafIndex will be out of range and getLeaf returns null, even though there 
> are still nodes that could be returned.
> I think it should add a check for excludedIndex == children.size() - 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-16671) Optimize InnerNodeImpl#getLeaf

2019-10-27 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reopened HADOOP-16671:
--

> Optimize InnerNodeImpl#getLeaf
> --
>
> Key: HADOOP-16671
> URL: https://issues.apache.org/jira/browse/HADOOP-16671
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16671.001.patch
>
>
> {code:java}
> @Override
> public Node getLeaf(int leafIndex, Node excludedNode) {
>   int count=0;
>   // check if the excluded node a leaf
>   boolean isLeaf = !(excludedNode instanceof InnerNode);
>   // calculate the total number of excluded leaf nodes
>   int numOfExcludedLeaves =
>   isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
>   if (isLeafParent()) { // children are leaves
> if (isLeaf) { // excluded node is a leaf node
>   if (excludedNode != null &&
>   childrenMap.containsKey(excludedNode.getName())) {
> int excludedIndex = children.indexOf(excludedNode);
> if (excludedIndex != -1 && leafIndex >= 0) {
>   // excluded node is one of the children so adjust the leaf index
>   leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> }
>   }
> }
> // range check
> if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
>   return null;
> }
> return children.get(leafIndex);
>   } else {
> {code}
> the code of InnerNodeImpl#getLeaf() is shown above.
> I think it has two problems:
> 1. If childrenMap.containsKey(excludedNode.getName()) returns true, 
> children.indexOf(excludedNode) must return > -1, so is the check 
> (excludedIndex != -1) necessary?
> 2. If excludedIndex == children.size() - 1, then under the current code:
> leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> leafIndex will be out of range and getLeaf returns null, even though there 
> are still nodes that could be returned.
> I think it should add a check for excludedIndex == children.size() - 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16671) Optimize InnerNodeImpl#getLeaf

2019-10-27 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16671:
-
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

> Optimize InnerNodeImpl#getLeaf
> --
>
> Key: HADOOP-16671
> URL: https://issues.apache.org/jira/browse/HADOOP-16671
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16671.001.patch
>
>
> {code:java}
> @Override
> public Node getLeaf(int leafIndex, Node excludedNode) {
>   int count=0;
>   // check if the excluded node a leaf
>   boolean isLeaf = !(excludedNode instanceof InnerNode);
>   // calculate the total number of excluded leaf nodes
>   int numOfExcludedLeaves =
>   isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
>   if (isLeafParent()) { // children are leaves
> if (isLeaf) { // excluded node is a leaf node
>   if (excludedNode != null &&
>   childrenMap.containsKey(excludedNode.getName())) {
> int excludedIndex = children.indexOf(excludedNode);
> if (excludedIndex != -1 && leafIndex >= 0) {
>   // excluded node is one of the children so adjust the leaf index
>   leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> }
>   }
> }
> // range check
> if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
>   return null;
> }
> return children.get(leafIndex);
>   } else {
> {code}
> the code of InnerNodeImpl#getLeaf() is shown above.
> I think it has two problems:
> 1. If childrenMap.containsKey(excludedNode.getName()) returns true, 
> children.indexOf(excludedNode) must return > -1, so is the check 
> (excludedIndex != -1) necessary?
> 2. If excludedIndex == children.size() - 1, then under the current code:
> leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> leafIndex will be out of range and getLeaf returns null, even though there 
> are still nodes that could be returned.
> I think it should add a check for excludedIndex == children.size() - 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16671) Optimize InnerNodeImpl#getLeaf

2019-10-26 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16960297#comment-16960297
 ] 

Lisheng Sun commented on HADOOP-16671:
--

Hi [~weichiu] [~ayushtkn] [~hexiaoqiao] [~elgoiri], could you find time to 
review this patch? Thank you.

> Optimize InnerNodeImpl#getLeaf
> --
>
> Key: HADOOP-16671
> URL: https://issues.apache.org/jira/browse/HADOOP-16671
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16671.001.patch
>
>
> {code:java}
> @Override
> public Node getLeaf(int leafIndex, Node excludedNode) {
>   int count=0;
>   // check if the excluded node a leaf
>   boolean isLeaf = !(excludedNode instanceof InnerNode);
>   // calculate the total number of excluded leaf nodes
>   int numOfExcludedLeaves =
>   isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
>   if (isLeafParent()) { // children are leaves
> if (isLeaf) { // excluded node is a leaf node
>   if (excludedNode != null &&
>   childrenMap.containsKey(excludedNode.getName())) {
> int excludedIndex = children.indexOf(excludedNode);
> if (excludedIndex != -1 && leafIndex >= 0) {
>   // excluded node is one of the children so adjust the leaf index
>   leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> }
>   }
> }
> // range check
> if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
>   return null;
> }
> return children.get(leafIndex);
>   } else {
> {code}
> the code InnerNodeImpl#getLeaf() as above
> i think it has two problems:
> 1.if childrenMap.containsKey(excludedNode.getName()) return true, 
> children.indexOf(excludedNode) must return > -1, so if (excludedIndex != -1) 
> is it necessary?
> 2. if excludedindex = children.size() -1
> as current code:
> leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> leafIndex will be out of index and return null. Actually there are nodes that 
> can be returned.
> i think it should add the judgement excludedIndex == children.size() -1



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16671) Optimize InnerNodeImpl#getLeaf

2019-10-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reassigned HADOOP-16671:


Assignee: Lisheng Sun

> Optimize InnerNodeImpl#getLeaf
> --
>
> Key: HADOOP-16671
> URL: https://issues.apache.org/jira/browse/HADOOP-16671
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16671.001.patch
>
>
> {code:java}
> @Override
> public Node getLeaf(int leafIndex, Node excludedNode) {
>   int count=0;
>   // check if the excluded node a leaf
>   boolean isLeaf = !(excludedNode instanceof InnerNode);
>   // calculate the total number of excluded leaf nodes
>   int numOfExcludedLeaves =
>   isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
>   if (isLeafParent()) { // children are leaves
> if (isLeaf) { // excluded node is a leaf node
>   if (excludedNode != null &&
>   childrenMap.containsKey(excludedNode.getName())) {
> int excludedIndex = children.indexOf(excludedNode);
> if (excludedIndex != -1 && leafIndex >= 0) {
>   // excluded node is one of the children so adjust the leaf index
>   leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> }
>   }
> }
> // range check
> if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
>   return null;
> }
> return children.get(leafIndex);
>   } else {
> {code}
> the code of InnerNodeImpl#getLeaf() is shown above.
> I think it has two problems:
> 1. If childrenMap.containsKey(excludedNode.getName()) returns true, 
> children.indexOf(excludedNode) must return > -1, so is the check 
> (excludedIndex != -1) necessary?
> 2. If excludedIndex == children.size() - 1, then under the current code:
> leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> leafIndex will be out of range and getLeaf returns null, even though there 
> are still nodes that could be returned.
> I think it should add a check for excludedIndex == children.size() - 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16671) Optimize InnerNodeImpl#getLeaf

2019-10-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16671:
-
Attachment: HADOOP-16671.001.patch
Status: Patch Available  (was: Open)

> Optimize InnerNodeImpl#getLeaf
> --
>
> Key: HADOOP-16671
> URL: https://issues.apache.org/jira/browse/HADOOP-16671
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16671.001.patch
>
>
> {code:java}
> @Override
> public Node getLeaf(int leafIndex, Node excludedNode) {
>   int count=0;
>   // check if the excluded node a leaf
>   boolean isLeaf = !(excludedNode instanceof InnerNode);
>   // calculate the total number of excluded leaf nodes
>   int numOfExcludedLeaves =
>   isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
>   if (isLeafParent()) { // children are leaves
> if (isLeaf) { // excluded node is a leaf node
>   if (excludedNode != null &&
>   childrenMap.containsKey(excludedNode.getName())) {
> int excludedIndex = children.indexOf(excludedNode);
> if (excludedIndex != -1 && leafIndex >= 0) {
>   // excluded node is one of the children so adjust the leaf index
>   leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> }
>   }
> }
> // range check
> if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
>   return null;
> }
> return children.get(leafIndex);
>   } else {
> {code}
> the code of InnerNodeImpl#getLeaf() is shown above.
> I think it has two problems:
> 1. If childrenMap.containsKey(excludedNode.getName()) returns true, 
> children.indexOf(excludedNode) must return > -1, so is the check 
> (excludedIndex != -1) necessary?
> 2. If excludedIndex == children.size() - 1, then under the current code:
> leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> leafIndex will be out of range and getLeaf returns null, even though there 
> are still nodes that could be returned.
> I think it should add a check for excludedIndex == children.size() - 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16671) Optimize InnerNodeImpl#getLeaf

2019-10-25 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16671:
-
Description: 
{code:java}
@Override
public Node getLeaf(int leafIndex, Node excludedNode) {
  int count=0;
  // check if the excluded node a leaf
  boolean isLeaf = !(excludedNode instanceof InnerNode);
  // calculate the total number of excluded leaf nodes
  int numOfExcludedLeaves =
  isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
  if (isLeafParent()) { // children are leaves
if (isLeaf) { // excluded node is a leaf node
  if (excludedNode != null &&
  childrenMap.containsKey(excludedNode.getName())) {
int excludedIndex = children.indexOf(excludedNode);
if (excludedIndex != -1 && leafIndex >= 0) {
  // excluded node is one of the children so adjust the leaf index
  leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
}
  }
}
// range check
if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
  return null;
}
return children.get(leafIndex);
  } else {
{code}
the code of InnerNodeImpl#getLeaf() is shown above.

I think it has two problems:

1. If childrenMap.containsKey(excludedNode.getName()) returns true, 
children.indexOf(excludedNode) must return > -1, so is the check 
(excludedIndex != -1) necessary?

2. If excludedIndex == children.size() - 1,

then under the current code:

leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;

leafIndex will be out of range and getLeaf returns null, even though there 
are still nodes that could be returned.

I think it should add a check for excludedIndex == children.size() - 1.

  was:
{code:java}
@Override
public Node getLeaf(int leafIndex, Node excludedNode) {
  int count=0;
  // check if the excluded node a leaf
  boolean isLeaf = !(excludedNode instanceof InnerNode);
  // calculate the total number of excluded leaf nodes
  int numOfExcludedLeaves =
  isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
  if (isLeafParent()) { // children are leaves
if (isLeaf) { // excluded node is a leaf node
  if (excludedNode != null &&
  childrenMap.containsKey(excludedNode.getName())) {
int excludedIndex = children.indexOf(excludedNode);
if (excludedIndex != -1 && leafIndex >= 0) {
  // excluded node is one of the children so adjust the leaf index
  leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
}
  }
}
// range check
if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
  return null;
}
return children.get(leafIndex);
  } else {
{code}


> Optimize InnerNodeImpl#getLeaf
> --
>
> Key: HADOOP-16671
> URL: https://issues.apache.org/jira/browse/HADOOP-16671
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Priority: Major
>
> {code:java}
> @Override
> public Node getLeaf(int leafIndex, Node excludedNode) {
>   int count=0;
>   // check if the excluded node a leaf
>   boolean isLeaf = !(excludedNode instanceof InnerNode);
>   // calculate the total number of excluded leaf nodes
>   int numOfExcludedLeaves =
>   isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
>   if (isLeafParent()) { // children are leaves
> if (isLeaf) { // excluded node is a leaf node
>   if (excludedNode != null &&
>   childrenMap.containsKey(excludedNode.getName())) {
> int excludedIndex = children.indexOf(excludedNode);
> if (excludedIndex != -1 && leafIndex >= 0) {
>   // excluded node is one of the children so adjust the leaf index
>   leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> }
>   }
> }
> // range check
> if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
>   return null;
> }
> return children.get(leafIndex);
>   } else {
> {code}
> the code of InnerNodeImpl#getLeaf() is shown above.
> I think it has two problems:
> 1. If childrenMap.containsKey(excludedNode.getName()) returns true, 
> children.indexOf(excludedNode) must return > -1, so is the check 
> (excludedIndex != -1) necessary?
> 2. If excludedIndex == children.size() - 1, then under the current code:
> leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
> leafIndex will be out of range and getLeaf returns null, even though there 
> are still nodes that could be returned.
> I think it should add a check for excludedIndex == children.size() - 1.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16671) Optimize InnerNodeImpl#getLeaf

2019-10-25 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-16671:


 Summary: Optimize InnerNodeImpl#getLeaf
 Key: HADOOP-16671
 URL: https://issues.apache.org/jira/browse/HADOOP-16671
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Lisheng Sun


{code:java}
@Override
public Node getLeaf(int leafIndex, Node excludedNode) {
  int count=0;
  // check if the excluded node a leaf
  boolean isLeaf = !(excludedNode instanceof InnerNode);
  // calculate the total number of excluded leaf nodes
  int numOfExcludedLeaves =
  isLeaf ? 1 : ((InnerNode)excludedNode).getNumOfLeaves();
  if (isLeafParent()) { // children are leaves
if (isLeaf) { // excluded node is a leaf node
  if (excludedNode != null &&
  childrenMap.containsKey(excludedNode.getName())) {
int excludedIndex = children.indexOf(excludedNode);
if (excludedIndex != -1 && leafIndex >= 0) {
  // excluded node is one of the children so adjust the leaf index
  leafIndex = leafIndex>=excludedIndex ? leafIndex+1 : leafIndex;
}
  }
}
// range check
if (leafIndex<0 || leafIndex>=this.getNumOfChildren()) {
  return null;
}
return children.get(leafIndex);
  } else {
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2019-10-18 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16955055#comment-16955055
 ] 

Lisheng Sun commented on HADOOP-8159:
-

I opened HADOOP-16662 to tackle this. Thank you, [~elgoiri].

> NetworkTopology: getLeaf should check for invalid topologies
> 
>
> Key: HADOOP-8159
> URL: https://issues.apache.org/jira/browse/HADOOP-8159
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 1.1.0, 2.0.0-alpha
>
> Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159-b1.007.patch, 
> HADOOP-8159.005.patch, HADOOP-8159.006.patch, HADOOP-8159.007.patch, 
> HADOOP-8159.008.patch, HADOOP-8159.009.patch
>
>
> Currently, in NetworkTopology, getLeaf doesn't do too much validation on the 
> InnerNode object itself. This results in us getting ClassCastException 
> sometimes when the network topology is invalid. We should have a less 
> confusing exception message for this case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16662) Remove invalid judgment in NetworkTopology#add()

2019-10-18 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16662:
-
Status: Patch Available  (was: Open)

> Remove invalid judgment in NetworkTopology#add()
> 
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16662.001.patch
>
>
> The method of NetworkTopology#add() as follow:
> {code:java}
> /** Add a leaf node
>  * Update node counter & rack counter if necessary
>  * @param node node to be added; can be null
>  * @exception IllegalArgumentException if add a node to a leave 
>    or node to be added is not a leaf
>  */
> public void add(Node node) {
>   if (node==null) return;
>   int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
>   netlock.writeLock().lock();
>   try {
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
>   LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
>   NodeBase.getPath(node), newDepth, this);
>   throw new InvalidTopologyException("Failed to add " + 
> NodeBase.getPath(node) +
>   ": You cannot have a rack and a non-rack node at the same " +
>   "level of the network topology.");
> }
> Node rack = getNodeForNetworkLocation(node);
> if (rack != null && !(rack instanceof InnerNode)) {
>   throw new IllegalArgumentException("Unexpected data node " 
>  + node.toString() 
>  + " at an illegal network location");
> }
> if (clusterMap.add(node)) {
>   LOG.info("Adding a new node: "+NodeBase.getPath(node));
>   if (rack == null) {
> incrementRacks();
>   }
>   if (!(node instanceof InnerNode)) {
> if (depthOfAllLeaves == -1) {
>   depthOfAllLeaves = node.getLevel();
> }
>   }
> }
> LOG.debug("NetworkTopology became:\n{}", this);
>   } finally {
> netlock.writeLock().unlock();
>   }
> }
> {code}
> {code:java}
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> if (!(node instanceof InnerNode)) is redundant, since there is already a 
> check before it, as follows:
> if (!(node instanceof InnerNode)) {
>   if (depthOfAllLeaves == -1) {
> depthOfAllLeaves = node.getLevel();
>   }
> }{code}
> so I think the if (!(node instanceof InnerNode)) check should be removed.
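A sketch of the simplification argued in the description above, with minimal 
stand-in types (the real Node/InnerNode live in org.apache.hadoop.net); only 
the control flow relevant to the redundant check is kept:
{code:java}
// Minimal stand-ins for the topology types.
interface Node { int getLevel(); }
interface InnerNode extends Node { }

class TopologyAddSketch {
  private int depthOfAllLeaves = -1;

  void add(Node node) {
    if (node == null) {
      return;
    }
    if (node instanceof InnerNode) {
      throw new IllegalArgumentException("Not allow to add an inner node");
    }
    // ... depth and rack bookkeeping elided ...
    // node cannot be an InnerNode here, so the original
    // !(node instanceof InnerNode) re-check is always true.
    if (depthOfAllLeaves == -1) {
      depthOfAllLeaves = node.getLevel();
    }
  }
}
{code}
Once inner nodes are rejected at the top, node is provably a leaf at the later 
point, so the second instanceof test can never be false.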



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16662) Remove invalid judgment in NetworkTopology#add()

2019-10-18 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16662:
-
Attachment: HADOOP-16662.001.patch

> Remove invalid judgment in NetworkTopology#add()
> 
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
> Attachments: HADOOP-16662.001.patch
>
>
> The method of NetworkTopology#add() as follow:
> {code:java}
> /** Add a leaf node
>  * Update node counter & rack counter if necessary
>  * @param node node to be added; can be null
>  * @exception IllegalArgumentException if add a node to a leave 
>    or node to be added is not a leaf
>  */
> public void add(Node node) {
>   if (node==null) return;
>   int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
>   netlock.writeLock().lock();
>   try {
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
>   LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
>   NodeBase.getPath(node), newDepth, this);
>   throw new InvalidTopologyException("Failed to add " + 
> NodeBase.getPath(node) +
>   ": You cannot have a rack and a non-rack node at the same " +
>   "level of the network topology.");
> }
> Node rack = getNodeForNetworkLocation(node);
> if (rack != null && !(rack instanceof InnerNode)) {
>   throw new IllegalArgumentException("Unexpected data node " 
>  + node.toString() 
>  + " at an illegal network location");
> }
> if (clusterMap.add(node)) {
>   LOG.info("Adding a new node: "+NodeBase.getPath(node));
>   if (rack == null) {
> incrementRacks();
>   }
>   if (!(node instanceof InnerNode)) {
> if (depthOfAllLeaves == -1) {
>   depthOfAllLeaves = node.getLevel();
> }
>   }
> }
> LOG.debug("NetworkTopology became:\n{}", this);
>   } finally {
> netlock.writeLock().unlock();
>   }
> }
> {code}
> {code:java}
> if( node instanceof InnerNode ) {
>   throw new IllegalArgumentException(
> "Not allow to add an inner node: "+NodeBase.getPath(node));
> }
> The later check if (!(node instanceof InnerNode)) is redundant, since the 
> earlier check above already rejects inner nodes; the redundant block is:
> if (!(node instanceof InnerNode)) {
>   if (depthOfAllLeaves == -1) {
> depthOfAllLeaves = node.getLevel();
>   }
> }{code}
> So I think the if (!(node instanceof InnerNode)) check should be removed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16662) Remove invalid judgment in NetworkTopology#add()

2019-10-18 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16662:
-
Description: 
The method NetworkTopology#add() is as follows:
{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave 
   or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
  LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
  NodeBase.getPath(node), newDepth, this);
  throw new InvalidTopologyException("Failed to add " + 
NodeBase.getPath(node) +
  ": You cannot have a rack and a non-rack node at the same " +
  "level of the network topology.");
}
Node rack = getNodeForNetworkLocation(node);
if (rack != null && !(rack instanceof InnerNode)) {
  throw new IllegalArgumentException("Unexpected data node " 
 + node.toString() 
 + " at an illegal network location");
}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}
{code}
{code:java}
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}

The later check if (!(node instanceof InnerNode)) is redundant, since the 
earlier check above already rejects inner nodes; the redundant block is:

if (!(node instanceof InnerNode)) {
  if (depthOfAllLeaves == -1) {
depthOfAllLeaves = node.getLevel();
  }
}{code}
So I think the if (!(node instanceof InnerNode)) check should be removed.

  was:
The method NetworkTopology#add() is as follows:
{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave 
   or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
  LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
  NodeBase.getPath(node), newDepth, this);
  throw new InvalidTopologyException("Failed to add " + 
NodeBase.getPath(node) +
  ": You cannot have a rack and a non-rack node at the same " +
  "level of the network topology.");
}
Node rack = getNodeForNetworkLocation(node);
if (rack != null && !(rack instanceof InnerNode)) {
  throw new IllegalArgumentException("Unexpected data node " 
 + node.toString() 
 + " at an illegal network location");
}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}
{code}
{code:java}
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
The later check if (!(node instanceof InnerNode)) is redundant, since the 
earlier check above already rejects inner nodes; the redundant block is:
if (!(node instanceof InnerNode)) {
  if (depthOfAllLeaves == -1) {
depthOfAllLeaves = node.getLevel();
  }
}{code}

So I think the if (!(node instanceof InnerNode)) check should be removed.


> Remove invalid judgment in NetworkTopology#add()
> 
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun

[jira] [Updated] (HADOOP-16662) Remove invalid judgment in NetworkTopology#add()

2019-10-18 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16662:
-
Description: 
The method NetworkTopology#add() is as follows:
{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave 
   or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
  LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
  NodeBase.getPath(node), newDepth, this);
  throw new InvalidTopologyException("Failed to add " + 
NodeBase.getPath(node) +
  ": You cannot have a rack and a non-rack node at the same " +
  "level of the network topology.");
}
Node rack = getNodeForNetworkLocation(node);
if (rack != null && !(rack instanceof InnerNode)) {
  throw new IllegalArgumentException("Unexpected data node " 
 + node.toString() 
 + " at an illegal network location");
}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}
{code}
{code:java}
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
The later check if (!(node instanceof InnerNode)) is redundant, since the 
earlier check above already rejects inner nodes; the redundant block is:
if (!(node instanceof InnerNode)) {
  if (depthOfAllLeaves == -1) {
depthOfAllLeaves = node.getLevel();
  }
}{code}

So I think the if (!(node instanceof InnerNode)) check should be removed.

  was:
The method NetworkTopology#add() is as follows:
{code:java}
/** Add a leaf node
 * Update node counter  rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave 
   or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
  LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
  NodeBase.getPath(node), newDepth, this);
  throw new InvalidTopologyException("Failed to add " + 
NodeBase.getPath(node) +
  ": You cannot have a rack and a non-rack node at the same " +
  "level of the network topology.");
}
Node rack = getNodeForNetworkLocation(node);
if (rack != null && !(rack instanceof InnerNode)) {
  throw new IllegalArgumentException("Unexpected data node " 
 + node.toString() 
 + " at an illegal network location");
}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}
{code}


> Remove invalid judgment in NetworkTopology#add()
> 
>
> Key: HADOOP-16662
> URL: https://issues.apache.org/jira/browse/HADOOP-16662
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Minor
>
> The method NetworkTopology#add() is as follows:
> {code:java}
> /** Add a leaf node
>  * Update node counter & rack counter if necessary
>  * @param node node to be added; can be null
>  * @exception IllegalArgumentException if add a node to a leave 
>or node to be added is not a leaf
>  */
> public void add(Node node) {
>   if (node==null) return;
>   int 

[jira] [Created] (HADOOP-16662) Remove invalid judgment in NetworkTopology#add()

2019-10-18 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-16662:


 Summary: Remove invalid judgment in NetworkTopology#add()
 Key: HADOOP-16662
 URL: https://issues.apache.org/jira/browse/HADOOP-16662
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Lisheng Sun
Assignee: Lisheng Sun


The method NetworkTopology#add() is as follows:
{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave 
   or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
if ((depthOfAllLeaves != -1) && (depthOfAllLeaves != newDepth)) {
  LOG.error("Error: can't add leaf node {} at depth {} to topology:{}\n",
  NodeBase.getPath(node), newDepth, this);
  throw new InvalidTopologyException("Failed to add " + 
NodeBase.getPath(node) +
  ": You cannot have a rack and a non-rack node at the same " +
  "level of the network topology.");
}
Node rack = getNodeForNetworkLocation(node);
if (rack != null && !(rack instanceof InnerNode)) {
  throw new IllegalArgumentException("Unexpected data node " 
 + node.toString() 
 + " at an illegal network location");
}
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2019-10-17 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954214#comment-16954214
 ] 

Lisheng Sun edited comment on HADOOP-8159 at 10/18/19 2:47 AM:
---

Hi [~cmccabe] [~weichiu] [~elgoiri] [~ayushtkn],

The method NetworkTopology#add() is as follows:

 
{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave 
   or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
  
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}

if (!(node instanceof InnerNode)) {
  if (depthOfAllLeaves == -1) {
depthOfAllLeaves = node.getLevel();
  }
}

The check if (!(node instanceof InnerNode)) above is redundant, since the
earlier check, shown next, already rejects inner nodes:
 if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
  }
 

{code}
{code:java}
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}

The later check if (!(node instanceof InnerNode)) is redundant, since the
earlier check above already rejects inner nodes; the redundant block is:
if (!(node instanceof InnerNode)) {
  if (depthOfAllLeaves == -1) {
depthOfAllLeaves = node.getLevel();
  }
}
{code}
So I think the if (!(node instanceof InnerNode)) check should be removed. 
Please correct me if I am wrong. Thank you.

 


was (Author: leosun08):
Hi [~cmccabe] [~weichiu] [~elgoiri] [~ayushtkn],

The method NetworkTopology#add() is as follows:

 
{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave 
   or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
  
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}

if (!(node instanceof InnerNode)) {
  if (depthOfAllLeaves == -1) {
depthOfAllLeaves = node.getLevel();
  }
}

The check if (!(node instanceof InnerNode)) above is redundant, since the
earlier check, shown next, already rejects inner nodes:
 if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
  }
 

{code}
So I think the if (!(node instanceof InnerNode)) check should be removed. 
Please correct me if I am wrong. Thank you.

 

> NetworkTopology: getLeaf should check for invalid topologies
> 
>
> Key: HADOOP-8159
> URL: https://issues.apache.org/jira/browse/HADOOP-8159
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 1.1.0, 2.0.0-alpha
>
> Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159-b1.007.patch, 
> HADOOP-8159.005.patch, HADOOP-8159.006.patch, HADOOP-8159.007.patch, 
> HADOOP-8159.008.patch, HADOOP-8159.009.patch
>
>
> Currently, in NetworkTopology, getLeaf doesn't do too much validation on the 
> InnerNode object itself. This results in us getting ClassCastException 
> sometimes when the network topology is invalid. We should have a less 
> confusing exception message for this case.
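
As an illustration only (a hypothetical guard, not the attached patches), a 
check of this shape inside the topology traversal would turn the 
ClassCastException into a clearer error; it assumes the same InnerNode and 
NodeBase types used in the add() code quoted above.

{code:java}
// hypothetical validation sketch, not the actual HADOOP-8159 change:
// verify the node really is an inner node before casting and descending,
// instead of failing later with a bare ClassCastException.
if (!(node instanceof InnerNode)) {
  throw new IllegalArgumentException("Expected an inner node at "
      + NodeBase.getPath(node) + " but found a leaf; the network"
      + " topology is invalid");
}
InnerNode inner = (InnerNode) node;
{code}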



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2019-10-17 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16954214#comment-16954214
 ] 

Lisheng Sun commented on HADOOP-8159:
-

Hi [~cmccabe] [~weichiu] [~elgoiri] [~ayushtkn],

The method NetworkTopology#add() is as follows:

 
{code:java}
/** Add a leaf node
 * Update node counter & rack counter if necessary
 * @param node node to be added; can be null
 * @exception IllegalArgumentException if add a node to a leave 
   or node to be added is not a leaf
 */
public void add(Node node) {
  if (node==null) return;
  int newDepth = NodeBase.locationToDepth(node.getNetworkLocation()) + 1;
  netlock.writeLock().lock();
  try {
if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
}
  
if (clusterMap.add(node)) {
  LOG.info("Adding a new node: "+NodeBase.getPath(node));
  if (rack == null) {
incrementRacks();
  }
  if (!(node instanceof InnerNode)) {
if (depthOfAllLeaves == -1) {
  depthOfAllLeaves = node.getLevel();
}
  }
}
LOG.debug("NetworkTopology became:\n{}", this);
  } finally {
netlock.writeLock().unlock();
  }
}

if (!(node instanceof InnerNode)) {
  if (depthOfAllLeaves == -1) {
depthOfAllLeaves = node.getLevel();
  }
}

The check if (!(node instanceof InnerNode)) above is redundant, since the
earlier check, shown next, already rejects inner nodes:
 if( node instanceof InnerNode ) {
  throw new IllegalArgumentException(
"Not allow to add an inner node: "+NodeBase.getPath(node));
  }
 

{code}
So I think the if (!(node instanceof InnerNode)) check should be removed. 
Please correct me if I am wrong. Thank you.

 

> NetworkTopology: getLeaf should check for invalid topologies
> 
>
> Key: HADOOP-8159
> URL: https://issues.apache.org/jira/browse/HADOOP-8159
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
> Fix For: 1.1.0, 2.0.0-alpha
>
> Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159-b1.007.patch, 
> HADOOP-8159.005.patch, HADOOP-8159.006.patch, HADOOP-8159.007.patch, 
> HADOOP-8159.008.patch, HADOOP-8159.009.patch
>
>
> Currently, in NetworkTopology, getLeaf doesn't do too much validation on the 
> InnerNode object itself. This results in us getting ClassCastException 
> sometimes when the network topology is invalid. We should have a less 
> confusing exception message for this case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16643) Update netty4 to the latest 4.1.42

2019-10-08 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16643:
-
Status: Patch Available  (was: Open)

> Update netty4 to the latest 4.1.42
> --
>
> Key: HADOOP-16643
> URL: https://issues.apache.org/jira/browse/HADOOP-16643
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-16643.001.patch
>
>
> The latest netty is out. Let's update it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16643) Update netty4 to the latest 4.1.42

2019-10-08 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16643:
-
Attachment: HADOOP-16643.001.patch

> Update netty4 to the latest 4.1.42
> --
>
> Key: HADOOP-16643
> URL: https://issues.apache.org/jira/browse/HADOOP-16643
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Priority: Major
> Attachments: HADOOP-16643.001.patch
>
>
> The latest netty is out. Let's update it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16466) Clean up the Assert usage in tests

2019-10-03 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16466:
-
Attachment: HADOOP-16466.001.patch
Status: Patch Available  (was: Open)

> Clean up the Assert usage in tests
> --
>
> Key: HADOOP-16466
> URL: https://issues.apache.org/jira/browse/HADOOP-16466
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Reporter: Fengnan Li
>Assignee: Fengnan Li
>Priority: Major
> Attachments: HADOOP-16466.001.patch
>
>
> This ticket started with https://issues.apache.org/jira/browse/HDFS-14449, 
> and we would like to clean up all of the Assert usage in tests to make the 
> repo cleaner. This mainly means using static imports for the Assert 
> functions and calling them without the explicit *Assert.* prefix.
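
A representative before/after sketch (SumTest is a hypothetical test, not a 
specific hunk from the patch):

{code:java}
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SumTest {
  @Test
  public void testSum() {
    // previously: org.junit.Assert.assertEquals(4, 2 + 2);
    assertEquals(4, 2 + 2);  // static import drops the Assert. prefix
  }
}
{code}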



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-09-24 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16600:
-
Attachment: HADOOP-16600.branch-3.1.v1.patch

> StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1
> -
>
> Key: HADOOP-16600
> URL: https://issues.apache.org/jira/browse/HADOOP-16600
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16600.branch-3.1.v1.patch
>
>
> For details, see HADOOP-15398.
> Problem: hadoop trunk compilation is failing.
> Root cause:
> The compilation error comes from 
> org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase: "The method 
> getArgumentAt(int, Class) is undefined for the type InvocationOnMock".
> StagingTestBase uses the getArgumentAt(int, Class) method, which is not 
> available in mockito-all 1.8.5; it is only available from version 
> 2.0.0-beta onwards, as in the following code:
> {code:java}
> InitiateMultipartUploadRequest req = invocation.getArgumentAt(
> 0, InitiateMultipartUploadRequest.class);
> {code}
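
A minimal Mockito 1.8.5-compatible rewrite of that call (a sketch, assuming 
the first argument really is the upload request, as in the code above) is to 
index into getArguments() and cast:

{code:java}
// Mockito 1.x: InvocationOnMock#getArguments() returns Object[],
// so fetch the argument by index and cast it explicitly.
InitiateMultipartUploadRequest req =
    (InitiateMultipartUploadRequest) invocation.getArguments()[0];
{code}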



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-09-24 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16600:
-
Description: 
For details, see HADOOP-15398.
Problem: hadoop trunk compilation is failing.
Root cause:
The compilation error comes from 
org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase: "The method 
getArgumentAt(int, Class) is undefined for the type InvocationOnMock".

StagingTestBase uses the getArgumentAt(int, Class) method, which is not 
available in mockito-all 1.8.5; it is only available from version 
2.0.0-beta onwards:


{code:java}
InitiateMultipartUploadRequest req = invocation.getArgumentAt(
0, InitiateMultipartUploadRequest.class);
{code}


  was:
For details, see HADOOP-15398.
Problem: hadoop trunk compilation is failing.
Root cause:
The compilation error comes from 
org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase: "The method 
getArgumentAt(int, Class) is undefined for the type InvocationOnMock".

StagingTestBase uses the getArgumentAt(int, Class) method, which is not 
available in mockito-all 1.8.5; it is only available from version 
2.0.0-beta onwards.


> StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1
> -
>
> Key: HADOOP-16600
> URL: https://issues.apache.org/jira/browse/HADOOP-16600
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Lisheng Sun
>Priority: Major
>
> For details, see HADOOP-15398.
> Problem: hadoop trunk compilation is failing.
> Root cause:
> The compilation error comes from 
> org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase: "The method 
> getArgumentAt(int, Class) is undefined for the type InvocationOnMock".
> StagingTestBase uses the getArgumentAt(int, Class) method, which is not 
> available in mockito-all 1.8.5; it is only available from version 
> 2.0.0-beta onwards:
> {code:java}
> InitiateMultipartUploadRequest req = invocation.getArgumentAt(
> 0, InitiateMultipartUploadRequest.class);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-09-24 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16600:
-
Description: 
For details, see HADOOP-15398.
Problem: hadoop trunk compilation is failing.
Root cause:
The compilation error comes from 
org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase: "The method 
getArgumentAt(int, Class) is undefined for the type InvocationOnMock".

StagingTestBase uses the getArgumentAt(int, Class) method, which is not 
available in mockito-all 1.8.5; it is only available from version 
2.0.0-beta onwards, as in the following code:
{code:java}
InitiateMultipartUploadRequest req = invocation.getArgumentAt(
0, InitiateMultipartUploadRequest.class);
{code}


  was:
For details, see HADOOP-15398.
Problem: hadoop trunk compilation is failing.
Root cause:
The compilation error comes from 
org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase: "The method 
getArgumentAt(int, Class) is undefined for the type InvocationOnMock".

StagingTestBase uses the getArgumentAt(int, Class) method, which is not 
available in mockito-all 1.8.5; it is only available from version 
2.0.0-beta onwards:


{code:java}
InitiateMultipartUploadRequest req = invocation.getArgumentAt(
0, InitiateMultipartUploadRequest.class);
{code}



> StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1
> -
>
> Key: HADOOP-16600
> URL: https://issues.apache.org/jira/browse/HADOOP-16600
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Lisheng Sun
>Priority: Major
>
> For details, see HADOOP-15398.
> Problem: hadoop trunk compilation is failing.
> Root cause:
> The compilation error comes from 
> org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase: "The method 
> getArgumentAt(int, Class) is undefined for the type InvocationOnMock".
> StagingTestBase uses the getArgumentAt(int, Class) method, which is not 
> available in mockito-all 1.8.5; it is only available from version 
> 2.0.0-beta onwards, as in the following code:
> {code:java}
> InitiateMultipartUploadRequest req = invocation.getArgumentAt(
> 0, InitiateMultipartUploadRequest.class);
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-09-24 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun resolved HADOOP-16600.
--
Resolution: Duplicate

> StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1
> -
>
> Key: HADOOP-16600
> URL: https://issues.apache.org/jira/browse/HADOOP-16600
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Lisheng Sun
>Priority: Major
>
> For details, see HADOOP-15398.
> Problem: hadoop trunk compilation is failing.
> Root cause:
> The compilation error comes from 
> org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase: "The method 
> getArgumentAt(int, Class) is undefined for the type InvocationOnMock".
> StagingTestBase uses the getArgumentAt(int, Class) method, which is not 
> available in mockito-all 1.8.5; it is only available from version 
> 2.0.0-beta onwards.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-09-24 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun reopened HADOOP-16600:
--

> StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1
> -
>
> Key: HADOOP-16600
> URL: https://issues.apache.org/jira/browse/HADOOP-16600
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Lisheng Sun
>Priority: Major
>
> For details, see HADOOP-15398.
> Problem: hadoop trunk compilation is failing.
> Root cause:
> The compilation error comes from 
> org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase: "The method 
> getArgumentAt(int, Class) is undefined for the type InvocationOnMock".
> StagingTestBase uses the getArgumentAt(int, Class) method, which is not 
> available in mockito-all 1.8.5; it is only available from version 
> 2.0.0-beta onwards.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-09-24 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16600:
-
Description: 
For details, see HADOOP-15398.
Problem: hadoop trunk compilation is failing.
Root cause:
The compilation error comes from 
org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase: "The method 
getArgumentAt(int, Class) is undefined for the type InvocationOnMock".

StagingTestBase uses the getArgumentAt(int, Class) method, which is not 
available in mockito-all 1.8.5; it is only available from version 
2.0.0-beta onwards.

> StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1
> -
>
> Key: HADOOP-16600
> URL: https://issues.apache.org/jira/browse/HADOOP-16600
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.1.0, 3.1.1, 3.1.2
>Reporter: Lisheng Sun
>Priority: Major
>
> For details, see HADOOP-15398.
> Problem: hadoop trunk compilation is failing.
> Root cause:
> The compilation error comes from 
> org.apache.hadoop.fs.s3a.commit.staging.StagingTestBase: "The method 
> getArgumentAt(int, Class) is undefined for the type InvocationOnMock".
> StagingTestBase uses the getArgumentAt(int, Class) method, which is not 
> available in mockito-all 1.8.5; it is only available from version 
> 2.0.0-beta onwards.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16600) StagingTestBase uses methods not available in Mockito 1.8.5 in branch-3.1

2019-09-24 Thread Lisheng Sun (Jira)
Lisheng Sun created HADOOP-16600:


 Summary: StagingTestBase uses methods not available in Mockito 
1.8.5 in branch-3.1
 Key: HADOOP-16600
 URL: https://issues.apache.org/jira/browse/HADOOP-16600
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.1.2, 3.1.1, 3.1.0
Reporter: Lisheng Sun






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16553) ipc.client.connect.max.retries.on.timeouts default value is too many

2019-09-21 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16935219#comment-16935219
 ] 

Lisheng Sun commented on HADOOP-16553:
--

Thanks [~elgoiri] for your comments.

I agree people have many questions about the retry count.

In fact, I also wonder why the default was set to 45 retries in the first 
place.

> ipc.client.connect.max.retries.on.timeouts default value is too many
> 
>
> Key: HADOOP-16553
> URL: https://issues.apache.org/jira/browse/HADOOP-16553
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16553.001.patch, HADOOP-16553.002.patch
>
>
> The current default number of ipc connection retries on socket timeout is 
> 45, and the default socket timeout is 20s.
> So if packets are lost on the receiving machine and it never responds to 
> the client, the client may wait up to 15 minutes.
> I think the default number of ipc connection retries should be decreased.
> {code:java}
> public static final String  
> IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY =
>   "ipc.client.connect.max.retries.on.timeouts";
> /** Default value for IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY */
> public static final int  
> IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_DEFAULT = 45;
> public static final String  IPC_CLIENT_CONNECT_TIMEOUT_KEY =
>   "ipc.client.connect.timeout";
> /** Default value for IPC_CLIENT_CONNECT_TIMEOUT_KEY */
> public static final int IPC_CLIENT_CONNECT_TIMEOUT_DEFAULT = 2; // 20s
> {code}
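
The arithmetic behind the 15 minutes: 45 retries x 20s connect timeout = 
900s. A sketch of lowering the retry count on the client side through the 
Configuration API (IpcRetryExample and the value 3 are illustrative, not 
from the patch):

{code:java}
import org.apache.hadoop.conf.Configuration;

public class IpcRetryExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // default: 45 retries x 20s connect timeout = 900s = 15 min worst case
    conf.setInt("ipc.client.connect.max.retries.on.timeouts", 3);
    // worst case is now about 3 x 20s = 60s before the connect attempt fails
  }
}
{code}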



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16553) ipc.client.connect.max.retries.on.timeouts default value is too many

2019-09-19 Thread Lisheng Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933111#comment-16933111
 ] 

Lisheng Sun commented on HADOOP-16553:
--

Hi [~elgoiri] [~ayushtkn] [~xkrogen] [~jojochuang], could you find time to 
help review this patch? Thank you.

> ipc.client.connect.max.retries.on.timeouts default value is too many
> 
>
> Key: HADOOP-16553
> URL: https://issues.apache.org/jira/browse/HADOOP-16553
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16553.001.patch, HADOOP-16553.002.patch
>
>
> The current default number of ipc connection retries on socket timeout is 
> 45, and the default socket timeout is 20s.
> So if packets are lost on the receiving machine and it never responds to 
> the client, the client may wait up to 15 minutes.
> I think the default number of ipc connection retries should be decreased.
> {code:java}
> public static final String  
> IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY =
>   "ipc.client.connect.max.retries.on.timeouts";
> /** Default value for IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY */
> public static final int  
> IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_DEFAULT = 45;
> public static final String  IPC_CLIENT_CONNECT_TIMEOUT_KEY =
>   "ipc.client.connect.timeout";
> /** Default value for IPC_CLIENT_CONNECT_TIMEOUT_KEY */
> public static final int IPC_CLIENT_CONNECT_TIMEOUT_DEFAULT = 2; // 20s
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16553) ipc.client.connect.max.retries.on.timeouts default value is too many

2019-09-19 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16553:
-
Attachment: HADOOP-16553.002.patch

> ipc.client.connect.max.retries.on.timeouts default value is too many
> 
>
> Key: HADOOP-16553
> URL: https://issues.apache.org/jira/browse/HADOOP-16553
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16553.001.patch, HADOOP-16553.002.patch
>
>
> The current default number of ipc connection retries on socket timeout is 
> 45, and the default socket timeout is 20s.
> So if packets are lost on the receiving machine and it never responds to 
> the client, the client may wait up to 15 minutes.
> I think the default number of ipc connection retries should be decreased.
> {code:java}
> public static final String  
> IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY =
>   "ipc.client.connect.max.retries.on.timeouts";
> /** Default value for IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY */
> public static final int  
> IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_DEFAULT = 45;
> public static final String  IPC_CLIENT_CONNECT_TIMEOUT_KEY =
>   "ipc.client.connect.timeout";
> /** Default value for IPC_CLIENT_CONNECT_TIMEOUT_KEY */
> public static final int IPC_CLIENT_CONNECT_TIMEOUT_DEFAULT = 2; // 20s
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16553) ipc.client.connect.max.retries.on.timeouts default value is too many

2019-09-15 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16553:
-
Attachment: HADOOP-16553.001.patch
Status: Patch Available  (was: Open)

> ipc.client.connect.max.retries.on.timeouts default value is too many
> 
>
> Key: HADOOP-16553
> URL: https://issues.apache.org/jira/browse/HADOOP-16553
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
> Attachments: HADOOP-16553.001.patch
>
>
> The current default number of ipc connection retries on socket timeout is 
> 45, and the default socket timeout is 20s.
> So if packets are lost on the receiving machine and it never responds to 
> the client, the client may wait up to 15 minutes.
> I think the default number of ipc connection retries should be decreased.
> {code:java}
> public static final String  
> IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY =
>   "ipc.client.connect.max.retries.on.timeouts";
> /** Default value for IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY */
> public static final int  
> IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_DEFAULT = 45;
> public static final String  IPC_CLIENT_CONNECT_TIMEOUT_KEY =
>   "ipc.client.connect.timeout";
> /** Default value for IPC_CLIENT_CONNECT_TIMEOUT_KEY */
> public static final int IPC_CLIENT_CONNECT_TIMEOUT_DEFAULT = 2; // 20s
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16553) ipc.client.connect.max.retries.on.timeouts default value is too many

2019-09-07 Thread Lisheng Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lisheng Sun updated HADOOP-16553:
-
Description: 
The current default number of ipc connection retries on socket timeout is 45, 
and the default socket timeout is 20s.
So if packets are lost on the receiving machine and it never responds to the 
client, the client may wait up to 15 minutes.
I think the default number of ipc connection retries should be decreased.
{code:java}
public static final String  
IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY =
  "ipc.client.connect.max.retries.on.timeouts";
/** Default value for IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY */
public static final int  
IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_DEFAULT = 45;

public static final String  IPC_CLIENT_CONNECT_TIMEOUT_KEY =
  "ipc.client.connect.timeout";
/** Default value for IPC_CLIENT_CONNECT_TIMEOUT_KEY */
public static final int IPC_CLIENT_CONNECT_TIMEOUT_DEFAULT = 2; // 20s
{code}

  was:
The current default number of ipc connection retries on socket timeout is 45, 
and the default socket timeout is 20s.
So if packets are lost on the receiving machine, the client may wait up to 15 
minutes.
I think the default number of ipc connection retries should be decreased.
{code:java}
public static final String  
IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY =
  "ipc.client.connect.max.retries.on.timeouts";
/** Default value for IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY */
public static final int  
IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_DEFAULT = 45;

public static final String  IPC_CLIENT_CONNECT_TIMEOUT_KEY =
  "ipc.client.connect.timeout";
/** Default value for IPC_CLIENT_CONNECT_TIMEOUT_KEY */
public static final int IPC_CLIENT_CONNECT_TIMEOUT_DEFAULT = 2; // 20s
{code}


> ipc.client.connect.max.retries.on.timeouts default value is too many
> 
>
> Key: HADOOP-16553
> URL: https://issues.apache.org/jira/browse/HADOOP-16553
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Lisheng Sun
>Assignee: Lisheng Sun
>Priority: Major
>
> The current default number of ipc connection retries on socket timeout is 
> 45, and the default socket timeout is 20s.
> So if packets are lost on the receiving machine and it never responds to 
> the client, the client may wait up to 15 minutes.
> I think the default number of ipc connection retries should be decreased.
> {code:java}
> public static final String  
> IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY =
>   "ipc.client.connect.max.retries.on.timeouts";
> /** Default value for IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_KEY */
> public static final int  
> IPC_CLIENT_CONNECT_MAX_RETRIES_ON_SOCKET_TIMEOUTS_DEFAULT = 45;
> public static final String  IPC_CLIENT_CONNECT_TIMEOUT_KEY =
>   "ipc.client.connect.timeout";
> /** Default value for IPC_CLIENT_CONNECT_TIMEOUT_KEY */
> public static final int IPC_CLIENT_CONNECT_TIMEOUT_DEFAULT = 2; // 20s
> {code}



--
This message was sent by Atlassian Jira
(v8.3.2#803003)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


