[jira] [Commented] (HADOOP-17098) Reduce Guava dependency in Hadoop source code

2021-12-09 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17456529#comment-17456529
 ] 

Ahmed Hussein commented on HADOOP-17098:


Thanks [~ayushtkn],
I completely understand your point.
There is one important factor not mentioned here: many of these features have 
become part of the JDK, which makes the Guava versions an unnecessary redundancy.
Also, based on what I have seen in the code, using the Guava library became the 
norm (probably due to coding style and copy-paste).
For example, I saw hundreds of places that initialize lists and sets through 
Guava for no apparent reason. 
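As an illustration (a made-up snippet, not code from the Hadoop repo), the Guava-style initializers reduce to plain JDK constructors once the Java 7 diamond operator is available:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CollectionInit {
    public static void main(String[] args) {
        // Guava style: List<String> names = Lists.newArrayList();
        // The JDK diamond operator gives the same type inference:
        List<String> names = new ArrayList<>();
        names.add("alpha");

        // Guava style: Set<String> tags = Sets.newHashSet("a", "b");
        // Plain JDK equivalent:
        Set<String> tags = new HashSet<>(Arrays.asList("a", "b"));

        System.out.println(names + " " + tags);
    }
}
```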

bq. We would load our new classes now? How much impact the replacement will 
have. Wouldn't this be true for all 3rd Party Libraries 

It is true that we had to add wrappers to match the API. However, those new 
wrappers use JDK features instead of Guava classes. 
Regarding 3rd party libraries, this would not be true if the library adds value 
to the code. If a library provides an API that is already in the JDK, then we 
should revisit and question the usage of that library.
The problem with Guava is that its usages are unnecessary in many places; they 
provide almost nothing beyond what JDK8+ offers.

bq. Now we have implemented these and that too on similar lines, now if there 
is a problem. now we will be also responsible. Along with core hadoop stuff, we 
have to manage this as well.

I do not see that Guava really did any better regarding security. Upgrading the 
Guava dependency is always a pain, and Hadoop gets stuck with a vulnerable Guava 
release for quite some time.
We implemented very basic wrappers that call JDK classes (Preconditions, 
Supplier, Predicate, etc.). If there is a security issue, then it is most 
probably a JDK-related issue.
 
bq. On a lighter note: Does this mean the code we write doesn't need 
performance analysis?

Of course we need to evaluate the code to identify the hot paths. This enables 
us to improve execution time and space usage as needed; for example, optimizing 
a loop, pool allocation, or replacing a lambda.
With Guava it is a different story, because it provides an entire package: Guava 
collections are different from Java collections and can give completely 
different performance. We would still have the same evaluation of the code 
structure, but without fine-grained control over Guava. 


> Reduce Guava dependency in Hadoop source code
> ---------------------------------------------
>
> Key: HADOOP-17098
> URL: https://issues.apache.org/jira/browse/HADOOP-17098
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Relying on Guava implementation in Hadoop has been painful due to 
> compatibility and vulnerability issues.
>  Guava updates tend to break/deprecate APIs. This made it hard to maintain 
> backward compatibility across hadoop versions and clients/downstreams.
> Since 3.x uses java8+, the java 8 features should be preferred to Guava, 
> reducing the footprint and giving stability to the source code.
> This jira should serve as an umbrella toward an incremental effort to reduce 
> the usage of Guava in the source code and to create subtasks to replace Guava 
> classes with Java features.
> Furthermore, it will be good to add a rule in the pre-commit build to warn 
> against introducing a new Guava usage in certain modules.
> Anyone willing to take part in this code refactoring has to:
>  # Focus on one module at a time in order to reduce the conflicts and the 
> size of the patch. This will significantly help the reviewers.
>  # Run all the unit tests related to the module being affected by the change. 
> It is critical to verify that any change will not break the unit tests, or 
> cause a stable test case to become flaky.
>  # Merge should be done to the following branches:  trunk, branch-3.3, 
> branch-3.2, branch-3.1
>  
> A list of sub-tasks replacing Guava APIs with java8 features:
> {code:java}
> com.google.common.io.BaseEncoding#base64()        java.util.Base64
> com.google.common.io.BaseEncoding#base64Url()     java.util.Base64
> com.google.common.base.Joiner.on()                java.lang.String#join() or
>                                                   java.util.stream.Collectors#joining()
> com.google.common.base.Optional#of()              java.util.Optional#of()
> com.google.common.base.Optional#absent()          java.util.Optional#empty()
> com.google.common.base.Optional#fromNullable()    java.util.Optional#ofNullable()
> com.google.common.base.Optional                   java.util.Optional
> com.google.common.base.Predicate                  java.util.function.Predicate
> com.google.common.base.Function                   java.util.function.Function
> com.google.common.base.Supplier                   java.util.function.Supplier
> {code}

[jira] [Resolved] (HADOOP-17970) unguava: remove Preconditions from hdfs-projects module

2021-10-25 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein resolved HADOOP-17970.

Fix Version/s: 3.4.0
   Resolution: Fixed

> unguava: remove Preconditions from hdfs-projects module
> ---
>
> Key: HADOOP-17970
> URL: https://issues.apache.org/jira/browse/HADOOP-17970
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Replace guava Preconditions with internal implementations that rely on java8+ 
> APIs in hadoop.util for all modules in hadoop-hdfs-project



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17970) unguava: remove Preconditions from hdfs-projects module

2021-10-22 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17970:
---
Parent: HADOOP-17098
Issue Type: Sub-task  (was: Bug)

> unguava: remove Preconditions from hdfs-projects module
> ---
>
> Key: HADOOP-17970
> URL: https://issues.apache.org/jira/browse/HADOOP-17970
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Replace guava Preconditions with internal implementations that rely on java8+ 
> APIs in hadoop.util for all modules in hadoop-hdfs-project






[jira] [Updated] (HADOOP-17970) unguava: remove Preconditions from hdfs-projects module

2021-10-19 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17970:
---
Summary: unguava: remove Preconditions from hdfs-projects module  (was: 
unguava: remove Preconditions from hdfs-project module)

> unguava: remove Preconditions from hdfs-projects module
> ---
>
> Key: HADOOP-17970
> URL: https://issues.apache.org/jira/browse/HADOOP-17970
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Replace guava Preconditions with internal implementations that rely on java8+ 
> APIs in hadoop.util for all modules in hadoop-hdfs-project






[jira] [Work started] (HADOOP-17970) unguava: remove Preconditions from hdfs-project module

2021-10-19 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17970 started by Ahmed Hussein.
--
> unguava: remove Preconditions from hdfs-project module
> --
>
> Key: HADOOP-17970
> URL: https://issues.apache.org/jira/browse/HADOOP-17970
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Replace guava Preconditions with internal implementations that rely on java8+ 
> APIs in hadoop.util for all modules in hadoop-hdfs-project






[jira] [Created] (HADOOP-17970) unguava: remove Preconditions from hdfs-project module

2021-10-19 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17970:
--

 Summary: unguava: remove Preconditions from hdfs-project module
 Key: HADOOP-17970
 URL: https://issues.apache.org/jira/browse/HADOOP-17970
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


Replace guava Preconditions with internal implementations that rely on java8+ 
APIs in hadoop.util for all modules in hadoop-hdfs-project








[jira] [Updated] (HADOOP-17102) Add checkstyle rule to prevent further usage of Guava classes

2021-10-17 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17102:
---
Resolution: Done
Status: Resolved  (was: Patch Available)

This won't be needed anymore as the banned-imports were added to the maven pom 
file.

> Add checkstyle rule to prevent further usage of Guava classes
> -
>
> Key: HADOOP-17102
> URL: https://issues.apache.org/jira/browse/HADOOP-17102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, precommit
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17102.001.patch, HADOOP-17102.002.patch
>
>
> We should have precommit rules to prevent further usage of Guava classes whose 
> equivalents are available in Java8+
> A list replacing Guava APIs with java8 features:
> {code:java}
> com.google.common.io.BaseEncoding#base64()        java.util.Base64
> com.google.common.io.BaseEncoding#base64Url()     java.util.Base64
> com.google.common.base.Joiner.on()                java.lang.String#join() or
>                                                   java.util.stream.Collectors#joining()
> com.google.common.base.Optional#of()              java.util.Optional#of()
> com.google.common.base.Optional#absent()          java.util.Optional#empty()
> com.google.common.base.Optional#fromNullable()    java.util.Optional#ofNullable()
> com.google.common.base.Optional                   java.util.Optional
> com.google.common.base.Predicate                  java.util.function.Predicate
> com.google.common.base.Function                   java.util.function.Function
> com.google.common.base.Supplier                   java.util.function.Supplier
> {code}
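Most of the mappings in the list above are one-line swaps. A hedged illustration (variable names are invented; the JDK calls are the ones listed in the table):

```java
import java.util.Base64;
import java.util.Optional;
import java.util.function.Predicate;
import java.util.function.Supplier;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class GuavaToJdk {
    public static void main(String[] args) {
        // BaseEncoding#base64() -> java.util.Base64
        String encoded = Base64.getEncoder().encodeToString("hadoop".getBytes());

        // Joiner.on(",").join(...) -> String#join or Collectors#joining
        String joined = String.join(",", "a", "b", "c");
        String collected = Stream.of("a", "b", "c").collect(Collectors.joining(","));

        // Optional#fromNullable -> Optional#ofNullable
        Optional<String> maybe = Optional.ofNullable(System.getenv("NO_SUCH_VAR"));

        // guava Predicate/Supplier -> java.util.function equivalents
        Predicate<String> nonEmpty = s -> !s.isEmpty();
        Supplier<String> greeting = () -> "hello";

        System.out.println(encoded + " " + joined + " " + collected + " "
                + maybe.isPresent() + " " + nonEmpty.test(joined) + " " + greeting.get());
    }
}
```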






[jira] [Resolved] (HADOOP-17960) hadoop-auth module cannot import non-guava implementation in hatoop util

2021-10-11 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein resolved HADOOP-17960.

Resolution: Won't Fix

> hadoop-auth module cannot import non-guava implementation in hatoop util
> 
>
> Key: HADOOP-17960
> URL: https://issues.apache.org/jira/browse/HADOOP-17960
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Priority: Major
>
> hadoop-common provides several util implementations in 
> {{org.apache.hadoop.util.*}}. Since hadoop-common depends on hadoop-auth, none 
> of those utility implementations can be used within hadoop-auth.
> There are several options:
> * similar to {{hadoop-annotations}}, generic and utility implementations such 
> as maps, Strings, Preconditions, etc. could be moved to a new common-util 
> module that has no dependency on other modules.
> * an easier fix is to manually replace the guava calls in the hadoop-auth 
> module without importing {{hadoop.util.*}}. Only a few calls need to be 
> manually replaced: {{Splitter}}, {{Preconditions.checkNotNull}}, and 
> {{Preconditions.checkArgument}}
> CC: [~vjasani] , [~ste...@apache.org], [~tasanuma]






[jira] [Updated] (HADOOP-17123) remove guava Preconditions from Hadoop-common-project modules

2021-10-10 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17123:
---
Summary: remove guava Preconditions from Hadoop-common-project modules  
(was: remove guava Preconditions from Hadoop-common module)

> remove guava Preconditions from Hadoop-common-project modules
> -
>
> Key: HADOOP-17123
> URL: https://issues.apache.org/jira/browse/HADOOP-17123
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Replace guava Preconditions with internal implementations that rely on java8+ 
> APIs in the hadoop






[jira] [Created] (HADOOP-17960) hadoop-auth module cannot import non-guava implementation in hatoop util

2021-10-10 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17960:
--

 Summary: hadoop-auth module cannot import non-guava implementation 
in hatoop util
 Key: HADOOP-17960
 URL: https://issues.apache.org/jira/browse/HADOOP-17960
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmed Hussein


hadoop-common provides several util implementations in 
{{org.apache.hadoop.util.*}}. Since hadoop-common depends on hadoop-auth, none 
of those utility implementations can be used within hadoop-auth.

There are several options:
* similar to {{hadoop-annotations}}, generic and utility implementations such as 
maps, Strings, Preconditions, etc. could be moved to a new common-util module 
that has no dependency on other modules.
* an easier fix is to manually replace the guava calls in the hadoop-auth module 
without importing {{hadoop.util.*}}. Only a few calls need to be manually 
replaced: {{Splitter}}, {{Preconditions.checkNotNull}}, and 
{{Preconditions.checkArgument}}

CC: [~vjasani] , [~ste...@apache.org], [~tasanuma]
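The second, easier-fix option boils down to a handful of mechanical swaps. A sketch of what those replacements could look like (the class and method names here are hypothetical; only the JDK calls are real):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class AuthUtilSketch {
    // Guava: Preconditions.checkNotNull(token, "token")
    // JDK:   Objects.requireNonNull(token, "token")
    static String requireToken(String token) {
        return Objects.requireNonNull(token, "token");
    }

    // Guava: Splitter.on(',').trimResults().omitEmptyStrings().split(csv)
    // JDK sketch: String#split plus explicit trimming and filtering.
    static List<String> splitCsv(String csv) {
        return Arrays.stream(csv.split(","))
                .map(String::trim)
                .filter(s -> !s.isEmpty())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(requireToken("abc"));
        System.out.println(splitCsv(" a, ,b ,c"));
    }
}
```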








[jira] [Work started] (HADOOP-17930) implement non-guava Precondition checkState

2021-10-05 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17930 started by Ahmed Hussein.
--
> implement non-guava Precondition checkState
> ---
>
> Key: HADOOP-17930
> URL: https://issues.apache.org/jira/browse/HADOOP-17930
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0, 3.2.3, 3.3.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In order to replace Guava Preconditions, we need to implement our own 
> versions of the API.
>  This Jira is to add the implementation {{checkState}} to the existing class 
> {{org.apache.hadoop.util.Preconditions}}
> +The plan is as follows+
>  * implement {{org.apache.hadoop.util.Preconditions.checkState}} with the 
> minimum set of interfaces used in the current hadoop repo.
>  * we can replace {{guava.Preconditions}} by 
> {{org.apache.hadoop.util.Preconditions}} once all the interfaces have been 
> implemented (both this jira and HADOOP-17929 are complete).
>  * We need the change to be easy to backport to 3.x.
> previous jiras:
>  * HADOOP-17126 was created to implement CheckNotNull.
>  * HADOOP-17929 implementing checkArgument.
> CC: [~ste...@apache.org], [~vjasani]
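A minimal {{checkState}} along the lines of the plan above might look like this (a sketch under the stated goals, not the actual org.apache.hadoop.util.Preconditions source):

```java
import java.util.function.Supplier;

public final class PreconditionsSketch {
    private PreconditionsSketch() { }

    // Mirrors Guava's checkState(boolean): throw IllegalStateException on failure.
    public static void checkState(boolean expression) {
        if (!expression) {
            throw new IllegalStateException();
        }
    }

    // Lazy-message variant: the Supplier runs only on the failure path,
    // so no message string is built when the check passes.
    public static void checkState(boolean expression, Supplier<String> message) {
        if (!expression) {
            String msg;
            try {
                msg = message == null ? null : message.get();
            } catch (RuntimeException e) {
                // A broken message supplier must not mask the state failure.
                msg = null;
            }
            throw new IllegalStateException(msg);
        }
    }
}
```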






[jira] [Commented] (HADOOP-17930) implement non-guava Precondition checkState

2021-09-23 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17419426#comment-17419426
 ] 

Ahmed Hussein commented on HADOOP-17930:


Thanks [~bhalchandrap] for pointing that out. This is very helpful, as we could 
use it as a reference.
 My approach so far in these non-guava classes is:
 * minimizing the delta changes by using the same API names. I initially went 
for {{Validate.java}}, but we changed it later to {{Preconditions.java}}.
 * not introducing new dependencies
 * implementing the minimum necessary to cover all the calls throughout the 
hadoop code.
 * avoiding exceptions that could change the original behavior (e.g., 
ill-formatted message strings).

 

> implement non-guava Precondition checkState
> ---
>
> Key: HADOOP-17930
> URL: https://issues.apache.org/jira/browse/HADOOP-17930
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0, 3.2.3, 3.3.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> In order to replace Guava Preconditions, we need to implement our own 
> versions of the API.
>  This Jira is to add the implementation {{checkState}} to the existing class 
> {{org.apache.hadoop.util.Preconditions}}
> +The plan is as follows+
>  * implement {{org.apache.hadoop.util.Preconditions.checkState}} with the 
> minimum set of interfaces used in the current hadoop repo.
>  * we can replace {{guava.Preconditions}} by 
> {{org.apache.hadoop.util.Preconditions}} once all the interfaces have been 
> implemented (both this jira and HADOOP-17929 are complete).
>  * We need the change to be easy to backport to 3.x.
> previous jiras:
>  * HADOOP-17126 was created to implement CheckNotNull.
>  * HADOOP-17929 implementing checkArgument.
> CC: [~ste...@apache.org], [~vjasani]






[jira] [Updated] (HADOOP-17123) remove guava Preconditions from Hadoop-common module

2021-09-22 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17123:
---
Labels:   (was: pull-request-available)

> remove guava Preconditions from Hadoop-common module
> 
>
> Key: HADOOP-17123
> URL: https://issues.apache.org/jira/browse/HADOOP-17123
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Replace guava Preconditions with internal implementations that rely on java8+ 
> APIs in the hadoop






[jira] [Updated] (HADOOP-17930) implement non-guava Precondition checkState

2021-09-22 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17930:
---
Description: 
In order to replace Guava Preconditions, we need to implement our own 
versions of the API.
 This Jira is to add the implementation {{checkState}} to the existing class 
{{org.apache.hadoop.util.Preconditions}}

+The plan is as follows+
 * implement {{org.apache.hadoop.util.Preconditions.checkState}} with the 
minimum set of interfaces used in the current hadoop repo.
 * we can replace {{guava.Preconditions}} by 
{{org.apache.hadoop.util.Preconditions}} once all the interfaces have been 
implemented (both this jira and HADOOP-17929 are complete).
 * We need the change to be easy to backport to 3.x.

previous jiras:
 * HADOOP-17126 was created to implement CheckNotNull.
 * HADOOP-17929 implementing checkArgument.

CC: [~ste...@apache.org], [~vjasani]

  was:
In order to replace Guava Preconditions, we need to implement our own 
versions of the API.
 This Jira is to add the implementation {{checkState}} to the existing class 
{{org.apache.hadoop.util.Preconditions}}

 +The plan is as follows+
 * implement  {{org.apache.hadoop.util.Preconditions.checkState}} with the 
minimum set of interfaces used in the current hadoop repo.
 * we can replace {{guava.Preconditions}} by 
{{org.apache.hadoop.util.Preconditions}} once all the interfaces have been 
implemented (both this jira and HADOOP-17929 are complete).
 * We need the change to be easy to backport to 3.x.


previous jiras:
* HADOOP-17126 was created to replace CheckNotNull.
* HADOOP-17929 replacing checkArgument.

CC: [~ste...@apache.org], [~vjasani]


> implement non-guava Precondition checkState
> ---
>
> Key: HADOOP-17930
> URL: https://issues.apache.org/jira/browse/HADOOP-17930
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0, 3.2.3, 3.3.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> In order to replace Guava Preconditions, we need to implement our own 
> versions of the API.
>  This Jira is to add the implementation {{checkState}} to the existing class 
> {{org.apache.hadoop.util.Preconditions}}
> +The plan is as follows+
>  * implement {{org.apache.hadoop.util.Preconditions.checkState}} with the 
> minimum set of interfaces used in the current hadoop repo.
>  * we can replace {{guava.Preconditions}} by 
> {{org.apache.hadoop.util.Preconditions}} once all the interfaces have been 
> implemented (both this jira and HADOOP-17929 are complete).
>  * We need the change to be easy to backport to 3.x.
> previous jiras:
>  * HADOOP-17126 was created to implement CheckNotNull.
>  * HADOOP-17929 implementing checkArgument.
> CC: [~ste...@apache.org], [~vjasani]






[jira] [Work started] (HADOOP-17929) implement non-guava Precondition checkArgument

2021-09-22 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17929 started by Ahmed Hussein.
--
> implement non-guava Precondition checkArgument
> --
>
> Key: HADOOP-17929
> URL: https://issues.apache.org/jira/browse/HADOOP-17929
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0, 3.2.3, 3.3.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> In order to replace Guava Preconditions, we need to implement our own 
> versions of the API.
>  This Jira is to add the implementation {{checkArgument}} to the existing 
> class {{org.apache.hadoop.util.Preconditions}}
> +The plan is as follows+
>  * implement {{org.apache.hadoop.util.Preconditions.checkArgument}} with the 
> minimum set of interfaces used in the current hadoop repo.
>  * we can replace {{guava.Preconditions}} by 
> {{org.apache.hadoop.util.Preconditions}} once all the interfaces have been 
> implemented.
>  * We need the change to be easy to backport to 3.x.
> A previous jira HADOOP-17126 was created to replace CheckNotNull. 
> HADOOP-17930 is created to implement checkState.
> CC: [~ste...@apache.org], [~vjasani]
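A hypothetical {{checkArgument}} along the plan above (a sketch, not the actual Hadoop implementation) could format its message lazily with java.lang.String#format:

```java
public final class CheckArgumentSketch {
    private CheckArgumentSketch() { }

    // Mirrors Guava's checkArgument(boolean, String, Object...), but formats
    // with String.format and only on the failure path.
    public static void checkArgument(boolean expression,
                                     String template, Object... args) {
        if (!expression) {
            String msg;
            try {
                msg = String.format(template, args);
            } catch (RuntimeException e) {
                // An ill-formatted template should not replace the
                // IllegalArgumentException with a formatting exception.
                msg = template;
            }
            throw new IllegalArgumentException(msg);
        }
    }

    public static void main(String[] args) {
        checkArgument(1 + 1 == 2, "math is broken: %d", 1 + 1);
        System.out.println("ok");
    }
}
```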






[jira] [Updated] (HADOOP-17929) implement non-guava Precondition checkArgument

2021-09-22 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17929:
---
Description: 
In order to replace Guava Preconditions, we need to implement our own 
versions of the API.
 This Jira is to add the implementation {{checkArgument}} to the existing class 
{{org.apache.hadoop.util.Preconditions}}

+The plan is as follows+
 * implement {{org.apache.hadoop.util.Preconditions.checkArgument}} with the 
minimum set of interfaces used in the current hadoop repo.
 * we can replace {{guava.Preconditions}} by 
{{org.apache.hadoop.util.Preconditions}} once all the interfaces have been 
implemented.
 * We need the change to be easy to backport to 3.x.

A previous jira HADOOP-17126 was created to replace CheckNotNull. HADOOP-17930 
is created to implement checkState.

CC: [~ste...@apache.org], [~vjasani]

  was:
In order to replace Guava Preconditions, we need to implement our own 
versions of the API.
 This Jira is to add the implementation {{checkArgument}} to the existing class 
{{org.apache.hadoop.util.Preconditions}}

 +The plan is as follows+
 * implement  {{org.apache.hadoop.util.Preconditions.checkArgument}} with the 
minimum set of interfaces used in the current hadoop repo.
 * we can replace {{guava.Preconditions}} by 
{{org.apache.hadoop.util.Preconditions}} once all the interfaces have been 
implemented.
 * We need the change to be easy to backport to 3.x.


A previous jira HADOOP-17126 was created to replace CheckNotNull. Another will 
be created to implement checkState.

CC: [~ste...@apache.org], [~vjasani]


> implement non-guava Precondition checkArgument
> --
>
> Key: HADOOP-17929
> URL: https://issues.apache.org/jira/browse/HADOOP-17929
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0, 3.2.3, 3.3.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> In order to replace Guava Preconditions, we need to implement our own 
> versions of the API.
>  This Jira is to add the implementation {{checkArgument}} to the existing 
> class {{org.apache.hadoop.util.Preconditions}}
> +The plan is as follows+
>  * implement {{org.apache.hadoop.util.Preconditions.checkArgument}} with the 
> minimum set of interfaces used in the current hadoop repo.
>  * we can replace {{guava.Preconditions}} by 
> {{org.apache.hadoop.util.Preconditions}} once all the interfaces have been 
> implemented.
>  * We need the change to be easy to backport to 3.x.
> A previous jira HADOOP-17126 was created to replace CheckNotNull. 
> HADOOP-17930 is created to implement checkState.
> CC: [~ste...@apache.org], [~vjasani]






[jira] [Created] (HADOOP-17930) implement non-guava Precondition checkState

2021-09-22 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17930:
--

 Summary: implement non-guava Precondition checkState
 Key: HADOOP-17930
 URL: https://issues.apache.org/jira/browse/HADOOP-17930
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.4.0, 3.2.3, 3.3.2
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


In order to replace Guava Preconditions, we need to implement our own 
versions of the API.
 This Jira is to add the implementation {{checkState}} to the existing class 
{{org.apache.hadoop.util.Preconditions}}

 +The plan is as follows+
 * implement  {{org.apache.hadoop.util.Preconditions.checkState}} with the 
minimum set of interfaces used in the current hadoop repo.
 * we can replace {{guava.Preconditions}} by 
{{org.apache.hadoop.util.Preconditions}} once all the interfaces have been 
implemented (both this jira and HADOOP-17929 are complete).
 * We need the change to be easy to backport to 3.x.


previous jiras:
* HADOOP-17126 was created to replace CheckNotNull.
* HADOOP-17929 replacing checkArgument.

CC: [~ste...@apache.org], [~vjasani]






[jira] [Created] (HADOOP-17929) implement non-guava Precondition checkArgument

2021-09-22 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17929:
--

 Summary: implement non-guava Precondition checkArgument
 Key: HADOOP-17929
 URL: https://issues.apache.org/jira/browse/HADOOP-17929
 Project: Hadoop Common
  Issue Type: Sub-task
Affects Versions: 3.4.0, 3.2.3, 3.3.2
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


In order to replace Guava Preconditions, we need to implement our own 
versions of the API.
 This Jira is to add the implementation {{checkArgument}} to the existing class 
{{org.apache.hadoop.util.Preconditions}}

 +The plan is as follows+
 * implement  {{org.apache.hadoop.util.Preconditions.checkArgument}} with the 
minimum set of interfaces used in the current hadoop repo.
 * we can replace {{guava.Preconditions}} by 
{{org.apache.hadoop.util.Preconditions}} once all the interfaces have been 
implemented.
 * We need the change to be easy to backport to 3.x.


A previous jira HADOOP-17126 was created to replace CheckNotNull. Another will 
be created to implement checkState.

CC: [~ste...@apache.org], [~vjasani]






[jira] [Created] (HADOOP-17903) javadoc broken in branch-2.10 root

2021-09-08 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17903:
--

 Summary: javadoc broken in branch-2.10 root
 Key: HADOOP-17903
 URL: https://issues.apache.org/jira/browse/HADOOP-17903
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.10.2
Reporter: Ahmed Hussein


I went through some of the qbt email reports and noticed that the root javadoc 
build has been failing for quite some time.


{code:bash}
[INFO] --- maven-javadoc-plugin:3.3.0:javadoc (default-cli) @ hadoop-main ---
[WARNING] Error injecting: org.apache.maven.plugins.javadoc.JavadocReport
java.lang.TypeNotPresentException: Type 
org.apache.maven.plugins.javadoc.JavadocReport not present
at 
org.eclipse.sisu.space.URLClassSpace.loadClass(URLClassSpace.java:147)
at org.eclipse.sisu.space.NamedClass.load(NamedClass.java:46)
at 
org.eclipse.sisu.space.AbstractDeferredClass.get(AbstractDeferredClass.java:48)
at 
com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:81)
at 
com.google.inject.internal.InternalFactoryToInitializableAdapter.provision(InternalFactoryToInitializableAdapter.java:53)
at 
com.google.inject.internal.ProviderInternalFactory$1.call(ProviderInternalFactory.java:65)
at 
com.google.inject.internal.ProvisionListenerStackCallback$Provision.provision(ProvisionListenerStackCallback.java:115)
at 
org.eclipse.sisu.bean.BeanScheduler$Activator.onProvision(BeanScheduler.java:176)
at 
com.google.inject.internal.ProvisionListenerStackCallback$Provision.provision(ProvisionListenerStackCallback.java:126)
at 
com.google.inject.internal.ProvisionListenerStackCallback.provision(ProvisionListenerStackCallback.java:68)
at 
com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:63)
at 
com.google.inject.internal.InternalFactoryToInitializableAdapter.get(InternalFactoryToInitializableAdapter.java:45)
at 
com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1016)
at 
com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092)
at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1012)
at org.eclipse.sisu.inject.Guice4$1.get(Guice4.java:162)
at org.eclipse.sisu.inject.LazyBeanEntry.getValue(LazyBeanEntry.java:81)
at 
org.eclipse.sisu.plexus.LazyPlexusBean.getValue(LazyPlexusBean.java:51)
at 
org.codehaus.plexus.DefaultPlexusContainer.lookup(DefaultPlexusContainer.java:263)
at 
org.codehaus.plexus.DefaultPlexusContainer.lookup(DefaultPlexusContainer.java:255)
at 
org.apache.maven.plugin.internal.DefaultMavenPluginManager.getConfiguredMojo(DefaultMavenPluginManager.java:517)
at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:121)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:207)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:863)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:288)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:199)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:607)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at 
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: java.lang.UnsupportedClassVersionError: 
org/apache/maven/plugins/javadoc/JavadocReport : Unsupported major.minor 
version 52.0
at 
{code}

[jira] [Commented] (HADOOP-17563) Update Bouncy Castle to 1.68 or later

2021-09-08 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17411976#comment-17411976
 ] 

Ahmed Hussein commented on HADOOP-17563:


Spark's ASM was already updated in 2020 in [SPARK-29729][BUILD] Upgrade ASM to 
7.2 ([PR-26373|https://github.com/apache/spark/pull/26373]).
I created [PR-3405|https://github.com/apache/hadoop/pull/3405] to upgrade 
Bouncy Castle to 1.69.

> Update Bouncy Castle to 1.68 or later
> -
>
> Key: HADOOP-17563
> URL: https://issues.apache.org/jira/browse/HADOOP-17563
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.1
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> -Bouncy Castle 1.60 has Hash Collision Vulnerability. Let's update to 1.68.-
> Bouncy Castle 1.60 has the following vulnerabilities. Let's update to 1.68.
>  * [https://nvd.nist.gov/vuln/detail/CVE-2020-26939]
>  * [https://nvd.nist.gov/vuln/detail/CVE-2020-28052]
>  * [https://nvd.nist.gov/vuln/detail/CVE-2020-15522]






[jira] [Commented] (HADOOP-17898) Upgrade BouncyCastle to 1.69

2021-09-08 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17411953#comment-17411953
 ] 

Ahmed Hussein commented on HADOOP-17898:


I created [PR-3405|https://github.com/apache/hadoop/pull/3405] for trunk.
Since Spark is on bcprov-jdk15on 1.58, it should be safe to upgrade 
branch-2.10 from bcprov-jdk16 to at least bcprov-jdk15on 1.58. I will file a 
separate Jira for branch-2.10.

> Upgrade BouncyCastle to 1.69
> 
>
> Key: HADOOP-17898
> URL: https://issues.apache.org/jira/browse/HADOOP-17898
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Vulnerabilities reported in BouncyCastle:
> [CVE-2020-26939|https://nvd.nist.gov/vuln/detail/CVE-2020-26939] moderate 
> severity
> [CVE-2020-15522|https://nvd.nist.gov/vuln/detail/CVE-2020-15522] moderate 
> severity
> Affecting releases before 1.66.
>  
> Upgrade to latest 1.69.
>  






[jira] [Commented] (HADOOP-17898) Upgrade BouncyCastle to 1.69

2021-09-08 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17411884#comment-17411884
 ] 

Ahmed Hussein commented on HADOOP-17898:


Thank you [~tasanuma], [~weichiu] and [~ste...@apache.org]

> Upgrade BouncyCastle to 1.69
> 
>
> Key: HADOOP-17898
> URL: https://issues.apache.org/jira/browse/HADOOP-17898
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Vulnerabilities reported in BouncyCastle:
> [CVE-2020-26939|https://nvd.nist.gov/vuln/detail/CVE-2020-26939] moderate 
> severity
> [CVE-2020-15522|https://nvd.nist.gov/vuln/detail/CVE-2020-15522] moderate 
> severity
> Affecting releases before 1.66.
>  
> Upgrade to latest 1.69.
>  






[jira] [Commented] (HADOOP-17898) Upgrade BouncyCastle to 1.69

2021-09-07 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17411441#comment-17411441
 ] 

Ahmed Hussein commented on HADOOP-17898:


Created two pull requests:
* branch-2.10: it was still on bcprov-jdk16, so there was no need to apply the 
patches from HADOOP-15832; shading is different compared to 3.x.
* trunk: bumps the BouncyCastle version.
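For context, a version bump of this kind is usually a small change in the project POM. The sketch below is hedged: the property name is assumed for illustration, and the actual coordinates live in {{hadoop-project/pom.xml}}:

```xml
<!-- Illustrative sketch; the property name is assumed, check
     hadoop-project/pom.xml for the real coordinates. -->
<properties>
  <bouncycastle.version>1.69</bouncycastle.version>
</properties>

<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcprov-jdk15on</artifactId>
  <version>${bouncycastle.version}</version>
</dependency>
```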

[~rkanter], [~aajisaka] can you please take a look at the Pull requests?

> Upgrade BouncyCastle to 1.69
> 
>
> Key: HADOOP-17898
> URL: https://issues.apache.org/jira/browse/HADOOP-17898
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Vulnerabilities reported in BouncyCastle:
> [CVE-2020-26939|https://nvd.nist.gov/vuln/detail/CVE-2020-26939] moderate 
> severity
> [CVE-2020-15522|https://nvd.nist.gov/vuln/detail/CVE-2020-15522] moderate 
> severity
> Affecting releases before 1.66.
>  
> Upgrade to latest 1.69.
>  






[jira] [Work started] (HADOOP-17898) Upgrade BouncyCastle to 1.69

2021-09-07 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17898 started by Ahmed Hussein.
--
> Upgrade BouncyCastle to 1.69
> 
>
> Key: HADOOP-17898
> URL: https://issues.apache.org/jira/browse/HADOOP-17898
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0, 3.2.3, 3.3.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Vulnerabilities reported in BouncyCastle:
> [CVE-2020-26939|https://nvd.nist.gov/vuln/detail/CVE-2020-26939] moderate 
> severity
> [CVE-2020-15522|https://nvd.nist.gov/vuln/detail/CVE-2020-15522] moderate 
> severity
> Affecting releases before 1.66.
>  
> Upgrade to latest 1.69.
>  






[jira] [Updated] (HADOOP-17898) Upgrade BouncyCastle to 1.69

2021-09-07 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17898:
---
Affects Version/s: 2.10.2

> Upgrade BouncyCastle to 1.69
> 
>
> Key: HADOOP-17898
> URL: https://issues.apache.org/jira/browse/HADOOP-17898
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Vulnerabilities reported in BouncyCastle:
> [CVE-2020-26939|https://nvd.nist.gov/vuln/detail/CVE-2020-26939] moderate 
> severity
> [CVE-2020-15522|https://nvd.nist.gov/vuln/detail/CVE-2020-15522] moderate 
> severity
> Affecting releases before 1.66.
>  
> Upgrade to latest 1.69.
>  






[jira] [Created] (HADOOP-17898) Upgrade BouncyCastle to 1.69

2021-09-07 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17898:
--

 Summary: Upgrade BouncyCastle to 1.69
 Key: HADOOP-17898
 URL: https://issues.apache.org/jira/browse/HADOOP-17898
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.4.0, 3.2.3, 3.3.2
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


Vulnerabilities reported in BouncyCastle:
[CVE-2020-26939|https://nvd.nist.gov/vuln/detail/CVE-2020-26939] moderate 
severity
[CVE-2020-15522|https://nvd.nist.gov/vuln/detail/CVE-2020-15522] moderate 
severity

Affecting releases before 1.66.
 
Upgrade to latest 1.69.

 






[jira] [Commented] (HADOOP-17885) Upgrade JSON smart to 1.3.3 on branch-2.10

2021-09-07 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17411279#comment-17411279
 ] 

Ahmed Hussein commented on HADOOP-17885:


Thank you [~jeagles]!

> Upgrade JSON smart to 1.3.3 on branch-2.10
> --
>
> Key: HADOOP-17885
> URL: https://issues.apache.org/jira/browse/HADOOP-17885
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.10.1
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Currently branch-2.10 is using JSON Smart 1.3.1, which is vulnerable 
> to [CVE-2021-27568|https://nvd.nist.gov/vuln/detail/CVE-2021-27568].
> We can upgrade the version to 1.3.3.
> +Description of the vulnerability:+
> {quote}An issue was discovered in netplex json-smart-v1 through 2015-10-23 
> and json-smart-v2 through 2.4. An exception is thrown from a function, but it 
> is not caught, as demonstrated by NumberFormatException. When it is not 
> caught, it may cause programs using the library to crash or expose sensitive 
> information.{quote}






[jira] [Commented] (HADOOP-17886) Upgrade ant to 1.10.11

2021-09-07 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17411280#comment-17411280
 ] 

Ahmed Hussein commented on HADOOP-17886:


Thank you [~jeagles] and [~aajisaka]

> Upgrade ant to 1.10.11
> --
>
> Key: HADOOP-17886
> URL: https://issues.apache.org/jira/browse/HADOOP-17886
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0, 3.2.2, 3.4.0, 2.10.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Vulnerabilities reported in org.apache.ant:ant:1.10.9
>  * [CVE-2021-36374|https://nvd.nist.gov/vuln/detail/CVE-2021-36374] moderate 
> severity
>  * [CVE-2021-36373|https://nvd.nist.gov/vuln/detail/CVE-2021-36373] moderate 
> severity
> suggested: org.apache.ant:ant ~> 1.10.11






[jira] [Work started] (HADOOP-17886) Upgrade ant to 1.10.11

2021-09-01 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17886 started by Ahmed Hussein.
--
> Upgrade ant to 1.10.11
> --
>
> Key: HADOOP-17886
> URL: https://issues.apache.org/jira/browse/HADOOP-17886
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0, 3.2.2, 3.4.0, 2.10.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Vulnerabilities reported in org.apache.ant:ant:1.10.9
>  * [CVE-2021-36374|https://nvd.nist.gov/vuln/detail/CVE-2021-36374] moderate 
> severity
>  * [CVE-2021-36373|https://nvd.nist.gov/vuln/detail/CVE-2021-36373] moderate 
> severity
> suggested: org.apache.ant:ant ~> 1.10.11






[jira] [Work started] (HADOOP-17885) Upgrade JSON smart to 1.3.3 on branch-2.10

2021-09-01 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17885 started by Ahmed Hussein.
--
> Upgrade JSON smart to 1.3.3 on branch-2.10
> --
>
> Key: HADOOP-17885
> URL: https://issues.apache.org/jira/browse/HADOOP-17885
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 2.10.1
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Currently branch-2.10 is using JSON Smart 1.3.1, which is vulnerable 
> to [CVE-2021-27568|https://nvd.nist.gov/vuln/detail/CVE-2021-27568].
> We can upgrade the version to 1.3.3.
> +Description of the vulnerability:+
> {quote}An issue was discovered in netplex json-smart-v1 through 2015-10-23 
> and json-smart-v2 through 2.4. An exception is thrown from a function, but it 
> is not caught, as demonstrated by NumberFormatException. When it is not 
> caught, it may cause programs using the library to crash or expose sensitive 
> information.{quote}






[jira] [Created] (HADOOP-17886) Upgrade ant to 1.10.11

2021-09-01 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17886:
--

 Summary: Upgrade ant to 1.10.11
 Key: HADOOP-17886
 URL: https://issues.apache.org/jira/browse/HADOOP-17886
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.2.2, 3.3.0, 3.4.0, 2.10.2
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


Vulnerabilities reported in org.apache.ant:ant:1.10.9
 * [CVE-2021-36374|https://nvd.nist.gov/vuln/detail/CVE-2021-36374] moderate 
severity
 * [CVE-2021-36373|https://nvd.nist.gov/vuln/detail/CVE-2021-36373] moderate 
severity

suggested: org.apache.ant:ant ~> 1.10.11






[jira] [Created] (HADOOP-17885) Upgrade JSON smart to 1.3.3 on branch-2.10

2021-09-01 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17885:
--

 Summary: Upgrade JSON smart to 1.3.3 on branch-2.10
 Key: HADOOP-17885
 URL: https://issues.apache.org/jira/browse/HADOOP-17885
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.10.1, 2.10.0
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


Currently branch-2.10 is using JSON Smart 1.3.1, which is vulnerable to 
[CVE-2021-27568|https://nvd.nist.gov/vuln/detail/CVE-2021-27568].

We can upgrade the version to 1.3.3.

+Description of the vulnerability:+

{quote}An issue was discovered in netplex json-smart-v1 through 2015-10-23 and 
json-smart-v2 through 2.4. An exception is thrown from a function, but it is 
not caught, as demonstrated by NumberFormatException. When it is not caught, it 
may cause programs using the library to crash or expose sensitive 
information.{quote}
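The failure mode in that description can be sketched in a few lines. The helper below is illustrative only (it is not json-smart's API): an unchecked {{NumberFormatException}} escaping a parse call crashes the caller unless it is caught.

```java
public class SafeNumberParse {
    // Illustrative only: json-smart v1/v2 could let a NumberFormatException
    // escape while parsing malformed numeric input (CVE-2021-27568).
    // Catching it lets the caller fail closed instead of crashing.
    static Long parseOrNull(String raw) {
        try {
            return Long.valueOf(raw);
        } catch (NumberFormatException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrNull("42"));     // valid input parses
        System.out.println(parseOrNull("0x1p3"));  // malformed input -> null
    }
}
```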






[jira] [Commented] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.13 and httpcore to 4.4.13

2021-08-31 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17407628#comment-17407628
 ] 

Ahmed Hussein commented on HADOOP-17223:


I have backported the changes to branch-2.10 in 
[PR-3363|https://github.com/apache/hadoop/pull/3363]
[~pranavbheda], [~aajisaka] can you please take a look at the changes?

> update  org.apache.httpcomponents:httpclient to 4.5.13 and httpcore to 4.4.13
> -
>
> Key: HADOOP-17223
> URL: https://issues.apache.org/jira/browse/HADOOP-17223
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Pranav Bheda
>Assignee: Pranav Bheda
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
> Attachments: HADOOP-17223.001.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Update the dependencies
>  * org.apache.httpcomponents:httpclient from 4.5.6 to 4.5.12
>  * org.apache.httpcomponents:httpcore from 4.4.10 to 4.4.13






[jira] [Commented] (HADOOP-17857) Check real user ACLs in addition to proxied user ACLs

2021-08-31 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17407373#comment-17407373
 ] 

Ahmed Hussein commented on HADOOP-17857:


Hey [~epayne], there are still extra whitespace issues flagged in the 
checkstyle report.
Otherwise I am +1 (non-binding).

> Check real user ACLs in addition to proxied user ACLs
> -
>
> Key: HADOOP-17857
> URL: https://issues.apache.org/jira/browse/HADOOP-17857
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.2, 2.10.1, 3.3.1
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: HADOOP-17857.001.patch, HADOOP-17857.002.patch
>
>
> In a secure cluster, it is possible to configure the services to allow a 
> super-user to proxy to a regular user and perform actions on behalf of the 
> proxied user (see [Proxy user - Superusers Acting On Behalf Of Other 
> Users|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Superusers.html]).
> This is useful for automating server access for multiple different users in a 
> multi-tenant cluster. For example, this can be used by a super user 
> submitting jobs to a YARN queue, accessing HDFS files, scheduling Oozie 
> workflows, etc, which will then execute the service as the proxied user.
> Usually when these services check ACLs to determine if the user has access to 
> the requested resources, the service only needs to check the ACLs for the 
> proxied user. However, it is sometimes desirable to allow the proxied user to 
> have access to the resources when only the real user has open ACLs.
> For instance, let's say the user {{adm}} is the only user with submit ACLs to 
> the {{dataload}} queue, and the {{adm}} user wants to submit apps to the 
> {{dataload}} queue on behalf of users {{headless1}} and {{headless2}}. In 
> addition, we want to be able to bill {{headless1}} and {{headless2}} 
> separately for the YARN resources used in the {{dataload}} queue. In order to 
> do this, the apps need to run in the {{dataload}} queue as the respective 
> headless users. We could open up the ACLs to the {{dataload}} queue to allow 
> {{headless1}} and {{headless2}} to submit apps. But this would allow those 
> users to submit any app to that queue, and not be limited to just the data 
> loading apps, and we don't trust the {{headless1}} and {{headless2}} owners 
> to honor that restriction.
> This JIRA proposes that we define a way to set up ACLs to restrict a 
> resource's access to a super-user, but when the access happens, run it as 
> the proxied user.
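A hedged sketch of the proposed check follows. The names and shapes here ({{canSubmit}}, {{effectiveUser}}, {{realUser}}) are illustrative, not Hadoop's ACL API: access is granted when either the proxied (effective) user or the real user behind the proxy matches the ACL.

```java
import java.util.Set;

public class AclCheck {
    // Illustrative sketch of the proposal: pass the ACL check if either
    // the proxied (effective) user or the real user behind the proxy
    // is listed. realUser is null for a direct (non-proxied) request.
    static boolean canSubmit(Set<String> submitAcl,
                             String effectiveUser, String realUser) {
        return submitAcl.contains(effectiveUser)
                || (realUser != null && submitAcl.contains(realUser));
    }

    public static void main(String[] args) {
        Set<String> acl = Set.of("adm");
        // headless1 proxied by adm: allowed via the real user's ACL.
        System.out.println(canSubmit(acl, "headless1", "adm"));
        // headless1 submitting directly: denied.
        System.out.println(canSubmit(acl, "headless1", null));
    }
}
```

This keeps the queue ACL restricted to the super-user while still letting the app run as the proxied headless user.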






[jira] [Commented] (HADOOP-17857) Check real user ACLs in addition to proxied user ACLs

2021-08-24 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17403868#comment-17403868
 ] 

Ahmed Hussein commented on HADOOP-17857:


Thanks [~epayne] for the patch.
The changes look good to me.
Can you please fix the checkstyle below?
{code:bash}
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestAccessControlList.java:477:
String REAL_USER = "realUser";:12: Name 'REAL_USER' must match pattern 
'^[a-z][a-zA-Z0-9]*$'. [LocalVariableName]
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestAccessControlList.java:483:
realUserUgi, new String [] { "group1" });:37: 'String' is followed 
by whitespace. [NoWhitespaceAfter]
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestAccessControlList.java:483:
realUserUgi, new String [] { "group1" });:40: '{' is followed by 
whitespace. [NoWhitespaceAfter]
{code}
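For reference, a hedged sketch of what the checkstyle-clean form of those lines would look like (the class and variable names below are illustrative, not the actual test code):

```java
public class CheckstyleExample {
    // Local variables use lowerCamelCase (LocalVariableName check), and
    // there is no whitespace after the array type or the opening brace
    // (NoWhitespaceAfter check): "new String[] {...}".
    static String[] buildGroups() {
        String realUser = "realUser";
        return new String[] {realUser, "group1"};
    }

    public static void main(String[] args) {
        System.out.println(String.join(",", buildGroups()));
    }
}
```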




> Check real user ACLs in addition to proxied user ACLs
> -
>
> Key: HADOOP-17857
> URL: https://issues.apache.org/jira/browse/HADOOP-17857
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.2, 2.10.1, 3.3.1
>Reporter: Eric Payne
>Assignee: Eric Payne
>Priority: Major
> Attachments: HADOOP-17857.001.patch
>
>
> In a secure cluster, it is possible to configure the services to allow a 
> super-user to proxy to a regular user and perform actions on behalf of the 
> proxied user (see [Proxy user - Superusers Acting On Behalf Of Other 
> Users|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/Superusers.html]).
> This is useful for automating server access for multiple different users in a 
> multi-tenant cluster. For example, this can be used by a super user 
> submitting jobs to a YARN queue, accessing HDFS files, scheduling Oozie 
> workflows, etc, which will then execute the service as the proxied user.
> Usually when these services check ACLs to determine if the user has access to 
> the requested resources, the service only needs to check the ACLs for the 
> proxied user. However, it is sometimes desirable to allow the proxied user to 
> have access to the resources when only the real user has open ACLs.
> For instance, let's say the user {{adm}} is the only user with submit ACLs to 
> the {{dataload}} queue, and the {{adm}} user wants to submit apps to the 
> {{dataload}} queue on behalf of users {{headless1}} and {{headless2}}. In 
> addition, we want to be able to bill {{headless1}} and {{headless2}} 
> separately for the YARN resources used in the {{dataload}} queue. In order to 
> do this, the apps need to run in the {{dataload}} queue as the respective 
> headless users. We could open up the ACLs to the {{dataload}} queue to allow 
> {{headless1}} and {{headless2}} to submit apps. But this would allow those 
> users to submit any app to that queue, and not be limited to just the data 
> loading apps, and we don't trust the {{headless1}} and {{headless2}} owners 
> to honor that restriction.
> This JIRA proposes that we define a way to set up ACLs to restrict a 
> resource's access to a super-user, but when the access happens, run it as 
> the proxied user.






[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-07-09 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17378100#comment-17378100
 ] 

Ahmed Hussein commented on HADOOP-16206:


Hi [~zhangduo], are there any updates?

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.






[jira] [Updated] (HADOOP-17769) Upgrade JUnit to 4.13.2

2021-06-21 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17769:
---
Affects Version/s: (was: 2.10.0)
   3.2.3
   2.10.2
   3.3.1

> Upgrade JUnit to 4.13.2
> ---
>
> Key: HADOOP-17769
> URL: https://issues.apache.org/jira/browse/HADOOP-17769
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> JUnit 4.13.1 has a bug reported in JUnit 
> [issue-1652|https://github.com/junit-team/junit4/issues/1652], _Timeout 
> ThreadGroups should not be destroyed_.
> After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} 
> started to fail regularly in branch-3.x and branch-2.10.
> While investigating the failure in branch-2.10 (HDFS-16072), I found that 
> the bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit 
> timeout would try to destroy a ThreadGroup that had already been destroyed, 
> which throws {{java.lang.IllegalThreadStateException}}.
> The bug has been fixed in JUnit 4.13.2.
> For branch-3.x, HDFS-15940 did not address the root cause of the problem. 
> Eventually, splitting {{TestBlockRecovery}} hid the bug, but the upgrade 
> needs to be done so that the problem does not show up in another unit test.






[jira] [Updated] (HADOOP-17769) Upgrade JUnit to 4.13.2

2021-06-21 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17769:
---
Affects Version/s: 3.4.0
   2.10.0

> Upgrade JUnit to 4.13.2
> ---
>
> Key: HADOOP-17769
> URL: https://issues.apache.org/jira/browse/HADOOP-17769
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 3.4.0
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> JUnit 4.13.1 has a bug reported in JUnit 
> [issue-1652|https://github.com/junit-team/junit4/issues/1652], _Timeout 
> ThreadGroups should not be destroyed_.
> After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} 
> started to fail regularly in branch-3.x and branch-2.10.
> While investigating the failure in branch-2.10 (HDFS-16072), I found that 
> the bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit 
> timeout would try to destroy a ThreadGroup that had already been destroyed, 
> which throws {{java.lang.IllegalThreadStateException}}.
> The bug has been fixed in JUnit 4.13.2.
> For branch-3.x, HDFS-15940 did not address the root cause of the problem. 
> Eventually, splitting {{TestBlockRecovery}} hid the bug, but the upgrade 
> needs to be done so that the problem does not show up in another unit test.






[jira] [Commented] (HADOOP-17769) Upgrade JUnit to 4.13.2

2021-06-21 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17366769#comment-17366769
 ] 

Ahmed Hussein commented on HADOOP-17769:


I have created two pull requests, one for branch-2.10 and one for trunk.
I believe this fix is needed for the other 3.x branches too.

I verified that JUnit 4.13.1 has the bug and causes the exceptions described 
in HDFS-16072.

[~ayushtkn] can you please take a look at the upgrade and commit it to both 
branches?


> Upgrade JUnit to 4.13.2
> ---
>
> Key: HADOOP-17769
> URL: https://issues.apache.org/jira/browse/HADOOP-17769
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> JUnit 4.13.1 has a bug reported in JUnit 
> [issue-1652|https://github.com/junit-team/junit4/issues/1652], _Timeout 
> ThreadGroups should not be destroyed_.
> After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} 
> started to fail regularly in branch-3.x and branch-2.10.
> While investigating the failure in branch-2.10 (HDFS-16072), I found that 
> the bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit 
> timeout would try to destroy a ThreadGroup that had already been destroyed, 
> which throws {{java.lang.IllegalThreadStateException}}.
> The bug has been fixed in JUnit 4.13.2.
> For branch-3.x, HDFS-15940 did not address the root cause of the problem. 
> Eventually, splitting {{TestBlockRecovery}} hid the bug, but the upgrade 
> needs to be done so that the problem does not show up in another unit test.






[jira] [Work started] (HADOOP-17769) Upgrade JUnit to 4.13.2

2021-06-21 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17769 started by Ahmed Hussein.
--
> Upgrade JUnit to 4.13.2
> ---
>
> Key: HADOOP-17769
> URL: https://issues.apache.org/jira/browse/HADOOP-17769
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> JUnit 4.13.1 has a bug, reported in JUnit 
> [issue-1652|https://github.com/junit-team/junit4/issues/1652] _Timeout 
> ThreadGroups should not be destroyed_.
> After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} started 
> to fail regularly in branch-3.x and branch-2.10.
> While investigating the failure in branch-2.10 (HDFS-16072), I found that the 
> bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit 
> timeout would try to destroy a ThreadGroup that had already been destroyed, 
> which throws {{java.lang.IllegalThreadStateException}}.
> The bug has been fixed in JUnit 4.13.2.
> For branch-3.x, HDFS-15940 did not address the root cause of the problem. 
> Splitting {{TestBlockRecovery}} eventually hid the bug, but the upgrade is 
> still needed so that the problem does not show up in another unit test.






[jira] [Created] (HADOOP-17769) Upgrade JUnit to 4.13.2

2021-06-21 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17769:
--

 Summary: Upgrade JUnit to 4.13.2
 Key: HADOOP-17769
 URL: https://issues.apache.org/jira/browse/HADOOP-17769
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


JUnit 4.13.1 has a bug, reported in JUnit 
[issue-1652|https://github.com/junit-team/junit4/issues/1652] _Timeout 
ThreadGroups should not be destroyed_.

After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} started 
to fail regularly in branch-3.x and branch-2.10.
While investigating the failure in branch-2.10 (HDFS-16072), I found that the 
bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit timeout 
would try to destroy a ThreadGroup that had already been destroyed, which 
throws {{java.lang.IllegalThreadStateException}}.

The bug has been fixed in JUnit 4.13.2.

For branch-3.x, HDFS-15940 did not address the root cause of the problem. 
Splitting {{TestBlockRecovery}} eventually hid the bug, but the upgrade is 
still needed so that the problem does not show up in another unit test.
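For reference, the symptom is easy to reproduce outside JUnit. The sketch below (a hypothetical demo class, JDK only) shows that destroying a {{ThreadGroup}} twice throws the exception. Note that {{ThreadGroup.destroy()}} is deprecated since Java 16 and removed in Java 19, so this targets the Java 8 era that branch-2.10 builds against.

```java
// Demonstrates the failure mode behind the JUnit 4.13.1 timeout bug:
// calling destroy() on an already-destroyed ThreadGroup throws
// IllegalThreadStateException.
public class ThreadGroupDestroyDemo {
    public static void main(String[] args) {
        ThreadGroup group = new ThreadGroup("demo");
        group.destroy();          // first destroy: the group is empty, so this succeeds
        try {
            group.destroy();      // second destroy: the group is already dead
        } catch (IllegalThreadStateException e) {
            System.out.println("caught IllegalThreadStateException");
        }
    }
}
```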






[jira] [Created] (HADOOP-17762) branch-2.10 daily build fails to pull latest changes

2021-06-15 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17762:
--

 Summary: branch-2.10 daily build fails to pull latest changes
 Key: HADOOP-17762
 URL: https://issues.apache.org/jira/browse/HADOOP-17762
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, yetus
Affects Versions: 2.10.1
Reporter: Ahmed Hussein


I noticed that the build for branch-2.10 has failed to pull the latest changes 
for the last few days.

CC: [~aajisaka], [~tasanuma], [~Jim_Brennan]

https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/329/console

{code:bash}
Started by timer
Running as SYSTEM
Building remotely on H20 (Hadoop) in workspace 
/home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64
The recommended git tool is: NONE
No credentials specified
Cloning the remote Git repository
Using shallow clone with depth 10
Avoid fetching tags
Cloning repository https://github.com/apache/hadoop
ERROR: Failed to clean the workspace
jenkins.util.io.CompositeIOException: Unable to delete '/home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64/sourcedir'. Tried 3 times (of a maximum of 3) waiting 0.1 sec between attempts. (Discarded 1 additional exceptions)
    at jenkins.util.io.PathRemover.forceRemoveDirectoryContents(PathRemover.java:90)
    at hudson.Util.deleteContentsRecursive(Util.java:262)
    at hudson.Util.deleteContentsRecursive(Util.java:251)
    at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:743)
    at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:161)
    at org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:154)
    at hudson.remoting.UserRequest.perform(UserRequest.java:211)
    at hudson.remoting.UserRequest.perform(UserRequest.java:54)
    at hudson.remoting.Request$2.run(Request.java:375)
    at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:73)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
    Suppressed: java.nio.file.AccessDeniedException: /home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64/sourcedir/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1/current
        at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
        at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
        at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
        at java.nio.file.Files.newDirectoryStream(Files.java:457)
        at jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:224)
        at jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
        at jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
        at jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
        at jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
        at jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
        at jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
        at jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
        at jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
        at jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
        at jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
        at jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
        at jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
        at jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
        at jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
        at jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
        at jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
        at jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
        at jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
        at 

[jira] [Resolved] (HADOOP-17463) Replace currentTimeMillis with monotonicNow in elapsed time

2021-06-04 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein resolved HADOOP-17463.

Release Note: see discussion in HADOOP-15901
  Resolution: Won't Fix

> Replace currentTimeMillis with monotonicNow in elapsed time
> ---
>
> Key: HADOOP-17463
> URL: https://issues.apache.org/jira/browse/HADOOP-17463
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> I noticed that there is widespread incorrect usage of 
> {{System.currentTimeMillis()}} throughout the Hadoop code.
> For example:
> {code:java}
> // Some comments here
> long start = System.currentTimeMillis();
> while (System.currentTimeMillis() - start < timeout) {
>   // Do something
> }
> {code}
> Elapsed time should be measured using {{monotonicNow()}} instead, because the 
> wall clock can jump backward or forward (e.g. due to NTP adjustments), which 
> breaks elapsed-time arithmetic.
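As a sketch of the intended pattern (assuming only the JDK; Hadoop's {{Time.monotonicNow()}} is essentially {{System.nanoTime()}} scaled to milliseconds):

```java
// Minimal sketch of monotonic elapsed-time measurement. The helper below
// mirrors what org.apache.hadoop.util.Time.monotonicNow() is built on.
public class MonotonicElapsed {
    /** Monotonic milliseconds; immune to wall-clock jumps. */
    static long monotonicNow() {
        return System.nanoTime() / 1_000_000L;
    }

    public static void main(String[] args) throws InterruptedException {
        final long timeoutMs = 50;
        long start = monotonicNow();
        while (monotonicNow() - start < timeoutMs) {
            Thread.sleep(5); // do something
        }
        long elapsed = monotonicNow() - start;
        System.out.println("waited at least " + timeoutMs + " ms: "
            + (elapsed >= timeoutMs));
    }
}
```

Unlike {{System.currentTimeMillis()}}, two successive {{System.nanoTime()}} readings in the same JVM never go backward, so the loop condition is safe.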






[jira] [Commented] (HADOOP-17152) Implement wrapper for guava newArrayList and newLinkedList

2021-05-26 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351835#comment-17351835
 ] 

Ahmed Hussein commented on HADOOP-17152:


Hi [~vjasani], thank you for offering to help with these issues.

I have assigned HADOOP-17114 to you.

> Implement wrapper for guava newArrayList and newLinkedList
> --
>
> Key: HADOOP-17152
> URL: https://issues.apache.org/jira/browse/HADOOP-17152
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> The Guava Lists class provides some wrappers around the Java ArrayList and 
> LinkedList.
> Replacing the method calls throughout the code can be invasive because Guava 
> offers some APIs that do not exist in java.util. This Jira is the task of 
> implementing those missing APIs in hadoop-common as a step toward getting rid 
> of Guava.
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> 
> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> 
> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int 
> initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> 
> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)
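A minimal JDK-only sketch of such a wrapper (illustrative only; the class and method names follow the plan above, not the code that was eventually committed):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedList;

// Sketch of a Guava-free Lists wrapper backed purely by java.util.
public final class Lists {
    private Lists() {}

    public static <E> ArrayList<E> newArrayList() {
        return new ArrayList<>();
    }

    @SafeVarargs
    public static <E> ArrayList<E> newArrayList(E... elements) {
        ArrayList<E> list = new ArrayList<>(elements.length);
        Collections.addAll(list, elements);
        return list;
    }

    @SuppressWarnings("unchecked")
    public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements) {
        // Fast path: a Collection knows its size, so copy it directly.
        return elements instanceof Collection
            ? new ArrayList<>((Collection<? extends E>) elements)
            : newArrayList(elements.iterator());
    }

    public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements) {
        ArrayList<E> list = new ArrayList<>();
        while (elements.hasNext()) {
            list.add(elements.next());
        }
        return list;
    }

    public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize) {
        return new ArrayList<>(initialArraySize);
    }

    public static <E> LinkedList<E> newLinkedList() {
        return new LinkedList<>();
    }
}
```

With a wrapper like this, callers only need to change the import statement; call sites remain untouched.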






[jira] [Assigned] (HADOOP-17114) Replace Guava initialization of Lists.newArrayList

2021-05-26 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein reassigned HADOOP-17114:
--

Assignee: Viraj Jasani  (was: Ahmed Hussein)

> Replace Guava initialization of Lists.newArrayList
> --
>
> Key: HADOOP-17114
> URL: https://issues.apache.org/jira/browse/HADOOP-17114
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> There are unjustified uses of Guava APIs to initialize LinkedLists and 
> ArrayLists. These could simply be replaced by the Java API.
> By analyzing the Hadoop code, the best way to replace Guava is to do the 
> following steps:
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> 
> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> 
> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int 
> initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> 
> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)
>  
> After this class is created, we can simply replace the import statement in 
> all the source code.
>  
> {code:java}
> Targets
> Occurrences of 'com.google.common.collect.Lists;' in project with mask 
> '*.java'
> Found Occurrences  (246 usages found)
> org.apache.hadoop.conf  (1 usage found)
> TestReconfiguration.java  (1 usage found)
> 22 import com.google.common.collect.Lists;
> org.apache.hadoop.crypto  (1 usage found)
> CryptoCodec.java  (1 usage found)
> 35 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.azurebfs  (3 usages found)
> ITestAbfsIdentityTransformer.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> ITestAzureBlobFilesystemAcl.java  (1 usage found)
> 21 import com.google.common.collect.Lists;
> ITestAzureBlobFileSystemCheckAccess.java  (1 usage found)
> 20 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.http.client  (2 usages found)
> BaseTestHttpFSWith.java  (1 usage found)
> 77 import com.google.common.collect.Lists;
> HttpFSFileSystem.java  (1 usage found)
> 75 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.permission  (2 usages found)
> AclStatus.java  (1 usage found)
> 27 import com.google.common.collect.Lists;
> AclUtil.java  (1 usage found)
> 26 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a  (3 usages found)
> ITestS3AFailureHandling.java  (1 usage found)
> 23 import com.google.common.collect.Lists;
> ITestS3GuardListConsistency.java  (1 usage found)
> 34 import com.google.common.collect.Lists;
> S3AUtils.java  (1 usage found)
> 57 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> RolePolicies.java  (1 usage found)
> 26 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.commit  (2 usages found)
> ITestCommitOperations.java  (1 usage found)
> 28 import com.google.common.collect.Lists;
> TestMagicCommitPaths.java  (1 usage found)
> 25 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.commit.staging  (3 usages found)
> StagingTestBase.java  (1 usage found)
> 47 import com.google.common.collect.Lists;
> TestStagingPartitionedFileListing.java  (1 usage found)
> 31 import com.google.common.collect.Lists;
> TestStagingPartitionedTaskCommit.java  (1 usage found)
> 28 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.impl  (2 usages found)
> RenameOperation.java  (1 usage found)
> 30 import com.google.common.collect.Lists;
> TestPartialDeleteFailures.java  (1 usage found)
> 37 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.s3a.s3guard  (3 usages found)
> DumpS3GuardDynamoTable.java  (1 usage found)
> 38 import com.google.common.collect.Lists;
> DynamoDBMetadataStore.java  (1 usage found)
> 67 import com.google.common.collect.Lists;
> ITestDynamoDBMetadataStore.java  (1 usage found)
> 49 import com.google.common.collect.Lists;
> org.apache.hadoop.fs.shell  (1 usage found)
> AclCommands.java  (1 usage found)
> 25 import 

[jira] [Commented] (HADOOP-17126) implement non-guava Precondition checkNotNull

2021-05-25 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17351336#comment-17351336
 ] 

Ahmed Hussein commented on HADOOP-17126:


[~busbey] and [~vjasani], can you please take a look at [GitHub Pull Request 
#3050|https://github.com/apache/hadoop/pull/3050]?
This will advance the effort to replace Guava, since the {{guava.Preconditions}} 
class is widely used in Hadoop.

> implement non-guava Precondition checkNotNull
> -
>
> Key: HADOOP-17126
> URL: https://issues.apache.org/jira/browse/HADOOP-17126
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-17126.001.patch, HADOOP-17126.002.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> In order to replace Guava Preconditions, we need to implement our own 
> versions of the API.
>  This Jira is to create {{checkNotNull}} in a new package dubbed {{unguava}}.
>  +The plan is as follows+
>  * create a new {{package org.apache.hadoop.util.unguava;}}
>  * create class {{Validate}}
>  * implement {{org.apache.hadoop.util.unguava.Validate}} with the following 
> interface:
>  ** {{checkNotNull(final T obj)}}
>  ** {{checkNotNull(final T reference, final Object errorMessage)}}
>  ** {{checkNotNull(final T obj, final String message, final Object... 
> values)}}
>  ** {{checkNotNull(final T obj, final Supplier<String> msgSupplier)}}
>  * Guava Preconditions used {{Strings.lenientFormat}}, which suppressed 
> exceptions caused by string formatting of the exception message. So, in 
> order to avoid changing the behavior, the implementation catches exceptions 
> triggered by building the message (IllegalFormat, InsufficientArg, 
> NullPointer, etc.)
>  * After merging the new class, we can replace 
> {{guava.Preconditions.checkNotNull}} with {{unguava.Validate.checkNotNull}}
>  * We need the change to go into trunk, 3.1, 3.2, and 3.3
>  
> Similar Jiras will be created to implement checkState, checkArgument, and 
> checkIndex
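A minimal sketch of the planned {{checkNotNull}} overloads (illustrative and JDK-only; the class name follows the plan above, not the committed Hadoop code, and the lenient-formatting helper is an assumed name):

```java
import java.util.Arrays;
import java.util.function.Supplier;

// Hypothetical JDK-only replacement for Guava's Preconditions.checkNotNull.
public final class Validate {
    private Validate() {}

    public static <T> T checkNotNull(final T obj) {
        if (obj == null) {
            throw new NullPointerException();
        }
        return obj;
    }

    public static <T> T checkNotNull(final T obj, final String message,
                                     final Object... values) {
        if (obj == null) {
            throw new NullPointerException(lenientFormat(message, values));
        }
        return obj;
    }

    public static <T> T checkNotNull(final T obj,
                                     final Supplier<String> msgSupplier) {
        if (obj == null) {
            throw new NullPointerException(
                msgSupplier == null ? null : msgSupplier.get());
        }
        return obj;
    }

    // Mirror Guava's lenient behavior: a bad format template or bad
    // arguments must not throw and mask the real null check.
    private static String lenientFormat(String template, Object... values) {
        try {
            return String.format(String.valueOf(template), values);
        } catch (RuntimeException e) {
            return template + " " + Arrays.toString(values);
        }
    }
}
```

The key design point is the try/catch around message construction: the caller's precondition failure must surface even when the error message itself is malformed.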






[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-05-24 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17350561#comment-17350561
 ] 

Ahmed Hussein commented on HADOOP-16206:


{quote}The problem is about compatibility... I'm sure that some end users will 
use their own scripts to start a hadoop cluster, this change may break their 
scripts.
{quote}
log4j used to be controlled through the properties files. If the new parameters 
maintain the same default behavior (for each individual module), then this 
should be acceptable.
{quote}this change may break their scripts.
{quote}
This is a good point. However, adapting to such changes is part of pulling 
changes from trunk into the remote forks. I do not see that as a big concern as 
long as we are not modifying a release; doing it in trunk should be acceptable.

[~aajisaka] and [~weichiu], do you have any thoughts?

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.






[jira] [Commented] (HADOOP-17115) Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and hadoop-tools

2021-05-24 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17350467#comment-17350467
 ] 

Ahmed Hussein commented on HADOOP-17115:


Apologies for not following up with reviews as I was OOO for the last two weeks.
Thanks [~busbey] and [~vjasani] for getting this code merged.

> Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and 
> hadoop-tools
> ---
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/"),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)
> TestDFSNetworkTopology.java  (5 usages found)
> testChooseRandomWithStorageType()  (4 usages found)
> 277 Sets.newHashSet("host2", "host4", "host5", "host6");
> 278 Set archiveUnderL1 = Sets.newHashSet("host1", 
> "host3");
> 279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
> 280 Set ssdUnderL1 = 

[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-05-24 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17350465#comment-17350465
 ] 

Ahmed Hussein commented on HADOOP-16206:


{quote}So any suggestions here? Should we do the same trick in HBase, or 
someone could find a more compatible way?{quote}
Thanks [~zhangduo] for the update.
How difficult would it be to add those two system properties to the entire 
Hadoop build? And how easy would it be to switch the level to DEBUG for testing 
and debugging purposes?


> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.






[jira] [Commented] (HADOOP-17152) Implement wrapper for guava newArrayList and newLinkedList

2021-05-05 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17339856#comment-17339856
 ] 

Ahmed Hussein commented on HADOOP-17152:


As far as I remember, there were some calls to newArrayList that are not 
supported by ArrayList. Without a wrapper, you would have to replicate the code 
everywhere a newArrayList variant has no direct java.util equivalent.

For instance,
{code:java}
public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements)
public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements)
{code}

To replace newArrayList here, there has to be code that iterates over the 
Iterable, adding the elements to the newly created list. I assume you do not 
want to copy-paste that code everywhere; if you then find a bug, you would 
have to fix it everywhere.
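That shared iteration code would look roughly like this (a hypothetical helper class, JDK only):

```java
import java.util.ArrayList;
import java.util.Iterator;

// The copy-paste code the wrapper avoids: collect an Iterator or Iterable
// into a new ArrayList using only java.util.
public final class IterUtil {
    private IterUtil() {}

    public static <E> ArrayList<E> newArrayList(Iterator<? extends E> it) {
        ArrayList<E> list = new ArrayList<>();
        while (it.hasNext()) {
            list.add(it.next());
        }
        return list;
    }

    public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements) {
        return newArrayList(elements.iterator());
    }
}
```

Keeping this in one place means a bug in the iteration logic is fixed once, not at every call site.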

> Implement wrapper for guava newArrayList and newLinkedList
> --
>
> Key: HADOOP-17152
> URL: https://issues.apache.org/jira/browse/HADOOP-17152
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> The Guava Lists class provides some wrappers around the Java ArrayList and 
> LinkedList.
> Replacing the method calls throughout the code can be invasive because Guava 
> offers some APIs that do not exist in java.util. This Jira is the task of 
> implementing those missing APIs in hadoop-common as a step toward getting rid 
> of Guava.
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> 
> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> 
> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int 
> initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> 
> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)






[jira] [Commented] (HADOOP-17152) Implement wrapper for guava newArrayList and newLinkedList

2021-05-05 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17339831#comment-17339831
 ] 

Ahmed Hussein commented on HADOOP-17152:


I assigned the Jira to [~vjasani], as he suggested in HADOOP-17115.

> Implement wrapper for guava newArrayList and newLinkedList
> --
>
> Key: HADOOP-17152
> URL: https://issues.apache.org/jira/browse/HADOOP-17152
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> The Guava Lists class provides some wrappers around the Java ArrayList and 
> LinkedList.
> Replacing the method calls throughout the code can be invasive because Guava 
> offers some APIs that do not exist in java.util. This Jira is the task of 
> implementing those missing APIs in hadoop-common as a step toward getting rid 
> of Guava.
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> 
> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> 
> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int 
> initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> 
> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)






[jira] [Commented] (HADOOP-17115) Replace Guava initialization of Sets.newHashSet

2021-05-05 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17339830#comment-17339830
 ] 

Ahmed Hussein commented on HADOOP-17115:


Thanks [~vjasani]!
Sure feel free to work on it.

> Replace Guava initialization of Sets.newHashSet
> ---
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/"),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/",
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)
> TestDFSNetworkTopology.java  (5 usages found)
> testChooseRandomWithStorageType()  (4 usages found)
> 277 Sets.newHashSet("host2", "host4", "host5", "host6");
> 278 Set archiveUnderL1 = Sets.newHashSet("host1", 
> "host3");
> 279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
> 280 Set ssdUnderL1 = Sets.newHashSet("host8");
> testChooseRandomWithStorageTypeWithExcluded()  (1 usage found)
> 363 Set expectedSet = Sets.newHashSet("host4", 
> "host5");
> org.apache.hadoop.hdfs.qjournal.server  (2 usages found)
> JournalNodeSyncer.java  (2 usages found)
>

[jira] [Assigned] (HADOOP-17152) Implement wrapper for guava newArrayList and newLinkedList

2021-05-05 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein reassigned HADOOP-17152:
--

Assignee: Viraj Jasani  (was: Ahmed Hussein)

> Implement wrapper for guava newArrayList and newLinkedList
> --
>
> Key: HADOOP-17152
> URL: https://issues.apache.org/jira/browse/HADOOP-17152
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>
> The guava Lists class provides some wrappers around java ArrayList and 
> LinkedList.
> Replacing the method calls throughout the code can be invasive because guava 
> offers some APIs that do not exist in java util. This Jira is the task of 
> implementing those missing APIs in hadoop common as a step toward getting 
> rid of guava.
>  * create a wrapper class org.apache.hadoop.util.unguava.Lists 
>  * implement the following interfaces in Lists:
>  ** public static <E> ArrayList<E> newArrayList()
>  ** public static <E> ArrayList<E> newArrayList(E... elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterable<? extends E> 
> elements)
>  ** public static <E> ArrayList<E> newArrayList(Iterator<? extends E> 
> elements)
>  ** public static <E> ArrayList<E> newArrayListWithCapacity(int 
> initialArraySize)
>  ** public static <E> LinkedList<E> newLinkedList()
>  ** public static <E> LinkedList<E> newLinkedList(Iterable<? extends E> 
> elements)
>  ** public static <E> List<E> asList(@Nullable E first, E[] rest)
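A minimal JDK-only sketch of such a wrapper (generics follow Guava's published signatures; this is illustrative, not the committed Hadoop class, and `asList`/`@Nullable` handling is omitted):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedList;

public final class Lists {
    private Lists() {}

    public static <E> ArrayList<E> newArrayList() {
        return new ArrayList<>();
    }

    @SafeVarargs
    public static <E> ArrayList<E> newArrayList(E... elements) {
        ArrayList<E> list = new ArrayList<>(elements.length);
        Collections.addAll(list, elements);
        return list;
    }

    @SuppressWarnings("unchecked")
    public static <E> ArrayList<E> newArrayList(Iterable<? extends E> elements) {
        // Reuse the Collection fast path (sized copy) when possible.
        if (elements instanceof Collection) {
            return new ArrayList<>((Collection<? extends E>) elements);
        }
        return newArrayList(elements.iterator());
    }

    public static <E> ArrayList<E> newArrayList(Iterator<? extends E> elements) {
        ArrayList<E> list = new ArrayList<>();
        while (elements.hasNext()) {
            list.add(elements.next());
        }
        return list;
    }

    public static <E> ArrayList<E> newArrayListWithCapacity(int initialArraySize) {
        return new ArrayList<>(initialArraySize);
    }

    public static <E> LinkedList<E> newLinkedList() {
        return new LinkedList<>();
    }
}
```

Call sites then keep their shape (`Lists.newArrayList("a", "b")`), only the import changes away from Guava.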



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-04-21 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17326683#comment-17326683
 ] 

Ahmed Hussein commented on HADOOP-16206:


Thanks [~zhangduo] for the update.

Just curious if you know about 
[Log4j1ConfigurationConverter.java|https://logging.apache.org/log4j/2.x/log4j-1.2-api/apidocs/org/apache/log4j/config/Log4j1ConfigurationConverter.html]?
 I haven't used it before, but it is worth giving it a try before writing a 
helper tool from scratch.

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-04-14 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein reassigned HADOOP-16206:
--

Assignee: Duo Zhang

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-04-13 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17320439#comment-17320439
 ] 

Ahmed Hussein commented on HADOOP-16206:


[~zhangduo] Sure, Go ahead and give it a try!

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-04-13 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17320140#comment-17320140
 ] 

Ahmed Hussein commented on HADOOP-16206:


Thanks [~weichiu] and [~zhangduo] for the suggestions. The approach of breaking 
this up seems like a good idea.
 I propose a slightly different scheme for breaking up the migration.

Instead of "per-module", the migration would be split into two phases:
 # Phase-1: Get it to work. Straightforward replacement of log4j.
   ** IMHO, it would be better to aim for log4j2 directly, skipping the bridge.
   ** The reviewer only needs to check that the tests pass and that the new 
configurations do not introduce inconsistencies.
   ** This implies that suggestions for tuning/enhancements should be noted but 
not immediately applied.
 # Phase-2: Post migration. Tuning and optimizations.
   ** After the migration is done, separate Jiras are filed to tune the logging.
   ** Separate tickets can be issued to address performance evaluation and 
exploration of other features such as async and garbage-free logging.

The two-phase approach will reduce the burden and logically separate the 
actual migration from addressing suggestions and tuning requests.

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-04-13 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17320140#comment-17320140
 ] 

Ahmed Hussein edited comment on HADOOP-16206 at 4/13/21, 12:39 PM:
---

Thanks [~weichiu] and [~zhangduo] for the suggestions. The approach of breaking 
this up seems like a good idea.
 I propose a slightly different scheme for breaking up the migration.

Instead of "per-module", the migration would be split into two phases:
 # Phase-1: Get it to work. Straightforward replacement of log4j.
   ** IMHO, it would be better to aim for log4j2 directly, skipping the bridge 
API.
   ** The reviewer only needs to check that the tests pass and that the new 
configurations do not introduce inconsistencies.
   ** This implies that suggestions for tuning/enhancements should be noted but 
not immediately applied.
 # Phase-2: Post migration. Tuning and optimizations.
   ** After the migration is done, separate Jiras are filed to tune the logging.
   ** Separate tickets can be issued to address performance evaluation and 
exploration of other features such as async and garbage-free logging.

The two-phase approach will reduce the burden and logically separate the 
actual migration from addressing suggestions and tuning requests.


was (Author: ahussein):
Thanks [~weichiu] and [~zhangduo] for the suggestions. The approach of breaking 
this up seems a good idea.
 I propose a slightly different scheme of breaking-up the migration.

Instead of "per-module", the migration would be split into two phases:
 # Phase-1: Get it to work. Straightforward replacement of log4j
   **  IMHO, it will be better to aim for log4j2 skipping the bridge way.
   ** Reviewer only needs to check that the tests pass and the new 
configurations are not causing inconsistence.
   ** this implies that suggestions for tuning/enhancements should be noted but 
not immediately applied.
 # Phase-2: Post migration. Tuning and optimizations
   ** After the migration is done, separate Jiras are filed to tune the logging.
   ** Separate tickets can be issued to address performance evaluations and 
exploration of other features such as Async, garbage-free, etc..

The approach of two-phases migration will reduce the burden and logically 
separate between actual migration Vs. addressing suggestions and tuning 
requests.

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17625) Update to Jetty 9.4.39

2021-04-08 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17317510#comment-17317510
 ] 

Ahmed Hussein commented on HADOOP-17625:


Thanks [~weichiu]!

> Update to Jetty 9.4.39
> --
>
> Key: HADOOP-17625
> URL: https://issues.apache.org/jira/browse/HADOOP-17625
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-04-08 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17317384#comment-17317384
 ] 

Ahmed Hussein edited comment on HADOOP-16206 at 4/8/21, 6:05 PM:
-

{quote}So you suggest we do a benchmark first to get a clear evidence of 
performance gains in log4j2, and then start our work? Or you mean we could just 
start the work as you think we'd better fully move to log4j2?{quote}

The reason I brought up performance evaluation is that, quite often, there is 
a bias when presenting numbers for a new implementation.
As long as there are different resources acknowledging the [performance gains 
of log4j2|https://logging.apache.org/log4j/log4j-2.2/performance.html], it 
should be fine to move forward. There is no need to re-invent the wheel.

I believe the garbage-free option is an interesting feature for reducing 
object allocation, which leads to fewer garbage collection events.
After log4j2 is in, we can use heap analysis to evaluate different 
configurations such as the garbage-free options.

I agree with [~ste...@apache.org] that it is good to know the performance 
with different log configurations. This can probably be done separately in a 
benchmark to get an approximate estimate of the tradeoffs.

I am a little bit hesitant to work on this issue, as I started replacing 
Guava APIs several months ago and it would not make sense to have two such 
big migrations on one plate.



was (Author: ahussein):
{quote}So you suggest we do a benchmark first to get a clear evidence of 
performance gains in log4j2, and then start our work? Or you mean we could just 
start the work as you think we'd better fully move to log4j2?{quote}

The reason I was talking about performance evaluation is that quite often, 
there is a bias when presenting numbers of a new implementations.
As long as there are different resources acknowledging the [performance gains 
of log4j2|https://logging.apache.org/log4j/log4j-2.2/performance.html], then it 
should be fine to move forward. There is no need to re-invent the wheel.

I believe the garbage-free option is an interesting feature to use to reduce 
the objects allocation. This leads to less Garbage collection events.
After log4j2 is in, we can use heap analysis to evaluate different 
configurations like garbage-free options.

I agree with Steve that it is good to know the performance with different log 
configurations. This probably can be done separately in a benchmark to get an 
approximate estimate of the tradeoffs.

I am little bit hesitated to work on this issue as I have started on replacing 
Guava APIs several months ago and it won't make sense to have such two big 
migrations on one plate.


> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-04-08 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17317384#comment-17317384
 ] 

Ahmed Hussein commented on HADOOP-16206:


{quote}So you suggest we do a benchmark first to get a clear evidence of 
performance gains in log4j2, and then start our work? Or you mean we could just 
start the work as you think we'd better fully move to log4j2?{quote}

The reason I brought up performance evaluation is that, quite often, there is 
a bias when presenting numbers for a new implementation.
As long as there are different resources acknowledging the [performance gains 
of log4j2|https://logging.apache.org/log4j/log4j-2.2/performance.html], it 
should be fine to move forward. There is no need to re-invent the wheel.

I believe the garbage-free option is an interesting feature for reducing 
object allocation, which leads to fewer garbage collection events.
After log4j2 is in, we can use heap analysis to evaluate different 
configurations such as the garbage-free options.

I agree with Steve that it is good to know the performance with different log 
configurations. This can probably be done separately in a benchmark to get an 
approximate estimate of the tradeoffs.

I am a little bit hesitant to work on this issue, as I started replacing 
Guava APIs several months ago and it would not make sense to have two such 
big migrations on one plate.
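The allocation argument can be illustrated without log4j2 on the classpath. The sketch below (all names are mine, for illustration only) shows the eager-vs-deferred distinction that log4j2's parameterized and lambda-based logging methods exploit: a deferred message is never built when the level is disabled, so no throwaway strings are allocated.

```java
import java.util.function.Supplier;

public class DeferredMessageDemo {
    static boolean debugEnabled = false;
    static int formatted = 0;

    // Eager style: the message argument is evaluated even when debug is off.
    static void debugEager(String message) {
        if (debugEnabled) { System.out.println(message); }
    }

    // Deferred style: the supplier runs only if the level is enabled.
    // This is the idea behind log4j2's lambda-accepting logger methods.
    static void debugLazy(Supplier<String> message) {
        if (debugEnabled) { System.out.println(message.get()); }
    }

    static String expensive() {
        formatted++;  // count how many times a message was actually built
        return "state=" + System.nanoTime();
    }

    public static void main(String[] args) {
        debugEager(expensive());                    // formatting cost paid
        debugLazy(DeferredMessageDemo::expensive);  // skipped: debug is off
        assert formatted == 1;
    }
}
```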


> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-03-30 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17311504#comment-17311504
 ] 

Ahmed Hussein edited comment on HADOOP-16206 at 3/30/21, 12:59 PM:
---

Regarding the concern that downstream users could rely on the network classes 
in log4j: those classes can be removed from the jar file without affecting 
Hadoop. Security-wise, therefore, the effort to migrate is hard to justify on 
its own.

If there is clear evidence of performance gains in log4j2, that will be the 
real motivation to migrate. While I like the idea that the log4j bridge could 
reduce the work significantly, I believe it would be better to fully move to 
log4j2. I think the bridge may not last long, given that it is not clear how 
its performance would compare to a pure log4j2 implementation and how long it 
will be supported in the long run (i.e., future CVEs, support for new 
JDKs, etc.).



was (Author: ahussein):
Regarding the concerns that the downstream could use the network classes in 
log4j, those classes can be removed from the jar file without affecting Hadoop. 
Therefore, Security wise, the effort to migrate is not worthy.

If there is clear evidence of performance gains in log4j2, then this will be 
the real motivation to migrate. While I like the idea that the log4j bridge 
could reduce the work significantly, I believe that it would be better to fully 
move to log4j2. I just think that the bridge may not last long given that it is 
not clear how its performance would compare to pure log4j2 implementation and 
how long support we get on the long run (i.e., future CVEs, using new 
JDKs..etc).


> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-03-30 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17311504#comment-17311504
 ] 

Ahmed Hussein commented on HADOOP-16206:


Regarding the concern that downstream users could rely on the network classes 
in log4j: those classes can be removed from the jar file without affecting 
Hadoop. Security-wise, therefore, the effort to migrate is hard to justify on 
its own.

If there is clear evidence of performance gains in log4j2, that will be the 
real motivation to migrate. While I like the idea that the log4j bridge could 
reduce the work significantly, I believe it would be better to fully move to 
log4j2. I just think the bridge may not last long, given that it is not clear 
how its performance would compare to a pure log4j2 implementation and how 
long it will be supported in the long run (i.e., future CVEs, support for new 
JDKs, etc.).


> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin

2021-03-26 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309532#comment-17309532
 ] 

Ahmed Hussein commented on HADOOP-16870:


Oh I see.
 I applied the diff on branch-2.10: [^HADOOP-16870.branch-2.10.001.patch]. 
Hopefully it will help.

> Use spotbugs-maven-plugin instead of findbugs-maven-plugin
> --
>
> Key: HADOOP-16870
> URL: https://issues.apache.org/jira/browse/HADOOP-16870
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
> Attachments: HADOOP-16870.branch-2.10.001.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> findbugs-maven-plugin is no longer maintained. Use spotbugs-maven-plugin 
> instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin

2021-03-26 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-16870:
---
Attachment: HADOOP-16870.branch-2.10.001.patch

> Use spotbugs-maven-plugin instead of findbugs-maven-plugin
> --
>
> Key: HADOOP-16870
> URL: https://issues.apache.org/jira/browse/HADOOP-16870
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
> Attachments: HADOOP-16870.branch-2.10.001.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> findbugs-maven-plugin is no longer maintained. Use spotbugs-maven-plugin 
> instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin

2021-03-25 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309148#comment-17309148
 ] 

Ahmed Hussein commented on HADOOP-16870:


Since spotbugs requires JDK 1.8.0+ to run, do you think we should remove 
{{--findbugs-strict-precheck}} for branch-2.10?

> Use spotbugs-maven-plugin instead of findbugs-maven-plugin
> --
>
> Key: HADOOP-16870
> URL: https://issues.apache.org/jira/browse/HADOOP-16870
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.2.3
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> findbugs-maven-plugin is no longer maintained. Use spotbugs-maven-plugin 
> instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin

2021-03-25 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309077#comment-17309077
 ] 

Ahmed Hussein commented on HADOOP-16870:


I believe that the HADOOP-16870 change is missing from branch-2.10.
It causes an error 
{code:bash}
ERROR: Unprocessed flag(s): --spotbugs-strict-precheck
{code}

We need to replace {{findbugs}} with {{spotbugs}}


> Use spotbugs-maven-plugin instead of findbugs-maven-plugin
> --
>
> Key: HADOOP-16870
> URL: https://issues.apache.org/jira/browse/HADOOP-16870
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.2.3
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> findbugs-maven-plugin is no longer maintained. Use spotbugs-maven-plugin 
> instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin

2021-03-25 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17309077#comment-17309077
 ] 

Ahmed Hussein edited comment on HADOOP-16870 at 3/26/21, 1:54 AM:
--

I believe that the HADOOP-16870 change is missing from branch-2.10.
It causes an error 
{code:bash}
ERROR: Unprocessed flag(s): --spotbugs-strict-precheck
{code}

We need to replace {{findbugs}} with {{spotbugs}} in branch-2.10/jenkinsfile



was (Author: ahussein):
I believe that HADOOP-16870 is missing in branch-2.10.
It causes an error 
{code:bash}
ERROR: Unprocessed flag(s): --spotbugs-strict-precheck
{code}

We need to replace {{findbugs}} with {{spotbugs}}


> Use spotbugs-maven-plugin instead of findbugs-maven-plugin
> --
>
> Key: HADOOP-16870
> URL: https://issues.apache.org/jira/browse/HADOOP-16870
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.2.3
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> findbugs-maven-plugin is no longer maintained. Use spotbugs-maven-plugin 
> instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17467) netgroup-user is not added to Groups.cache

2021-03-25 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17308914#comment-17308914
 ] 

Ahmed Hussein commented on HADOOP-17467:


I see from the discussion in HADOOP-17079 that there are disagreements.
Therefore, I suggest that we fix the broken providers and file a separate 
Jira to address reverting the unnecessary provider changes.
WDYT [~daryn], [~Jim_Brennan], [~xyao] ?

> netgroup-user is not added to Groups.cache
> --
>
> Key: HADOOP-17467
> URL: https://issues.apache.org/jira/browse/HADOOP-17467
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.4.0
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> After the optimization in HADOOP-17079, both 
> {{JniBasedUnixGroupsNetgroupMapping}} and 
> {{ShellBasedUnixGroupsNetgroupMapping}} do not implement {{getGroupSet}}.
> As a result, {{Groups.load()}} loads the cache by calling {{fetchGroupSet}}, 
> which yields to the superclass {{JniBasedUnixGroupsMapping}} / 
> {{ShellBasedUnixGroupsMapping}}.
> In other words, the groups mapping will never fetch from {{NetgroupCache}}.
> This alters the behavior of the implementation. Is there a reason to bypass 
> loading? CC: [~xyao]
> This jira is to add missing implementation {{getGroupSet}} to 
> {{JniBasedUnixGroupsNetgroupMapping}} and 
> {{ShellBasedUnixGroupsNetgroupMapping}} .
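The shape of the missing override can be sketched without the Hadoop classes. Everything below is illustrative: the class, the `NetgroupLookup` stand-in, and the method body are mine, following only the Jira text's description that the set-returning lookup should consult {{NetgroupCache}} instead of falling through to the superclass.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class NetgroupMappingSketch {

    /** Stand-in for the netgroup cache the Jira says is being bypassed. */
    interface NetgroupLookup {
        List<String> getNetgroups(String user);
    }

    private final NetgroupLookup cache;

    public NetgroupMappingSketch(NetgroupLookup cache) {
        this.cache = cache;
    }

    // The missing piece: build the group set from the netgroup cache
    // rather than inheriting the plain unix-groups lookup.
    public Set<String> getGroupsSet(String user) {
        return new LinkedHashSet<>(cache.getNetgroups(user));
    }

    public static void main(String[] args) {
        NetgroupMappingSketch m =
            new NetgroupMappingSketch(u -> Arrays.asList("ng1", "ng1", "ng2"));
        // Duplicates from the cache collapse into the returned set.
        assert m.getGroupsSet("alice").size() == 2;
    }
}
```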



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17602) Upgrade JUnit to 4.13.1

2021-03-25 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17308747#comment-17308747
 ] 

Ahmed Hussein commented on HADOOP-17602:


Thank you [~aajisaka]!

> Upgrade JUnit to 4.13.1
> ---
>
> Key: HADOOP-17602
> URL: https://issues.apache.org/jira/browse/HADOOP-17602
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security, test
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Fix For: 3.3.1, 3.4.0, 3.1.5, 2.10.2, 3.2.3
>
> Attachments: HADOOP-17602.001.patch, 
> HADOOP-17602.branch-2.10.001.patch
>
>
> A vulnerability was reported in JUnit 4.7-4.13.
> The JUnit4 test rule [TemporaryFolder on unix-like systems does not limit 
> access to created 
> files|https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17602) Upgrade JUnit to 4.13.1

2021-03-24 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17602:
---
Attachment: HADOOP-17602.branch-2.10.001.patch

> Upgrade JUnit to 4.13.1
> ---
>
> Key: HADOOP-17602
> URL: https://issues.apache.org/jira/browse/HADOOP-17602
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security, test
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17602.001.patch, 
> HADOOP-17602.branch-2.10.001.patch
>
>
> A vulnerability was reported in JUnit 4.7-4.13.
> The JUnit4 test rule [TemporaryFolder on unix-like systems does not limit 
> access to created 
> files|https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108

2021-03-24 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17603 started by Ahmed Hussein.
--
> Upgrade tomcat-embed-core to 7.0.108
> 
>
> Key: HADOOP-17603
> URL: https://issues.apache.org/jira/browse/HADOOP-17603
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17603.branch-2.10.001.patch
>
>
> [CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical 
> severity.
> Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494]
> 7.0.0-7.0.107 are all affected by the vulnerability.






[jira] [Updated] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108

2021-03-24 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17603:
---
Attachment: HADOOP-17603.branch-2.10.001.patch
Status: Patch Available  (was: In Progress)

> Upgrade tomcat-embed-core to 7.0.108
> 
>
> Key: HADOOP-17603
> URL: https://issues.apache.org/jira/browse/HADOOP-17603
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17603.branch-2.10.001.patch
>
>
> [CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical 
> severity.
> Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494]
> 7.0.0-7.0.107 are all affected by the vulnerability.






[jira] [Updated] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108

2021-03-24 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17603:
---
Component/s: security

> Upgrade tomcat-embed-core to 7.0.108
> 
>
> Key: HADOOP-17603
> URL: https://issues.apache.org/jira/browse/HADOOP-17603
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, security
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> [CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical 
> severity.
> Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494]
> 7.0.0-7.0.107 are all affected by the vulnerability.






[jira] [Updated] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108

2021-03-24 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17603:
---
Priority: Major  (was: Minor)

> Upgrade tomcat-embed-core to 7.0.108
> 
>
> Key: HADOOP-17603
> URL: https://issues.apache.org/jira/browse/HADOOP-17603
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> [CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical 
> severity.
> Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494]
> 7.0.0-7.0.107 are all affected by the vulnerability.






[jira] [Updated] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108

2021-03-24 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17603:
---
Component/s: build

> Upgrade tomcat-embed-core to 7.0.108
> 
>
> Key: HADOOP-17603
> URL: https://issues.apache.org/jira/browse/HADOOP-17603
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Minor
>
> [CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical 
> severity.
> Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494]
> 7.0.0-7.0.107 are all affected by the vulnerability.






[jira] [Updated] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108

2021-03-24 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17603:
---
Priority: Minor  (was: Major)

> Upgrade tomcat-embed-core to 7.0.108
> 
>
> Key: HADOOP-17603
> URL: https://issues.apache.org/jira/browse/HADOOP-17603
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Minor
>
> [CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical 
> severity.
> Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494]
> 7.0.0-7.0.107 are all affected by the vulnerability.






[jira] [Updated] (HADOOP-17602) Upgrade JUnit to 4.13.1

2021-03-24 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17602:
---
Attachment: HADOOP-17602.001.patch
Status: Patch Available  (was: In Progress)

> Upgrade JUnit to 4.13.1
> ---
>
> Key: HADOOP-17602
> URL: https://issues.apache.org/jira/browse/HADOOP-17602
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17602.001.patch
>
>
> A vulnerability was reported in JUnit 4.7-4.13.
> The JUnit4 test rule [TemporaryFolder on unix-like systems does not limit 
> access to created 
> files|https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp]






[jira] [Work started] (HADOOP-17602) Upgrade JUnit to 4.13.1

2021-03-24 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17602 started by Ahmed Hussein.
--
> Upgrade JUnit to 4.13.1
> ---
>
> Key: HADOOP-17602
> URL: https://issues.apache.org/jira/browse/HADOOP-17602
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security, test
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> A vulnerability was reported in JUnit 4.7-4.13.
> The JUnit4 test rule [TemporaryFolder on unix-like systems does not limit 
> access to created 
> files|https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp]






[jira] [Created] (HADOOP-17603) Upgrade tomcat-embed-core to 7.0.108

2021-03-24 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17603:
--

 Summary: Upgrade tomcat-embed-core to 7.0.108
 Key: HADOOP-17603
 URL: https://issues.apache.org/jira/browse/HADOOP-17603
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


[CVE-2021-25329|https://nvd.nist.gov/vuln/detail/CVE-2021-25329] critical 
severity.
Impact: [CVE-2020-9494|https://nvd.nist.gov/vuln/detail/CVE-2020-9494]
7.0.0-7.0.107 are all affected by the vulnerability.







[jira] [Updated] (HADOOP-17601) Upgrade Jackson databind in branch-2.10 to 2.9.10.7

2021-03-24 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17601:
---
  Attachment: HADOOP-17601.branch-2.10.001.patch
Target Version/s:   (was: 2.10.1)
  Status: Patch Available  (was: In Progress)

> Upgrade Jackson databind in branch-2.10 to 2.9.10.7
> ---
>
> Key: HADOOP-17601
> URL: https://issues.apache.org/jira/browse/HADOOP-17601
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17601.branch-2.10.001.patch
>
>
> Two known vulnerabilities found in Jackson-databind
> [CVE-2021-20190|https://nvd.nist.gov/vuln/detail/CVE-2021-20190] high severity
> [CVE-2020-25649|https://nvd.nist.gov/vuln/detail/CVE-2020-25649] high severity






[jira] [Updated] (HADOOP-17601) Upgrade Jackson databind in branch-2.10 to 2.9.10.7

2021-03-24 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17601:
---
Summary: Upgrade Jackson databind in branch-2.10 to 2.9.10.7  (was: Upgrade 
Jackson databind in branch-2.10 to 2.9.10.6)

> Upgrade Jackson databind in branch-2.10 to 2.9.10.7
> ---
>
> Key: HADOOP-17601
> URL: https://issues.apache.org/jira/browse/HADOOP-17601
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Two known vulnerabilities found in Jackson-databind
> [CVE-2021-20190|https://nvd.nist.gov/vuln/detail/CVE-2021-20190] high severity
> [CVE-2020-25649|https://nvd.nist.gov/vuln/detail/CVE-2020-25649] high severity






[jira] [Work started] (HADOOP-17601) Upgrade Jackson databind in branch-2.10 to 2.9.10.6

2021-03-24 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17601 started by Ahmed Hussein.
--
> Upgrade Jackson databind in branch-2.10 to 2.9.10.6
> ---
>
> Key: HADOOP-17601
> URL: https://issues.apache.org/jira/browse/HADOOP-17601
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Two known vulnerabilities found in Jackson-databind
> [CVE-2021-20190|https://nvd.nist.gov/vuln/detail/CVE-2021-20190] high severity
> [CVE-2020-25649|https://nvd.nist.gov/vuln/detail/CVE-2020-25649] high severity






[jira] [Created] (HADOOP-17602) Upgrade JUnit to 4.13.1

2021-03-24 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17602:
--

 Summary: Upgrade JUnit to 4.13.1
 Key: HADOOP-17602
 URL: https://issues.apache.org/jira/browse/HADOOP-17602
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


A vulnerability was reported in JUnit 4.7-4.13.
The JUnit4 test rule [TemporaryFolder on unix-like systems does not limit 
access to created 
files|https://github.com/junit-team/junit4/security/advisories/GHSA-269g-pwp5-87pp]
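For background on the class of fix involved: on typical POSIX systems, the NIO {{java.nio.file.Files.createTempDirectory}} API creates directories with owner-only permissions, which is the kind of restriction the advisory calls for. The sketch below is illustrative only and is not part of the patch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

public class TempDirPerms {
    public static void main(String[] args) throws IOException {
        // Files.createTempDirectory creates the directory with permissions
        // restricted to the owner on POSIX file systems -- the kind of
        // access limitation the JUnit TemporaryFolder advisory is about.
        Path dir = Files.createTempDirectory("demo");
        Set<PosixFilePermission> perms = Files.getPosixFilePermissions(dir);
        System.out.println(perms);
        Files.delete(dir);
    }
}
```

On POSIX systems the printed set should contain only OWNER_* permissions.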






[jira] [Created] (HADOOP-17601) Upgrade Jackson databind in branch-2.10 to 2.9.10.6

2021-03-24 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17601:
--

 Summary: Upgrade Jackson databind in branch-2.10 to 2.9.10.6
 Key: HADOOP-17601
 URL: https://issues.apache.org/jira/browse/HADOOP-17601
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


Two known vulnerabilities found in Jackson-databind

[CVE-2021-20190|https://nvd.nist.gov/vuln/detail/CVE-2021-20190] high severity
[CVE-2020-25649|https://nvd.nist.gov/vuln/detail/CVE-2020-25649] high severity






[jira] [Commented] (HADOOP-17557) skip-dir option is not processed by Yetus

2021-03-08 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17557?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17297556#comment-17297556
 ] 

Ahmed Hussein commented on HADOOP-17557:


It was a simple and straightforward fix: line 18 of 
{{dev-support/bin/test-patch.sh}} had the flag {{--skip-dir}} instead of 
{{--skip-dirs}}.
[~aajisaka], can you please take a look at the patch?
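For illustration, the rename is a one-word change; a hypothetical sketch of the substitution (operating on a stand-in line, not the real {{test-patch.sh}}):

```shell
#!/bin/sh
# Illustrative only: rename the unrecognized --skip-dir flag to the
# plural --skip-dirs form that the upgraded Yetus expects.
# A stand-in file is used here instead of dev-support/bin/test-patch.sh.
tmpfile=$(mktemp)
printf '%s\n' '--skip-dir=dev-support/docker' > "$tmpfile"
sed 's/--skip-dir=/--skip-dirs=/' "$tmpfile"
rm -f "$tmpfile"
```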

> skip-dir option is not processed by Yetus
> -
>
> Key: HADOOP-17557
> URL: https://issues.apache.org/jira/browse/HADOOP-17557
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, precommit, yetus
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17557.001.patch
>
>
> Running test patch locally does not work anymore after the Yetus upgrade
> {code:bash}
> dev-support/bin/test-patch --plugins="maven,checkstyle" --test-parallel=true 
> patch-file.patch
> {code}
> Error is 
> {code:bash}
> Testing  patch on trunk.
> ERROR: Unprocessed flag(s): --skip-dir
> environment {
> SOURCEDIR = 'src'
> // will also need to change notification section below
> PATCHDIR = 'out'
> DOCKERFILE = "${SOURCEDIR}/dev-support/docker/Dockerfile"
> YETUS='yetus'
> // Branch or tag name.  Yetus release tags are 'rel/X.Y.Z'
> YETUS_VERSION='rel/0.13.0'
> /skip-
> # URL for user-side presentation in reports and such 
> to our artifacts
>  _ _ __
> |  ___|_ _(_) |_   _ _ __ ___| |
> | |_ / _` | | | | | | '__/ _ \ |
> |  _| (_| | | | |_| | | |  __/_|
> |_|  \__,_|_|_|\__,_|_|  \___(_)
> | Vote |Subsystem |  Runtime   | Comment
> 
> |  -1  |   yetus  |   0m 05s   | Unprocessed flag(s): --skip-dir
> {code}
> It seems that the "{{--skip-dir}}" option was supported by Yetus releases 
> prior to 0.11.






[jira] [Updated] (HADOOP-17557) skip-dir option is not processed by Yetus

2021-03-08 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17557:
---
Attachment: HADOOP-17557.001.patch
Status: Patch Available  (was: In Progress)

> skip-dir option is not processed by Yetus
> -
>
> Key: HADOOP-17557
> URL: https://issues.apache.org/jira/browse/HADOOP-17557
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, precommit, yetus
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17557.001.patch
>
>
> Running test patch locally does not work anymore after the Yetus upgrade
> {code:bash}
> dev-support/bin/test-patch --plugins="maven,checkstyle" --test-parallel=true 
> patch-file.patch
> {code}
> Error is 
> {code:bash}
> Testing  patch on trunk.
> ERROR: Unprocessed flag(s): --skip-dir
> environment {
> SOURCEDIR = 'src'
> // will also need to change notification section below
> PATCHDIR = 'out'
> DOCKERFILE = "${SOURCEDIR}/dev-support/docker/Dockerfile"
> YETUS='yetus'
> // Branch or tag name.  Yetus release tags are 'rel/X.Y.Z'
> YETUS_VERSION='rel/0.13.0'
> /skip-
> # URL for user-side presentation in reports and such 
> to our artifacts
>  _ _ __
> |  ___|_ _(_) |_   _ _ __ ___| |
> | |_ / _` | | | | | | '__/ _ \ |
> |  _| (_| | | | |_| | | |  __/_|
> |_|  \__,_|_|_|\__,_|_|  \___(_)
> | Vote |Subsystem |  Runtime   | Comment
> 
> |  -1  |   yetus  |   0m 05s   | Unprocessed flag(s): --skip-dir
> {code}
> It seems that the "{{--skip-dir}}" option was supported by Yetus releases 
> prior to 0.11.






[jira] [Assigned] (HADOOP-17557) skip-dir option is not processed by Yetus

2021-03-08 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein reassigned HADOOP-17557:
--

Assignee: Ahmed Hussein

> skip-dir option is not processed by Yetus
> -
>
> Key: HADOOP-17557
> URL: https://issues.apache.org/jira/browse/HADOOP-17557
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, precommit, yetus
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Running test patch locally does not work anymore after the Yetus upgrade
> {code:bash}
> dev-support/bin/test-patch --plugins="maven,checkstyle" --test-parallel=true 
> patch-file.patch
> {code}
> Error is 
> {code:bash}
> Testing  patch on trunk.
> ERROR: Unprocessed flag(s): --skip-dir
> environment {
> SOURCEDIR = 'src'
> // will also need to change notification section below
> PATCHDIR = 'out'
> DOCKERFILE = "${SOURCEDIR}/dev-support/docker/Dockerfile"
> YETUS='yetus'
> // Branch or tag name.  Yetus release tags are 'rel/X.Y.Z'
> YETUS_VERSION='rel/0.13.0'
> /skip-
> # URL for user-side presentation in reports and such 
> to our artifacts
>  _ _ __
> |  ___|_ _(_) |_   _ _ __ ___| |
> | |_ / _` | | | | | | '__/ _ \ |
> |  _| (_| | | | |_| | | |  __/_|
> |_|  \__,_|_|_|\__,_|_|  \___(_)
> | Vote |Subsystem |  Runtime   | Comment
> 
> |  -1  |   yetus  |   0m 05s   | Unprocessed flag(s): --skip-dir
> {code}
> It seems that the "{{--skip-dir}}" option supported Yetus release prior to 
> 0.11.






[jira] [Work started] (HADOOP-17557) skip-dir option is not processed by Yetus

2021-03-08 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17557 started by Ahmed Hussein.
--
> skip-dir option is not processed by Yetus
> -
>
> Key: HADOOP-17557
> URL: https://issues.apache.org/jira/browse/HADOOP-17557
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, precommit, yetus
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> Running test patch locally does not work anymore after the Yetus upgrade
> {code:bash}
> dev-support/bin/test-patch --plugins="maven,checkstyle" --test-parallel=true 
> patch-file.patch
> {code}
> Error is 
> {code:bash}
> Testing  patch on trunk.
> ERROR: Unprocessed flag(s): --skip-dir
> environment {
> SOURCEDIR = 'src'
> // will also need to change notification section below
> PATCHDIR = 'out'
> DOCKERFILE = "${SOURCEDIR}/dev-support/docker/Dockerfile"
> YETUS='yetus'
> // Branch or tag name.  Yetus release tags are 'rel/X.Y.Z'
> YETUS_VERSION='rel/0.13.0'
> /skip-
> # URL for user-side presentation in reports and such 
> to our artifacts
>  _ _ __
> |  ___|_ _(_) |_   _ _ __ ___| |
> | |_ / _` | | | | | | '__/ _ \ |
> |  _| (_| | | | |_| | | |  __/_|
> |_|  \__,_|_|_|\__,_|_|  \___(_)
> | Vote |Subsystem |  Runtime   | Comment
> 
> |  -1  |   yetus  |   0m 05s   | Unprocessed flag(s): --skip-dir
> {code}
> It seems that the "{{--skip-dir}}" option supported Yetus release prior to 
> 0.11.






[jira] [Updated] (HADOOP-17557) skip-dirs option is not processed by Yetus

2021-03-08 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17557:
---
Description: 
Running test patch locally does not work anymore after the Yetus upgrade


{code:bash}
dev-support/bin/test-patch --plugins="maven,checkstyle" --test-parallel=true 
patch-file.patch
{code}

Error is 

{code:bash}
Testing  patch on trunk.
ERROR: Unprocessed flag(s): --skip-dir

environment {
SOURCEDIR = 'src'
// will also need to change notification section below
PATCHDIR = 'out'
DOCKERFILE = "${SOURCEDIR}/dev-support/docker/Dockerfile"
YETUS='yetus'
// Branch or tag name.  Yetus release tags are 'rel/X.Y.Z'
YETUS_VERSION='rel/0.13.0'
/skip-
# URL for user-side presentation in reports and such to 
our artifacts

 _ _ __
|  ___|_ _(_) |_   _ _ __ ___| |
| |_ / _` | | | | | | '__/ _ \ |
|  _| (_| | | | |_| | | |  __/_|
|_|  \__,_|_|_|\__,_|_|  \___(_)



| Vote |Subsystem |  Runtime   | Comment

|  -1  |   yetus  |   0m 05s   | Unprocessed flag(s): --skip-dir
{code}

It seems that the "{{--skip-dir}}" option was supported by Yetus releases prior to 0.11.

  was:
Running test patch locally does not work anymore after the Yetus upgrade


{code:bash}
dev-support/bin/test-patch --plugins="maven,checkstyle" --test-parallel=true 
patch-file.patch
{code}

Error is 

{code:bash}
Testing  patch on trunk.
ERROR: Unprocessed flag(s): --skip-dir

environment {
SOURCEDIR = 'src'
// will also need to change notification section below
PATCHDIR = 'out'
DOCKERFILE = "${SOURCEDIR}/dev-support/docker/Dockerfile"
YETUS='yetus'
// Branch or tag name.  Yetus release tags are 'rel/X.Y.Z'
YETUS_VERSION='rel/0.13.0'
/skip-
# URL for user-side presentation in reports and such to 
our artifacts

 _ _ __
|  ___|_ _(_) |_   _ _ __ ___| |
| |_ / _` | | | | | | '__/ _ \ |
|  _| (_| | | | |_| | | |  __/_|
|_|  \__,_|_|_|\__,_|_|  \___(_)



| Vote |Subsystem |  Runtime   | Comment

|  -1  |   yetus  |   0m 05s   | Unprocessed flag(s): --skip-dir
{code}

It seems that the "{{--skip-dir}}" option was never supported by any Yetus 
release.


> skip-dirs option is not processed by Yetus
> --
>
> Key: HADOOP-17557
> URL: https://issues.apache.org/jira/browse/HADOOP-17557
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, precommit, yetus
>Reporter: Ahmed Hussein
>Priority: Major
>
> Running test patch locally does not work anymore after the Yetus upgrade
> {code:bash}
> dev-support/bin/test-patch --plugins="maven,checkstyle" --test-parallel=true 
> patch-file.patch
> {code}
> Error is 
> {code:bash}
> Testing  patch on trunk.
> ERROR: Unprocessed flag(s): --skip-dir
> environment {
> SOURCEDIR = 'src'
> // will also need to change notification section below
> PATCHDIR = 'out'
> DOCKERFILE = "${SOURCEDIR}/dev-support/docker/Dockerfile"
> YETUS='yetus'
> // Branch or tag name.  Yetus release tags are 'rel/X.Y.Z'
> YETUS_VERSION='rel/0.13.0'
> /skip-
> # URL for user-side presentation in reports and such 
> to our artifacts
>  _ _ __
> |  ___|_ _(_) |_   _ _ __ ___| |
> | |_ / _` | | | | | | '__/ _ \ |
> |  _| (_| | | | |_| | | |  __/_|
> |_|  \__,_|_|_|\__,_|_|  \___(_)
> | Vote |Subsystem |  Runtime   | Comment
> 
> |  -1  |   yetus  |   0m 05s   | Unprocessed flag(s): --skip-dir
> {code}
> It seems that the "{{--skip-dir}}" option was supported by Yetus releases 
> prior to 0.11.






[jira] [Updated] (HADOOP-17557) skip-dir option is not processed by Yetus

2021-03-08 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17557?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17557:
---
Summary: skip-dir option is not processed by Yetus  (was: skip-dirs option 
is not processed by Yetus)

> skip-dir option is not processed by Yetus
> -
>
> Key: HADOOP-17557
> URL: https://issues.apache.org/jira/browse/HADOOP-17557
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, precommit, yetus
>Reporter: Ahmed Hussein
>Priority: Major
>
> Running test patch locally does not work anymore after the Yetus upgrade
> {code:bash}
> dev-support/bin/test-patch --plugins="maven,checkstyle" --test-parallel=true 
> patch-file.patch
> {code}
> Error is 
> {code:bash}
> Testing  patch on trunk.
> ERROR: Unprocessed flag(s): --skip-dir
> environment {
> SOURCEDIR = 'src'
> // will also need to change notification section below
> PATCHDIR = 'out'
> DOCKERFILE = "${SOURCEDIR}/dev-support/docker/Dockerfile"
> YETUS='yetus'
> // Branch or tag name.  Yetus release tags are 'rel/X.Y.Z'
> YETUS_VERSION='rel/0.13.0'
> /skip-
> # URL for user-side presentation in reports and such 
> to our artifacts
>  _ _ __
> |  ___|_ _(_) |_   _ _ __ ___| |
> | |_ / _` | | | | | | '__/ _ \ |
> |  _| (_| | | | |_| | | |  __/_|
> |_|  \__,_|_|_|\__,_|_|  \___(_)
> | Vote |Subsystem |  Runtime   | Comment
> 
> |  -1  |   yetus  |   0m 05s   | Unprocessed flag(s): --skip-dir
> {code}
> It seems that the "{{--skip-dir}}" option was supported by Yetus releases 
> prior to 0.11.






[jira] [Created] (HADOOP-17557) skip-dirs option is not processed by Yetus

2021-03-01 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17557:
--

 Summary: skip-dirs option is not processed by Yetus
 Key: HADOOP-17557
 URL: https://issues.apache.org/jira/browse/HADOOP-17557
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, precommit, yetus
Reporter: Ahmed Hussein


Running test patch locally does not work anymore after the Yetus upgrade


{code:bash}
dev-support/bin/test-patch --plugins="maven,checkstyle" --test-parallel=true 
patch-file.patch
{code}

Error is 

{code:bash}
Testing  patch on trunk.
ERROR: Unprocessed flag(s): --skip-dir

environment {
SOURCEDIR = 'src'
// will also need to change notification section below
PATCHDIR = 'out'
DOCKERFILE = "${SOURCEDIR}/dev-support/docker/Dockerfile"
YETUS='yetus'
// Branch or tag name.  Yetus release tags are 'rel/X.Y.Z'
YETUS_VERSION='rel/0.13.0'
/skip-
# URL for user-side presentation in reports and such to 
our artifacts

 _ _ __
|  ___|_ _(_) |_   _ _ __ ___| |
| |_ / _` | | | | | | '__/ _ \ |
|  _| (_| | | | |_| | | |  __/_|
|_|  \__,_|_|_|\__,_|_|  \___(_)



| Vote |Subsystem |  Runtime   | Comment

|  -1  |   yetus  |   0m 05s   | Unprocessed flag(s): --skip-dir
{code}

It seems that the "{{--skip-dir}}" option was never supported by any Yetus 
release.






[jira] [Created] (HADOOP-17541) Yetus does not run qbt-trunk

2021-02-22 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17541:
--

 Summary: Yetus does not run qbt-trunk
 Key: HADOOP-17541
 URL: https://issues.apache.org/jira/browse/HADOOP-17541
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin, build, yetus
Reporter: Ahmed Hussein


On Feb20th, qbt-reports started to generate empty reports

{code:bash}
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/424/
ERROR: File 'out/email-report.txt' does not exist
{code}

On Jenkins, the job fails with the following error:
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/425/console
{code:bash}
ERROR: 
/home/jenkins/jenkins-home/workspace/hadoop-qbt-trunk-java8-linux-x86_64//dev-support/bin/hadoop.sh
 does not exist.
Build step 'Execute shell' marked build as failure
Archiving artifacts
[Fast Archiver] No prior successful build to compare, so performing full copy 
of artifacts
Recording test results
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
{code}

[~aajisaka], I think this was caused by HADOOP-16748. I noticed that the PR 
for HADOOP-16748 stopped showing any reports, but for some reason I 
overlooked that while reviewing.






[jira] [Commented] (HADOOP-17109) Replace Guava base64Url and base64 with Java8+ base64

2021-02-15 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17284838#comment-17284838
 ] 

Ahmed Hussein commented on HADOOP-17109:


After revisiting this Jira, I do not think the {{org.apache.commons.}} Base64 
should be replaced.

PR [#2703|https://github.com/apache/hadoop/pull/2703] is a straightforward 
change to prevent importing Guava's base64 in future commits.

The Hadoop source code relies on {{org.apache.commons.}} for Base64.
The PR adds {{com.google.common.io.BaseEncoding}} to the list of illegal 
classes in order to prevent using the Guava import in future commits.
 * The PR only touches the checkstyle configuration.
 * There are no occurrences of {{com.google.common.io.BaseEncoding}} in the 
code.
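For context, the Java 8+ replacement that this sub-task family targets is {{java.util.Base64}}; a minimal sketch of URL-safe, unpadded encoding (the behaviour of commons-codec's {{encodeBase64URLSafeString}}), using arbitrary sample bytes:

```java
import java.util.Base64;

public class Base64UrlSketch {
    public static void main(String[] args) {
        byte[] iv = {(byte) 0xfb, 0x01, 0x02};  // arbitrary sample bytes
        // URL-safe alphabet ('-' and '_'), no trailing '=' padding --
        // matching commons-codec's encodeBase64URLSafeString behaviour.
        String encoded = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(iv);
        System.out.println(encoded);  // -wEC
    }
}
```

Decoding uses {{Base64.getUrlDecoder()}}, which accepts unpadded input.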

 

> Replace Guava base64Url and base64 with Java8+ base64
> -
>
> Key: HADOOP-17109
> URL: https://issues.apache.org/jira/browse/HADOOP-17109
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> One important thing to note here, as pointed out by [~jeagles] in [his 
> comment on the parent 
> task|https://issues.apache.org/jira/browse/HADOOP-17098?focusedCommentId=17147935=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17147935]
> {quote}One note to be careful about is that base64 translation is not a 
> standard, so the two implementations could produce different results. This 
> might matter in the case of serialization, persistence, or client server 
> different versions.{quote}
> *Base64Url:*
> {code:java}
> Targets
> Occurrences of 'base64Url' in project with mask '*.java'
> Found Occurrences  (6 usages found)
> org.apache.hadoop.mapreduce  (3 usages found)
> CryptoUtils.java  (3 usages found)
> wrapIfNecessary(Configuration, FSDataOutputStream, boolean)  (1 
> usage found)
> 138 + Base64.encodeBase64URLSafeString(iv) + "]");
> wrapIfNecessary(Configuration, InputStream, long)  (1 usage found)
> 183 + Base64.encodeBase64URLSafeString(iv) + "]");
> wrapIfNecessary(Configuration, FSDataInputStream)  (1 usage found)
> 218 + Base64.encodeBase64URLSafeString(iv) + "]");
> org.apache.hadoop.util  (2 usages found)
> KMSUtil.java  (2 usages found)
> toJSON(KeyVersion)  (1 usage found)
> 104 Base64.encodeBase64URLSafeString(
> toJSON(EncryptedKeyVersion)  (1 usage found)
> 117 
> .encodeBase64URLSafeString(encryptedKeyVersion.getEncryptedKeyIv()));
> org.apache.hadoop.yarn.server.resourcemanager.webapp  (1 usage found)
> TestRMWebServicesAppsModification.java  (1 usage found)
> testAppSubmit(String, String)  (1 usage found)
> 837 .put("test", 
> Base64.encodeBase64URLSafeString("value12".getBytes("UTF8")));
> {code}
> *Base64:*
> {code:java}
> Targets
> Occurrences of 'base64;' in project with mask '*.java'
> Found Occurrences  (51 usages found)
> org.apache.hadoop.crypto.key.kms  (1 usage found)
> KMSClientProvider.java  (1 usage found)
> 20 import org.apache.commons.codec.binary.Base64;
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMS.java  (1 usage found)
> 22 import org.apache.commons.codec.binary.Base64;
> org.apache.hadoop.fs  (2 usages found)
> XAttrCodec.java  (2 usages found)
> 23 import org.apache.commons.codec.binary.Base64;
> 56 BASE64;
> org.apache.hadoop.fs.azure  (3 usages found)
> AzureBlobStorageTestAccount.java  (1 usage found)
> 23 import com.microsoft.azure.storage.core.Base64;
> BlockBlobAppendStream.java  (1 usage found)
> 50 import org.apache.commons.codec.binary.Base64;
> ITestBlobDataValidation.java  (1 usage found)
> 50 import com.microsoft.azure.storage.core.Base64;
> org.apache.hadoop.fs.azurebfs  (2 usages found)
> AzureBlobFileSystemStore.java  (1 usage found)
> 99 import org.apache.hadoop.fs.azurebfs.utils.Base64;
> TestAbfsConfigurationFieldsValidation.java  (1 usage found)
> 34 import org.apache.hadoop.fs.azurebfs.utils.Base64;
> org.apache.hadoop.fs.azurebfs.diagnostics  (2 usages found)
> Base64StringConfigurationBasicValidator.java  (1 usage found)
> 26 import org.apache.hadoop.fs.azurebfs.utils.Base64;
> TestConfigurationValidators.java  (1 usage found)
> 25 import org.apache.hadoop.fs.azurebfs.utils.Base64;
> org.apache.hadoop.fs.azurebfs.extensions  (2 usages found)
> 

[jira] [Comment Edited] (HADOOP-16810) Increase entropy to improve cryptographic randomness on precommit Linux VMs

2021-02-15 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17284037#comment-17284037
 ] 

Ahmed Hussein edited comment on HADOOP-16810 at 2/15/21, 3:47 PM:
--

[~aajisaka] I remember you made some changes to Yetus/Hadoop in the past, so I 
thought I would get your feedback on the changes in the PR.


In [my comment on 
MAPREDUCE-7079|https://issues.apache.org/jira/browse/MAPREDUCE-7079?focusedCommentId=17013234=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17013234]
{quote}This test case has been failing forever.
 - When it times out, MRAppMaster and some YarnChild processes remain running in 
the background. Therefore, the JVM running the tests fails due to OOM. No one 
notices that this unit test case has failed because the QA reports the unit 
tests that failed, but not the ones that timed out.
- It works on Mac OS X, but never works on Linux running in VirtualBox. It 
only works on the latter by disabling 
MRJobConfig.MR_ENCRYPTED_INTERMEDIATE_DATA.{quote}

In this PR:

- {{DOCKER_EXTRAARGS}} is added to {{hadoop.sh}} to pass the random-device mount
- -the version 0.10.0 is not on the [release 
page|https://yetus.apache.org/downloads/]. So, this upgrades Yetus to the 
released version 0.13.0.-
- the mount parameter is added to {{start-build-env.sh}}

Resources:
* [Yetus Advanced Precommit - 
important-variables|https://yetus.apache.org/documentation/0.11.1/precommit-advanced/#important-variables]
* [DOCKER_EXTRAARGS usage in Yetus 
code|https://github.com/apache/yetus/search?q=DOCKER_EXTRAARGS]

We can try the new changes anyway as we are still dealing with the entropy 
problem.
CC: [~ebadger] [~ste...@apache.org]


was (Author: ahussein):
[~aajisaka] I remember you made some changes to Yetus/Hadoop in the past, so I 
thought I would get your feedback on the changes in the PR.


In [my comment on 
MAPREDUCE-7079|https://issues.apache.org/jira/browse/MAPREDUCE-7079?focusedCommentId=17013234=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17013234]
{quote}This test case has been failing forever.
 - When it times out, MRAppMaster and some YarnChild processes remain running in 
the background. Therefore, the JVM running the tests fails due to OOM. No one 
notices that this unit test case has failed because the QA reports the unit 
tests that failed, but not the ones that timed out.
- It works on Mac OS X, but never works on Linux running in VirtualBox. It 
only works on the latter by disabling 
MRJobConfig.MR_ENCRYPTED_INTERMEDIATE_DATA.{quote}

In this PR:

- {{DOCKER_EXTRAARGS}} is added to {{hadoop.sh}} to pass the random-device mount
- the version 0.10.0 is not on the [release 
page|https://yetus.apache.org/downloads/]. So, this upgrades Yetus to the 
released version 0.13.0.
- the mount parameter is added to {{start-build-env.sh}}

Resources:
* [Yetus Advanced Precommit - 
important-variables|https://yetus.apache.org/documentation/0.11.1/precommit-advanced/#important-variables]
* [DOCKER_EXTRAARGS usage in Yetus 
code|https://github.com/apache/yetus/search?q=DOCKER_EXTRAARGS]

We can try the new changes anyway as we are still dealing with the entropy 
problem.
CC: [~ebadger] [~ste...@apache.org]

> Increase entropy to improve cryptographic randomness on precommit Linux VMs
> ---
>
> Key: HADOOP-16810
> URL: https://issues.apache.org/jira/browse/HADOOP-16810
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I was investigating a JUnit test (MAPREDUCE-7079 
> :TestMRIntermediateDataEncryption is failing in precommit builds) that was 
> consistently hanging on Linux VMs and failing Mapreduce pre-builds.
> I found that the test slows down or hangs indefinitely whenever Java reads 
> the random file.
> I explored two different ways to get that test case to work properly on my 
> local Linux VM running rel7:
> # Install "haveged" and "rng-tools" on the virtual machine running Rel7. 
> Then, start the rngd service: {{sudo service rngd start}}. This will fix the 
> problem for all the components on the image including java, native, and any 
> other component.
> # Change java configuration to load urandom
> {code:bash}
> sudo vim $JAVA_HOME/jre/lib/security/java.security
> ## Change the line “securerandom.source=file:/dev/random” to read: 
> securerandom.source=file:/dev/./urandom
> {code}
> The first solution is better because this will fix the problem for everything 
> that requires SSL/TLS or other services that depend upon encryption.
> Since the precommit build runs on Docker, then it would be best to mount 
> 

[jira] [Assigned] (HADOOP-16810) Increase entropy to improve cryptographic randomness on precommit Linux VMs

2021-02-12 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein reassigned HADOOP-16810:
--

Assignee: Ahmed Hussein

> Increase entropy to improve cryptographic randomness on precommit Linux VMs
> ---
>
> Key: HADOOP-16810
> URL: https://issues.apache.org/jira/browse/HADOOP-16810
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I was investigating a JUnit test (MAPREDUCE-7079 
> :TestMRIntermediateDataEncryption is failing in precommit builds) that was 
> consistently hanging on Linux VMs and failing Mapreduce pre-builds.
> I found that the test slows down or hangs indefinitely whenever Java reads 
> the random file.
> I explored two different ways to get that test case to work properly on my 
> local Linux VM running rel7:
> # Install "haveged" and "rng-tools" on the virtual machine running Rel7. 
> Then, start the rngd service: {{sudo service rngd start}}. This will fix the 
> problem for all the components on the image including java, native, and any 
> other component.
> # Change java configuration to load urandom
> {code:bash}
> sudo vim $JAVA_HOME/jre/lib/security/java.security
> ## Change the line “securerandom.source=file:/dev/random” to read: 
> securerandom.source=file:/dev/./urandom
> {code}
> The first solution is better because this will fix the problem for everything 
> that requires SSL/TLS or other services that depend upon encryption.
> Since the precommit build runs on Docker, then it would be best to mount 
> {{/dev/urandom}} from the host as {{/dev/random}} into the container:
> {code:java}
> docker run -v /dev/urandom:/dev/random
> {code}
> For Yetus, we need to add the mount to the {{DOCKER_EXTRAARGS}} as follows:
> {code:java}
> DOCKER_EXTRAARGS+=("-v" "/dev/urandom:/dev/random")
> {code}
>  ...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16810) Increase entropy to improve cryptographic randomness on precommit Linux VMs

2021-02-12 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17284037#comment-17284037
 ] 

Ahmed Hussein commented on HADOOP-16810:


[~aajisaka] I remember you made some changes to Yetus/Hadoop in the past, so I 
thought I would get your feedback on the changes in the PR.


In [my comment on 
MAPREDUCE-7079|https://issues.apache.org/jira/browse/MAPREDUCE-7079?focusedCommentId=17013234=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17013234]
{quote}This test case has been failing forever.
 - When it times out, MRAppMaster and some YarnChild processes remain running in 
the background. Therefore, the JVM running the tests fails due to OOM. No one 
notices that this unit test case has failed because the QA reports the unit 
tests that failed, but not the ones that timed out.
- It works on Mac OS X, but never works on Linux running in VirtualBox. It 
only works on the latter by disabling 
MRJobConfig.MR_ENCRYPTED_INTERMEDIATE_DATA.{quote}

In this PR:

- {{DOCKER_EXTRAARGS}} is added to {{hadoop.sh}} to pass the random-device mount
- the version 0.10.0 is not on the [release 
page|https://yetus.apache.org/downloads/]. So, this upgrades Yetus to the 
released version 0.13.0.
- the mount parameter is added to {{start-build-env.sh}}

Resources:
* [Yetus Advanced Precommit - 
important-variables|https://yetus.apache.org/documentation/0.11.1/precommit-advanced/#important-variables]
* [DOCKER_EXTRAARGS usage in Yetus 
code|https://github.com/apache/yetus/search?q=DOCKER_EXTRAARGS]

We can try the new changes anyway as we are still dealing with the entropy 
problem.
CC: [~ebadger] [~ste...@apache.org]
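
As an illustrative aside (not part of the PR itself), on Linux the kernel's 
entropy estimate can be read directly; a persistently low value is what makes 
reads from {{/dev/random}} block:

```shell
# Sketch: report the kernel's available-entropy estimate (Linux-specific).
# Falls back gracefully where the proc file does not exist.
entropy_file=/proc/sys/kernel/random/entropy_avail
if [ -r "$entropy_file" ]; then
  entropy=$(cat "$entropy_file")
else
  entropy=unavailable
fi
echo "entropy_avail=$entropy"
```

Mounting {{/dev/urandom}} over {{/dev/random}} in the container, as described 
in the issue below, sidesteps this blocking entirely.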

> Increase entropy to improve cryptographic randomness on precommit Linux VMs
> ---
>
> Key: HADOOP-16810
> URL: https://issues.apache.org/jira/browse/HADOOP-16810
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I was investigating a JUnit test (MAPREDUCE-7079 
> :TestMRIntermediateDataEncryption is failing in precommit builds) that was 
> consistently hanging on Linux VMs and failing Mapreduce pre-builds.
> I found that the test slows down or hangs indefinitely whenever Java reads 
> the random file.
> I explored two different ways to get that test case to work properly on my 
> local Linux VM running rel7:
> # Install "haveged" and "rng-tools" on the virtual machine running Rel7. 
> Then, start the rngd service: {{sudo service rngd start}}. This will fix the 
> problem for all the components on the image including java, native, and any 
> other component.
> # Change java configuration to load urandom
> {code:bash}
> sudo vim $JAVA_HOME/jre/lib/security/java.security
> ## Change the line “securerandom.source=file:/dev/random” to read: 
> securerandom.source=file:/dev/./urandom
> {code}
> The first solution is better because this will fix the problem for everything 
> that requires SSL/TLS or other services that depend upon encryption.
> Since the precommit build runs on Docker, then it would be best to mount 
> {{/dev/urandom}} from the host as {{/dev/random}} into the container:
> {code:java}
> docker run -v /dev/urandom:/dev/random
> {code}
> For Yetus, we need to add the mount to the {{DOCKER_EXTRAARGS}} as follows:
> {code:java}
> DOCKER_EXTRAARGS+=("-v" "/dev/urandom:/dev/random")
> {code}
>  ...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-16810) Increase entropy to improve cryptographic randomness on precommit Linux VMs

2021-02-12 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-16810 started by Ahmed Hussein.
--
> Increase entropy to improve cryptographic randomness on precommit Linux VMs
> ---
>
> Key: HADOOP-16810
> URL: https://issues.apache.org/jira/browse/HADOOP-16810
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I was investigating a JUnit test (MAPREDUCE-7079 
> :TestMRIntermediateDataEncryption is failing in precommit builds) that was 
> consistently hanging on Linux VMs and failing Mapreduce pre-builds.
> I found that the test slows down or hangs indefinitely whenever Java reads 
> the random file.
> I explored two different ways to get that test case to work properly on my 
> local Linux VM running rel7:
> # Install "haveged" and "rng-tools" on the virtual machine running Rel7. 
> Then, start the rngd service: {{sudo service rngd start}}. This will fix the 
> problem for all the components on the image including java, native, and any 
> other component.
> # Change java configuration to load urandom
> {code:bash}
> sudo vim $JAVA_HOME/jre/lib/security/java.security
> ## Change the line “securerandom.source=file:/dev/random” to read: 
> securerandom.source=file:/dev/./urandom
> {code}
> The first solution is better because this will fix the problem for everything 
> that requires SSL/TLS or other services that depend upon encryption.
> Since the precommit build runs on Docker, then it would be best to mount 
> {{/dev/urandom}} from the host as {{/dev/random}} into the container:
> {code:java}
> docker run -v /dev/urandom:/dev/random
> {code}
> For Yetus, we need to add the mount to the {{DOCKER_EXTRAARGS}} as follows:
> {code:java}
> DOCKER_EXTRAARGS+=("-v" "/dev/urandom:/dev/random")
> {code}
>  ...



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17513) Checkstyle IllegalImport does not catch guava imports

2021-02-09 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17281836#comment-17281836
 ] 

Ahmed Hussein commented on HADOOP-17513:


[~aajisaka], by the way, I noticed that running the build automatically 
replaces guava imports with the third-party import.

This is probably the reason why the wrong import was not detected before 
YARN-10352 was merged.

I suggest that the automatic import replacement be removed from the build so 
that patches do not get changed without the developers' knowledge.

Personally, I would prefer that the build break locally rather than inject 
automatic code changes.

CC: [~Jim_Brennan] was also confused when his patch was modified during the 
build.

WDYT?
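
For context, a hypothetical sketch of the kind of checkstyle rule under 
discussion (property names follow checkstyle's {{IllegalImport}} module; the 
exact regex in the actual PR may differ):

```xml
<!-- Sketch only: ban guava imports via checkstyle's IllegalImport module.
     With regexp=true, illegalPkgs entries are treated as package prefixes;
     IllegalImportCheck appends "\.*" to each regex automatically. -->
<module name="IllegalImport">
  <property name="regexp" value="true"/>
  <property name="illegalPkgs" value="^com\.google\.common"/>
</module>
```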

> Checkstyle IllegalImport does not catch guava imports
> -
>
> Key: HADOOP-17513
> URL: https://issues.apache.org/jira/browse/HADOOP-17513
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Although YARN-10352 introduced a {{guava}} iterator import, it was committed 
> to trunk without checkstyle errors.
> According to [IllegalImportCheck#setIllegalPkgs 
> |https://github.com/checkstyle/checkstyle/blob/master/src/main/java/com/puppycrawl/tools/checkstyle/checks/imports/IllegalImportCheck.java],
>  the package regex should be the prefix of the package. The code 
> automatically appends {{\.*}} to the regex.
> CC: [~aajisaka]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17513) Checkstyle IllegalImport does not catch guava imports

2021-02-03 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17278433#comment-17278433
 ] 

Ahmed Hussein commented on HADOOP-17513:


Hey [~aajisaka], I think the checkstyle regex was not working correctly. 
Hopefully, this PR will help catch guava imports before they are merged in the 
future.

> Checkstyle IllegalImport does not catch guava imports
> -
>
> Key: HADOOP-17513
> URL: https://issues.apache.org/jira/browse/HADOOP-17513
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Although YARN-10352 introduced a {{guava}} iterator import, it was committed 
> to trunk without checkstyle errors.
> According to [IllegalImportCheck#setIllegalPkgs 
> |https://github.com/checkstyle/checkstyle/blob/master/src/main/java/com/puppycrawl/tools/checkstyle/checks/imports/IllegalImportCheck.java],
>  the package regex should be the prefix of the package. The code 
> automatically appends {{\.*}} to the regex.
> CC: [~aajisaka]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


