[jira] [Assigned] (HADOOP-17910) [JDK 17] TestNetUtils fails

2021-09-15 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17910:
--

Assignee: Viraj Jasani

> [JDK 17] TestNetUtils fails
> ---
>
> Key: HADOOP-17910
> URL: https://issues.apache.org/jira/browse/HADOOP-17910
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>
> TestNetUtils#testInvalidAddress fails.
> {noformat}
> [INFO] Running org.apache.hadoop.net.TestNetUtils
> [ERROR] Tests run: 48, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 4.469 s <<< FAILURE! - in org.apache.hadoop.net.TestNetUtils
> [ERROR] testInvalidAddress(org.apache.hadoop.net.TestNetUtils)  Time elapsed: 
> 0.386 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'invalid-test-host:0' but got unexpected exception: 
> java.net.UnknownHostException: invalid-test-host/<unresolved>:0
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:592)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:551)
>   at 
> org.apache.hadoop.net.TestNetUtils.testInvalidAddress(TestNetUtils.java:109)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
>   at 
> org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:396)
>   at 
> org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:373)
>   at 
> org.apache.hadoop.net.TestNetUtils.testInvalidAddress(TestNetUtils.java:116)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> 
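A likely cause (an inference here, not stated in the ticket) is that newer JDKs changed `InetSocketAddress#toString` for unresolved addresses, so the exception message no longer contains the exact substring the test asserts. A minimal standalone sketch:

```java
import java.net.InetSocketAddress;

public class UnresolvedAddressDemo {
    public static void main(String[] args) {
        InetSocketAddress addr =
                InetSocketAddress.createUnresolved("invalid-test-host", 0);
        // JDK 8 prints  "invalid-test-host:0";
        // JDK 14+ prints "invalid-test-host/<unresolved>:0",
        // so an exact-substring assertion on "invalid-test-host:0" fails.
        System.out.println(addr);
    }
}
```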

[jira] [Commented] (HADOOP-17910) [JDK 17] TestNetUtils fails

2021-09-15 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17415382#comment-17415382
 ] 

Akira Ajisaka commented on HADOOP-17910:


This issue is similar to HDFS-15685.

> [JDK 17] TestNetUtils fails
> ---
>
> Key: HADOOP-17910
> URL: https://issues.apache.org/jira/browse/HADOOP-17910
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> TestNetUtils#testInvalidAddress fails.
> {noformat}
> [INFO] Running org.apache.hadoop.net.TestNetUtils
> [ERROR] Tests run: 48, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 4.469 s <<< FAILURE! - in org.apache.hadoop.net.TestNetUtils
> [ERROR] testInvalidAddress(org.apache.hadoop.net.TestNetUtils)  Time elapsed: 
> 0.386 s  <<< FAILURE!
> java.lang.AssertionError: 
>  Expected to find 'invalid-test-host:0' but got unexpected exception: 
> java.net.UnknownHostException: invalid-test-host/<unresolved>:0
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:592)
>   at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:551)
>   at 
> org.apache.hadoop.net.TestNetUtils.testInvalidAddress(TestNetUtils.java:109)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>   at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
>   at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
>   at 
> org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:396)
>   at 
> org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:373)
>   at 
> org.apache.hadoop.net.TestNetUtils.testInvalidAddress(TestNetUtils.java:116)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:568)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> 

[jira] [Created] (HADOOP-17910) [JDK 17] TestNetUtils fails

2021-09-15 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17910:
--

 Summary: [JDK 17] TestNetUtils fails
 Key: HADOOP-17910
 URL: https://issues.apache.org/jira/browse/HADOOP-17910
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Akira Ajisaka


TestNetUtils#testInvalidAddress fails.
{noformat}
[INFO] Running org.apache.hadoop.net.TestNetUtils
[ERROR] Tests run: 48, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 4.469 
s <<< FAILURE! - in org.apache.hadoop.net.TestNetUtils
[ERROR] testInvalidAddress(org.apache.hadoop.net.TestNetUtils)  Time elapsed: 
0.386 s  <<< FAILURE!
java.lang.AssertionError: 
 Expected to find 'invalid-test-host:0' but got unexpected exception: 
java.net.UnknownHostException: invalid-test-host/<unresolved>:0
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:592)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:551)
at 
org.apache.hadoop.net.TestNetUtils.testInvalidAddress(TestNetUtils.java:109)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)

at 
org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:396)
at 
org.apache.hadoop.test.GenericTestUtils.assertExceptionContains(GenericTestUtils.java:373)
at 
org.apache.hadoop.net.TestNetUtils.testInvalidAddress(TestNetUtils.java:116)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at 

[jira] [Updated] (HADOOP-17177) Java 17 support

2021-09-15 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17177:
---
Description: Umbrella JIRA to support Java 17 LTS.  (was: Umbrella JIRA to 
support the latest Java version.

Latest Java version:
* 14: From March 2020 to September 2020
* 15: From September 2020 to March 2021
* 16: From March 2021 to September 2021
* 17: From September 2021)
Summary: Java 17 support  (was: Java upstream support)

Java 17 has been released. Changed the title and description.

> Java 17 support
> ---
>
> Key: HADOOP-17177
> URL: https://issues.apache.org/jira/browse/HADOOP-17177
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> Umbrella JIRA to support Java 17 LTS.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17893) Improve PrometheusSink for Namenode and ResourceManager Metrics

2021-09-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17893:
---
Fix Version/s: (was: 3.4.0)

Removed Fix Version/s; it is set by the committer when a patch is committed.

> Improve PrometheusSink for Namenode and ResourceManager Metrics
> ---
>
> Key: HADOOP-17893
> URL: https://issues.apache.org/jira/browse/HADOOP-17893
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.4.0
>Reporter: Max  Xie
>Assignee: Max  Xie
>Priority: Minor
> Attachments: HADOOP-17893.01.patch
>
>
> HADOOP-16398 added an exporter for Hadoop metrics to Prometheus, but some 
> metrics cannot be exported correctly. For example:
> 1. Queue metrics for ResourceManager
> {code:java}
> queue_metrics_max_capacity{queue="root.queue1",context="yarn",hostname="rm_host1"}
>  1
> // queue2's metric can't be exported 
> queue_metrics_max_capacity{queue="root.queue2",context="yarn",hostname="rm_host1"}
>  2
> {code}
> Only one queue's metric is ever exported, because 
> PrometheusMetricsSink$metricLines caches only one metric per name, even 
> when the metrics have different tags.
>  
> 2. RPC metrics for Namenode
> The Namenode may expose RPC metrics on multiple ports (e.g. service-rpc). 
> For the same reason as issue 1, some RPC metrics are lost when 
> PrometheusSink is used.
> {code:java}
> rpc_rpc_queue_time300s90th_percentile_latency{port="9000",servername="ClientNamenodeProtocol",context="rpc",hostname="nnhost"}
>  0
> // rpc port=9005 metric can't be exported 
> rpc_rpc_queue_time300s90th_percentile_latency{port="9005",servername="ClientNamenodeProtocol",context="rpc",hostname="nnhost"}
>  0
> {code}
> 3. TopMetrics for Namenode
> org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics is a special 
> metric; it is essentially a Summary metric type. Its record names vary by 
> user and op, so these metrics accumulate indefinitely in 
> PrometheusMetricsSink$metricLines, risking a memory leak. It needs special 
> handling.
> {code:java}
> // invalid topmetric export
> # TYPE 
> nn_top_user_op_counts_window_ms_150_op_safemode_get_user_hadoop_client_ip_test_com_count
>  counter
> nn_top_user_op_counts_window_ms_150_op_safemode_get_user_hadoop_client_ip_test_com_count{context="dfs",hostname="nn_host",op="safemode_get",user="hadoop/client...@test.com"}
>  10
> // it should be 
> # TYPE nn_top_user_op_counts_window_ms_150_count counter
> nn_top_user_op_counts_window_ms_150_count{context="dfs",hostname="nn_host",op="safemode_get",user="hadoop/client...@test.com"}
>  10{code}
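The caching collision described above can be shown with a small, self-contained sketch (the map and key shapes here are illustrative, not Hadoop's actual PrometheusMetricsSink internals):

```java
import java.util.HashMap;
import java.util.Map;

public class MetricKeyDemo {
    public static void main(String[] args) {
        // Keying the cache by metric name alone: the second put overwrites
        // the first, so queue1's sample is silently dropped.
        Map<String, String> byNameOnly = new HashMap<>();
        byNameOnly.put("queue_metrics_max_capacity", "{queue=\"root.queue1\"} 1");
        byNameOnly.put("queue_metrics_max_capacity", "{queue=\"root.queue2\"} 2");
        System.out.println(byNameOnly.size()); // 1 -> only the last queue survives

        // Including the tag set in the key keeps both queues' samples.
        Map<String, String> byNameAndTags = new HashMap<>();
        byNameAndTags.put("queue_metrics_max_capacity{queue=\"root.queue1\"}", "1");
        byNameAndTags.put("queue_metrics_max_capacity{queue=\"root.queue2\"}", "2");
        System.out.println(byNameAndTags.size()); // 2 -> both exported
    }
}
```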






[jira] [Commented] (HADOOP-17893) Improve PrometheusSink for Namenode and ResourceManager Metrics

2021-09-09 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17412428#comment-17412428
 ] 

Akira Ajisaka commented on HADOOP-17893:


Hi [~max2049], there are some conflicts after HADOOP-17804. Would you rebase 
the patch?

> Improve PrometheusSink for Namenode and ResourceManager Metrics
> ---
>
> Key: HADOOP-17893
> URL: https://issues.apache.org/jira/browse/HADOOP-17893
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.4.0
>Reporter: Max  Xie
>Assignee: Max  Xie
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HADOOP-17893.01.patch
>
>
> HADOOP-16398 added an exporter for Hadoop metrics to Prometheus, but some 
> metrics cannot be exported correctly. For example:
> 1. Queue metrics for ResourceManager
> {code:java}
> queue_metrics_max_capacity{queue="root.queue1",context="yarn",hostname="rm_host1"}
>  1
> // queue2's metric can't be exported 
> queue_metrics_max_capacity{queue="root.queue2",context="yarn",hostname="rm_host1"}
>  2
> {code}
> Only one queue's metric is ever exported, because 
> PrometheusMetricsSink$metricLines caches only one metric per name, even 
> when the metrics have different tags.
>  
> 2. RPC metrics for Namenode
> The Namenode may expose RPC metrics on multiple ports (e.g. service-rpc). 
> For the same reason as issue 1, some RPC metrics are lost when 
> PrometheusSink is used.
> {code:java}
> rpc_rpc_queue_time300s90th_percentile_latency{port="9000",servername="ClientNamenodeProtocol",context="rpc",hostname="nnhost"}
>  0
> // rpc port=9005 metric can't be exported 
> rpc_rpc_queue_time300s90th_percentile_latency{port="9005",servername="ClientNamenodeProtocol",context="rpc",hostname="nnhost"}
>  0
> {code}
> 3. TopMetrics for Namenode
> org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics is a special 
> metric; it is essentially a Summary metric type. Its record names vary by 
> user and op, so these metrics accumulate indefinitely in 
> PrometheusMetricsSink$metricLines, risking a memory leak. It needs special 
> handling.
> {code:java}
> // invalid topmetric export
> # TYPE 
> nn_top_user_op_counts_window_ms_150_op_safemode_get_user_hadoop_client_ip_test_com_count
>  counter
> nn_top_user_op_counts_window_ms_150_op_safemode_get_user_hadoop_client_ip_test_com_count{context="dfs",hostname="nn_host",op="safemode_get",user="hadoop/client...@test.com"}
>  10
> // it should be 
> # TYPE nn_top_user_op_counts_window_ms_150_count counter
> nn_top_user_op_counts_window_ms_150_count{context="dfs",hostname="nn_host",op="safemode_get",user="hadoop/client...@test.com"}
>  10{code}






[jira] [Assigned] (HADOOP-17893) Improve PrometheusSink for Namenode and ResourceManager Metrics

2021-09-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17893:
--

Assignee: Max  Xie

> Improve PrometheusSink for Namenode and ResourceManager Metrics
> ---
>
> Key: HADOOP-17893
> URL: https://issues.apache.org/jira/browse/HADOOP-17893
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.4.0
>Reporter: Max  Xie
>Assignee: Max  Xie
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HADOOP-17893.01.patch
>
>
> HADOOP-16398 added an exporter for Hadoop metrics to Prometheus, but some 
> metrics cannot be exported correctly. For example:
> 1. Queue metrics for ResourceManager
> {code:java}
> queue_metrics_max_capacity{queue="root.queue1",context="yarn",hostname="rm_host1"}
>  1
> // queue2's metric can't be exported 
> queue_metrics_max_capacity{queue="root.queue2",context="yarn",hostname="rm_host1"}
>  2
> {code}
> Only one queue's metric is ever exported, because 
> PrometheusMetricsSink$metricLines caches only one metric per name, even 
> when the metrics have different tags.
>  
> 2. RPC metrics for Namenode
> The Namenode may expose RPC metrics on multiple ports (e.g. service-rpc). 
> For the same reason as issue 1, some RPC metrics are lost when 
> PrometheusSink is used.
> {code:java}
> rpc_rpc_queue_time300s90th_percentile_latency{port="9000",servername="ClientNamenodeProtocol",context="rpc",hostname="nnhost"}
>  0
> // rpc port=9005 metric can't be exported 
> rpc_rpc_queue_time300s90th_percentile_latency{port="9005",servername="ClientNamenodeProtocol",context="rpc",hostname="nnhost"}
>  0
> {code}
> 3. TopMetrics for Namenode
> org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics is a special 
> metric; it is essentially a Summary metric type. Its record names vary by 
> user and op, so these metrics accumulate indefinitely in 
> PrometheusMetricsSink$metricLines, risking a memory leak. It needs special 
> handling.
> {code:java}
> // invalid topmetric export
> # TYPE 
> nn_top_user_op_counts_window_ms_150_op_safemode_get_user_hadoop_client_ip_test_com_count
>  counter
> nn_top_user_op_counts_window_ms_150_op_safemode_get_user_hadoop_client_ip_test_com_count{context="dfs",hostname="nn_host",op="safemode_get",user="hadoop/client...@test.com"}
>  10
> // it should be 
> # TYPE nn_top_user_op_counts_window_ms_150_count counter
> nn_top_user_op_counts_window_ms_150_count{context="dfs",hostname="nn_host",op="safemode_get",user="hadoop/client...@test.com"}
>  10{code}






[jira] [Assigned] (HADOOP-17804) Prometheus metrics only include the last set of labels

2021-09-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17804:
--

Assignee: Adam Binford

> Prometheus metrics only include the last set of labels
> --
>
> Key: HADOOP-17804
> URL: https://issues.apache.org/jira/browse/HADOOP-17804
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.1
>Reporter: Adam Binford
>Assignee: Adam Binford
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> A prometheus endpoint was added in 
> https://issues.apache.org/jira/browse/HADOOP-16398, but the logic that puts 
> them into a map based on the "key" incorrectly hides any metrics with the 
> same key but different labels. The relevant code is here: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/PrometheusMetricsSink.java#L55]
> The labels/tags need to be taken into account, as different tags mean 
> different metrics. For example, I came across this while trying to scrape 
> metrics for all the queues in our scheduler. Only the last queue is included 
> because all the metrics have the same "key" but a different "queue" label/tag.






[jira] [Resolved] (HADOOP-17804) Prometheus metrics only include the last set of labels

2021-09-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17804.

Fix Version/s: 3.3.2
   3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk and branch-3.3. Thanks [~Kimahriman] for the contribution!

> Prometheus metrics only include the last set of labels
> --
>
> Key: HADOOP-17804
> URL: https://issues.apache.org/jira/browse/HADOOP-17804
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.1
>Reporter: Adam Binford
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> A prometheus endpoint was added in 
> https://issues.apache.org/jira/browse/HADOOP-16398, but the logic that puts 
> them into a map based on the "key" incorrectly hides any metrics with the 
> same key but different labels. The relevant code is here: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/PrometheusMetricsSink.java#L55]
> The labels/tags need to be taken into account, as different tags mean 
> different metrics. For example, I came across this while trying to scrape 
> metrics for all the queues in our scheduler. Only the last queue is included 
> because all the metrics have the same "key" but a different "queue" label/tag.






[jira] [Commented] (HADOOP-17893) Improve PrometheusSink for Namenode and ResourceManager Metrics

2021-09-05 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17410265#comment-17410265
 ] 

Akira Ajisaka commented on HADOOP-17893:


Thank you [~Max Xie] for the report and the patch. It seems the first issue is 
covered in HADOOP-17804. Would you check the issue and the PR?

> Improve PrometheusSink for Namenode and ResourceManager Metrics
> ---
>
> Key: HADOOP-17893
> URL: https://issues.apache.org/jira/browse/HADOOP-17893
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.4.0
>Reporter: Max  Xie
>Priority: Minor
> Fix For: 3.4.0
>
> Attachments: HADOOP-17893.01.patch
>
>
> HADOOP-16398 added an exporter for Hadoop metrics to Prometheus, but some 
> metrics cannot be exported correctly. For example:
> 1. Queue metrics for ResourceManager
> {code:java}
> queue_metrics_max_capacity{queue="root.queue1",context="yarn",hostname="rm_host1"}
>  1
> // queue2's metric can't be exported 
> queue_metrics_max_capacity{queue="root.queue2",context="yarn",hostname="rm_host1"}
>  2
> {code}
> Only one queue's metric is ever exported, because 
> PrometheusMetricsSink$metricLines caches only one metric per name, even 
> when the metrics have different tags.
>  
> 2. RPC metrics for Namenode
> The Namenode may expose RPC metrics on multiple ports (e.g. service-rpc). 
> For the same reason as issue 1, some RPC metrics are lost when 
> PrometheusSink is used.
> {code:java}
> rpc_rpc_queue_time300s90th_percentile_latency{port="9000",servername="ClientNamenodeProtocol",context="rpc",hostname="nnhost"}
>  0
> // rpc port=9005 metric can't be exported 
> rpc_rpc_queue_time300s90th_percentile_latency{port="9005",servername="ClientNamenodeProtocol",context="rpc",hostname="nnhost"}
>  0
> {code}
> 3. TopMetrics for Namenode
> org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics is a special 
> metric; it is essentially a Summary metric type. Its record names vary by 
> user and op, so these metrics accumulate indefinitely in 
> PrometheusMetricsSink$metricLines, risking a memory leak. It needs special 
> handling.
> {code:java}
> // invalid topmetric export
> # TYPE 
> nn_top_user_op_counts_window_ms_150_op_safemode_get_user_hadoop_client_ip_test_com_count
>  counter
> nn_top_user_op_counts_window_ms_150_op_safemode_get_user_hadoop_client_ip_test_com_count{context="dfs",hostname="nn_host",op="safemode_get",user="hadoop/client...@test.com"}
>  10
> // it should be 
> # TYPE nn_top_user_op_counts_window_ms_150_count counter
> nn_top_user_op_counts_window_ms_150_count{context="dfs",hostname="nn_host",op="safemode_get",user="hadoop/client...@test.com"}
>  10{code}






[jira] [Updated] (HADOOP-17874) ExceptionsHandler to add terse/suppressed Exceptions in thread-safe manner

2021-09-02 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17874:
---
Fix Version/s: 3.2.4
   3.3.2
   3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.3, and branch-3.2. Thank you [~vjasani]!

> ExceptionsHandler to add terse/suppressed Exceptions in thread-safe manner
> --
>
> Key: HADOOP-17874
> URL: https://issues.apache.org/jira/browse/HADOOP-17874
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2, 3.2.4
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Even though we have explicit comments stating that we have a thread-safe 
> replacement for terseExceptions and suppressedExceptions, in reality we 
> don't. Since we can't guarantee that Exceptions are added only 
> non-concurrently from any Server implementation, we should make this 
> thread-safe.
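A minimal sketch of one way to get that thread safety, assuming a concurrent set is acceptable (class and method names here are illustrative, not the committed Hadoop patch):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch: back the terse-exception set with a concurrent set so
// that additions from multiple Server threads are safe without extra locking.
public class ExceptionsHandlerSketch {

  private final Set<String> terseExceptions = ConcurrentHashMap.newKeySet();

  // Safe to call concurrently from any Server implementation.
  public void addTerseLoggingExceptions(Class<?>... exceptionClasses) {
    for (Class<?> c : exceptionClasses) {
      terseExceptions.add(c.getName());
    }
  }

  public boolean isTerseLog(Class<?> t) {
    return terseExceptions.contains(t.getName());
  }
}
```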






[jira] [Updated] (HADOOP-17886) Upgrade ant to 1.10.11

2021-09-02 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17886:
---
Fix Version/s: (was: 3.2.4)
   3.2.3

Cherry-picked to branch-3.2.3. Thank you [~ahussein] and [~jeagles]

> Upgrade ant to 1.10.11
> --
>
> Key: HADOOP-17886
> URL: https://issues.apache.org/jira/browse/HADOOP-17886
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0, 3.2.2, 3.4.0, 2.10.2
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Vulnerabilities reported in org.apache.ant:ant:1.10.9
>  * [CVE-2021-36374|https://nvd.nist.gov/vuln/detail/CVE-2021-36374] moderate 
> severity
>  * [CVE-2021-36373|https://nvd.nist.gov/vuln/detail/CVE-2021-36373] moderate 
> severity
> suggested: org.apache.ant:ant ~> 1.10.11






[jira] [Updated] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.13 and httpcore to 4.4.13

2021-09-01 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17223:
---
   Fix Version/s: 2.10.2
Target Version/s: 3.3.1, 3.2.2, 3.4.0  (was: 3.2.2, 3.3.1, 3.4.0)

Backported to branch-2.10

> update  org.apache.httpcomponents:httpclient to 4.5.13 and httpcore to 4.4.13
> -
>
> Key: HADOOP-17223
> URL: https://issues.apache.org/jira/browse/HADOOP-17223
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: Pranav Bheda
>Assignee: Pranav Bheda
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.2.2, 3.3.1, 3.4.0, 2.10.2
>
> Attachments: HADOOP-17223.001.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Update the dependencies
>  * org.apache.httpcomponents:httpclient from 4.5.6 to 4.5.12
>  * org.apache.httpcomponents:httpcore from 4.4.10 to 4.4.13






[jira] [Commented] (HADOOP-15584) move httpcomponents version in pom.xml

2021-09-01 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17408033#comment-17408033
 ] 

Akira Ajisaka commented on HADOOP-15584:


Backported to branch-2.10.

> move httpcomponents version in pom.xml
> --
>
> Key: HADOOP-15584
> URL: https://issues.apache.org/jira/browse/HADOOP-15584
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.3
>Reporter: Brandon Scheller
>Assignee: Brandon Scheller
>Priority: Minor
> Fix For: 3.2.0, 2.10.2
>
> Attachments: 397.patch
>
>
> Move the httpcomponents versions to their own config.
> Moving the httpcomponents versions in pom.xml into their own variables 
> allows them to be overridden easily.
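Concretely, the change amounts to hoisting the versions into Maven properties along these lines (property names and version numbers are assumptions for illustration):

```xml
<!-- Illustrative fragment; property names and versions are assumptions -->
<properties>
  <httpclient.version>4.5.13</httpclient.version>
  <httpcore.version>4.4.13</httpcore.version>
</properties>

<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>${httpclient.version}</version>
</dependency>
```

A downstream build can then override a version with `-Dhttpclient.version=...` instead of patching every dependency entry.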






[jira] [Updated] (HADOOP-15584) move httpcomponents version in pom.xml

2021-09-01 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15584:
---
Fix Version/s: 2.10.2

> move httpcomponents version in pom.xml
> --
>
> Key: HADOOP-15584
> URL: https://issues.apache.org/jira/browse/HADOOP-15584
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.3
>Reporter: Brandon Scheller
>Assignee: Brandon Scheller
>Priority: Minor
> Fix For: 3.2.0, 2.10.2
>
> Attachments: 397.patch
>
>
> Move the httpcomponents versions to their own config.
> Moving the httpcomponents versions in pom.xml into their own variables 
> allows them to be overridden easily.






[jira] [Updated] (HADOOP-17544) Mark KeyProvider as Stable

2021-08-29 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17544:
---
Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk.

> Mark KeyProvider as Stable
> --
>
> Key: HADOOP-17544
> URL: https://issues.apache.org/jira/browse/HADOOP-17544
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Now, o.a.h.crypto.key.KeyProvider.java is marked Public and Unstable. I think 
> the class is very stable, and it should be annotated as Stable.






[jira] [Commented] (HADOOP-14693) Upgrade JUnit from 4 to 5

2021-08-25 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17404429#comment-17404429
 ] 

Akira Ajisaka commented on HADOOP-14693:


If we are going to create a JUnit 5 based KerberosSecurityTestCase, I think it 
should be placed in the test directory.

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> JUnit 4 does not support Java 9. We need to upgrade this.






[jira] [Commented] (HADOOP-14693) Upgrade JUnit from 4 to 5

2021-08-25 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17404427#comment-17404427
 ] 

Akira Ajisaka commented on HADOOP-14693:


Hi [~smeng], I appreciate that you are working on upgrading JUnit in 
[https://github.com/apache/hadoop/pull/3316]

I tried the plugin in some other modules and found a problem:
 * KerberosSecurityTestCase in the hadoop-minikdc module has a JUnit 4 dependency 
in compile (not test) scope, and it is used from many modules. Therefore we would 
need to upgrade all the unit tests that extend KerberosSecurityTestCase at once. To 
mitigate this, we can create a JUnit 5 based KerberosSecurityTestCase so that we 
don't need to upgrade those unit tests at the same time. There are many other test 
utils, and most of them have the same problem.
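The coupling described above comes from a dependency declaration of roughly this shape in the hadoop-minikdc pom (illustrative fragment, not the exact pom):

```xml
<!-- Illustrative: junit in the default compile scope leaks JUnit 4 to every
     module that uses KerberosSecurityTestCase. -->
<dependency>
  <groupId>junit</groupId>
  <artifactId>junit</artifactId>
  <!-- no <scope>test</scope>, so consumers inherit it transitively -->
</dependency>
```

A parallel JUnit 5 based base class would let consuming modules migrate one at a time instead of in a single flag-day change.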

Hi [~ste...@apache.org], I'm +1 for gradually moving to AssertJ. The error 
messages in AssertJ are more verbose, and it has many useful methods.

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> JUnit 4 does not support Java 9. We need to upgrade this.






[jira] [Updated] (HADOOP-15939) Filter overlapping objenesis class in hadoop-client-minicluster

2021-08-25 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15939:
---
Fix Version/s: 3.2.3

Backported to branch-3.2 and branch-3.2.3.

> Filter overlapping objenesis class in hadoop-client-minicluster 
> 
>
> Key: HADOOP-15939
> URL: https://issues.apache.org/jira/browse/HADOOP-15939
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 3.3.0, 3.2.3
>
> Attachments: HADOOP-15939.001.patch
>
>
> As mentioned here and found with the latest Jenkins shadedclient run.
> Jenkins does not provide a detailed output file for the failure, though. But 
> it can be reproduced with the following cmd:
> {code:java}
> mvn verify -fae --batch-mode -am -pl 
> hadoop-client-modules/hadoop-client-check-invariants -pl 
> hadoop-client-modules/hadoop-client-check-test-invariants -pl 
> hadoop-client-modules/hadoop-client-integration-tests -Dtest=NoUnitTests 
> -Dmaven.javadoc.skip=true -Dcheckstyle.skip=true -Dfindbugs.skip=true
> {code}
> Error Message:
> {code:java}
> [WARNING] objenesis-1.0.jar, mockito-all-1.8.5.jar define 30 overlapping 
> classes: 
> [WARNING]   - org.objenesis.ObjenesisBase
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiator
> [WARNING]   - org.objenesis.ObjenesisHelper
> [WARNING]   - org.objenesis.instantiator.jrockit.JRockitLegacyInstantiator
> [WARNING]   - org.objenesis.instantiator.sun.SunReflectionFactoryInstantiator
> [WARNING]   - org.objenesis.instantiator.ObjectInstantiator
> [WARNING]   - org.objenesis.instantiator.gcj.GCJInstantiatorBase$DummyStream
> [WARNING]   - org.objenesis.instantiator.basic.ObjectStreamClassInstantiator
> [WARNING]   - org.objenesis.ObjenesisException
> [WARNING]   - org.objenesis.Objenesis
> [WARNING]   - 20 more...
> [WARNING] maven-shade-plugin has detected that some class files are
> [WARNING] present in two or more JARs. When this happens, only one
> [WARNING] single version of the class is copied to the uber jar.
> [WARNING] Usually this is not harmful and you can skip these warnings,
> [WARNING] otherwise try to manually exclude artifacts based on
> [WARNING] mvn dependency:tree -Ddetail=true and the above output.
> [WARNING] See [http://maven.apache.org/plugins/maven-shade-plugin/]
> [INFO] Replacing original artifact with shaded artifact.
> {code}
>  
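One common way to resolve this class of overlap warning is a maven-shade-plugin filter that drops the duplicated packages from one of the jars; a sketch under the assumption that objenesis-1.0.jar should stay the provider (the actual patch may differ):

```xml
<!-- Illustrative filter inside the shade plugin configuration: drop the
     org.objenesis classes bundled in mockito-all so only one copy remains. -->
<filter>
  <artifact>org.mockito:mockito-all</artifact>
  <excludes>
    <exclude>org/objenesis/**</exclude>
  </excludes>
</filter>
```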






[jira] [Resolved] (HADOOP-17821) Move Ozone to related projects section

2021-08-20 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17821.

Fix Version/s: asf-site
   Resolution: Fixed

> Move Ozone to related projects section
> --
>
> Key: HADOOP-17821
> URL: https://issues.apache.org/jira/browse/HADOOP-17821
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi-Sheng Lien
>Assignee: Yi-Sheng Lien
>Priority: Major
>  Labels: pull-request-available
> Fix For: asf-site
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Hi all, as Ozone was spun off into a TLP, it has its own web site.
> Now, in the Modules part of the Hadoop [website|https://hadoop.apache.org/], the link 
> to the Ozone website points to an old page.
> IMHO there are two ways to fix it:
> 1. Update the link to the new page.
> 2. Move Ozone to the Related projects part of the Hadoop website.
> Please feel free to give me some feedback, thanks






[jira] [Updated] (HADOOP-17850) Upgrade ZooKeeper to 3.4.14 in branch-3.2

2021-08-16 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17850:
---
Fix Version/s: 3.2.3
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to branch-3.2 and branch-3.2.3.

> Upgrade ZooKeeper to 3.4.14 in branch-3.2
> -
>
> Key: HADOOP-17850
> URL: https://issues.apache.org/jira/browse/HADOOP-17850
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.2
>Reporter: Akira Ajisaka
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.3
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Upgrade ZooKeeper to 3.4.14 to fix CVE-2019-0201 
> (https://zookeeper.apache.org/security.html). That way the ZooKeeper version 
> will be consistent with BigTop 3.0.0 (BIGTOP-3471).






[jira] [Commented] (HADOOP-17849) Exclude spotbugs-annotations from transitive dependencies on branch-3.2

2021-08-16 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17399676#comment-17399676
 ] 

Akira Ajisaka commented on HADOOP-17849:


Filed HADOOP-17850 to upgrade ZooKeeper version in branch-3.2.

> Exclude spotbugs-annotations from transitive dependencies on branch-3.2
> ---
>
> Key: HADOOP-17849
> URL: https://issues.apache.org/jira/browse/HADOOP-17849
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.2
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Building Hadoop in the dist profile with ZooKeeper 3.4.14 fails on 
> hadoop-client-check-test-invariants. Excluding 
> com.github.spotbugs:spotbugs-annotations from transitive dependencies should 
> fix this for users needing zookeeper-3.4.14. Since the dependency is 
> provided/optional on ZooKeeper 3.5.x, branch-3.3 and above are not affected.
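An exclusion of the kind described would look roughly like this in the consumer pom, assuming zookeeper is the artifact pulling the annotations in (illustrative fragment):

```xml
<!-- Illustrative: keep zookeeper but drop its spotbugs-annotations -->
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <exclusions>
    <exclusion>
      <groupId>com.github.spotbugs</groupId>
      <artifactId>spotbugs-annotations</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```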






[jira] [Updated] (HADOOP-17850) Upgrade ZooKeeper to 3.4.14 in branch-3.2

2021-08-16 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17850:
---
 Target Version/s: 3.2.3
Affects Version/s: 3.2.2

> Upgrade ZooKeeper to 3.4.14 in branch-3.2
> -
>
> Key: HADOOP-17850
> URL: https://issues.apache.org/jira/browse/HADOOP-17850
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.2
>Reporter: Akira Ajisaka
>Priority: Major
>
> Upgrade ZooKeeper to 3.4.14 to fix CVE-2019-0201 
> (https://zookeeper.apache.org/security.html). That way the ZooKeeper version 
> will be consistent with BigTop 3.0.0 (BIGTOP-3471).






[jira] [Created] (HADOOP-17850) Upgrade ZooKeeper to 3.4.14 in branch-3.2

2021-08-16 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17850:
--

 Summary: Upgrade ZooKeeper to 3.4.14 in branch-3.2
 Key: HADOOP-17850
 URL: https://issues.apache.org/jira/browse/HADOOP-17850
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Akira Ajisaka


Upgrade ZooKeeper to 3.4.14 to fix CVE-2019-0201 
(https://zookeeper.apache.org/security.html). That way the ZooKeeper version 
will be consistent with BigTop 3.0.0 (BIGTOP-3471).






[jira] [Commented] (HADOOP-17849) Exclude spotbugs-annotations from transitive dependencies on branch-3.2

2021-08-16 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17399672#comment-17399672
 ] 

Akira Ajisaka commented on HADOOP-17849:


Now ZooKeeper 3.4.13 is used in branch-3.2. I suppose we need to upgrade the 
version to 3.4.14 due to CVE-2019-0201 
(https://zookeeper.apache.org/security.html).

> Exclude spotbugs-annotations from transitive dependencies on branch-3.2
> ---
>
> Key: HADOOP-17849
> URL: https://issues.apache.org/jira/browse/HADOOP-17849
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.2
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Building Hadoop in the dist profile with ZooKeeper 3.4.14 fails on 
> hadoop-client-check-test-invariants. Excluding 
> com.github.spotbugs:spotbugs-annotations from transitive dependencies should 
> fix this for users needing zookeeper-3.4.14. Since the dependency is 
> provided/optional on ZooKeeper 3.5.x, branch-3.3 and above are not affected.






[jira] [Updated] (HADOOP-17834) Bump aliyun-sdk-oss to 3.13.0

2021-08-15 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17834:
---
Fix Version/s: 3.3.2

Backported to branch-3.3.

> Bump aliyun-sdk-oss to 3.13.0
> -
>
> Key: HADOOP-17834
> URL: https://issues.apache.org/jira/browse/HADOOP-17834
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Bump aliyun-sdk-oss to 3.13.0 in order to remove transitive dependency on 
> jdom 1.1.
> Ref: 
> https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.






[jira] [Commented] (HADOOP-14693) Upgrade JUnit from 4 to 5

2021-08-14 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17399186#comment-17399186
 ] 

Akira Ajisaka commented on HADOOP-14693:


Thank you [~smeng]. I tried the plugin and it worked: 
https://github.com/apache/hadoop/pull/3304
Though the number of spaces per tab is not correct, I think we can save a 
lot of time with the plugin. Also, I think we can pass our checkstyle rules to 
the plugin to fix the whitespace issue.

> Upgrade JUnit from 4 to 5
> -
>
> Key: HADOOP-14693
> URL: https://issues.apache.org/jira/browse/HADOOP-14693
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> JUnit 4 does not support Java 9. We need to upgrade this.






[jira] [Updated] (HADOOP-17834) Bump aliyun-sdk-oss to 3.13.0

2021-08-14 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17834:
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged into trunk.

> Bump aliyun-sdk-oss to 3.13.0
> -
>
> Key: HADOOP-17834
> URL: https://issues.apache.org/jira/browse/HADOOP-17834
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Bump aliyun-sdk-oss to 3.13.0 in order to remove transitive dependency on 
> jdom 1.1.
> Ref: 
> https://issues.apache.org/jira/browse/HADOOP-17820?focusedCommentId=17390206=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17390206.






[jira] [Resolved] (HADOOP-17799) Improve the GitHub pull request template

2021-08-14 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17799.

Fix Version/s: 3.4.0
   Resolution: Fixed

Committed to trunk.

> Improve the GitHub pull request template
> 
>
> Key: HADOOP-17799
> URL: https://issues.apache.org/jira/browse/HADOOP-17799
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The current Hadoop pull request template can be improved.
> - Require some information (e.g. 
> https://github.com/apache/spark/blob/master/.github/PULL_REQUEST_TEMPLATE)
> - Checklists (e.g. 
> https://github.com/apache/nifi/blob/main/.github/PULL_REQUEST_TEMPLATE.md)
> - Move the current notice into a comment (i.e. surround it with <!-- -->)






[jira] [Resolved] (HADOOP-17844) Upgrade JSON smart to 2.4.7

2021-08-14 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17844.

Fix Version/s: 3.3.2
   3.2.3
   3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk, branch-3.3, branch-3.2, and branch-3.2.3.

> Upgrade JSON smart to 2.4.7
> ---
>
> Key: HADOOP-17844
> URL: https://issues.apache.org/jira/browse/HADOOP-17844
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently we are using JSON Smart version 2.4.2, which is vulnerable to 
> CVE-2021-31684.
> We can upgrade the version to 2.4.7 (2.4.5 or later).






[jira] [Assigned] (HADOOP-17799) Improve the GitHub pull request template

2021-08-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17799:
--

Assignee: Akira Ajisaka

> Improve the GitHub pull request template
> 
>
> Key: HADOOP-17799
> URL: https://issues.apache.org/jira/browse/HADOOP-17799
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> The current Hadoop pull request template can be improved.
> - Require some information (e.g. 
> https://github.com/apache/spark/blob/master/.github/PULL_REQUEST_TEMPLATE)
> - Checklists (e.g. 
> https://github.com/apache/nifi/blob/main/.github/PULL_REQUEST_TEMPLATE.md)
> - Move the current notice into a comment (i.e. surround it with <!-- -->)






[jira] [Resolved] (HADOOP-17370) Upgrade commons-compress to 1.21

2021-08-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17370.

Fix Version/s: 3.3.2
   3.2.3
   2.10.2
   3.4.0
   Resolution: Fixed

Committed to all the active branches.

> Upgrade commons-compress to 1.21
> 
>
> Key: HADOOP-17370
> URL: https://issues.apache.org/jira/browse/HADOOP-17370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Dongjoon Hyun
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>







[jira] [Resolved] (HADOOP-17838) Update link of PoweredBy wiki page

2021-08-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17838.

Fix Version/s: asf-site
 Hadoop Flags: Reviewed
   Resolution: Fixed

Merged the PR.

> Update link of PoweredBy wiki page
> --
>
> Key: HADOOP-17838
> URL: https://issues.apache.org/jira/browse/HADOOP-17838
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi-Sheng Lien
>Assignee: Yi-Sheng Lien
>Priority: Trivial
>  Labels: pull-request-available
> Fix For: asf-site
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The [PoweredBy wiki 
> page|https://cwiki.apache.org/confluence/display/hadoop/PoweredBy] linked from the 
> [main page|https://hadoop.apache.org/] is not found.
> IMHO we should update it to point 
> [here|https://cwiki.apache.org/confluence/display/HADOOP2/PoweredBy]






[jira] [Updated] (HADOOP-17838) Update link of PoweredBy wiki page

2021-08-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17838:
---
Issue Type: Bug  (was: Improvement)
  Priority: Minor  (was: Trivial)

> Update link of PoweredBy wiki page
> --
>
> Key: HADOOP-17838
> URL: https://issues.apache.org/jira/browse/HADOOP-17838
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yi-Sheng Lien
>Assignee: Yi-Sheng Lien
>Priority: Minor
>  Labels: pull-request-available
> Fix For: asf-site
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The [PoweredBy wiki 
> page|https://cwiki.apache.org/confluence/display/hadoop/PoweredBy] linked from the 
> [main page|https://hadoop.apache.org/] is not found.
> IMHO we should update it to point 
> [here|https://cwiki.apache.org/confluence/display/HADOOP2/PoweredBy]






[jira] [Updated] (HADOOP-17835) Use CuratorCache implementation instead of PathChildrenCache / TreeCache

2021-08-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17835:
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk.

> Use CuratorCache implementation instead of PathChildrenCache / TreeCache
> 
>
> Key: HADOOP-17835
> URL: https://issues.apache.org/jira/browse/HADOOP-17835
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> As we have moved to Curator 5.2.0 for Hadoop 3.4.0, we should start using the new 
> CuratorCache service implementation in place of the deprecated PathChildrenCache 
> and TreeCache.






[jira] [Assigned] (HADOOP-17370) Upgrade commons-compress to 1.21

2021-08-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17370:
--

Assignee: Akira Ajisaka

> Upgrade commons-compress to 1.21
> 
>
> Key: HADOOP-17370
> URL: https://issues.apache.org/jira/browse/HADOOP-17370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Dongjoon Hyun
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>







[jira] [Updated] (HADOOP-17370) Upgrade commons-compress to 1.21

2021-08-05 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17370:
---
Target Version/s: 3.4.0, 2.10.2, 3.2.3, 3.3.2

> Upgrade commons-compress to 1.21
> 
>
> Key: HADOOP-17370
> URL: https://issues.apache.org/jira/browse/HADOOP-17370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Dongjoon Hyun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>







[jira] [Commented] (HADOOP-17370) Upgrade commons-compress to 1.21

2021-08-05 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17394537#comment-17394537
 ] 

Akira Ajisaka commented on HADOOP-17370:


The latest version is now 1.21, and it fixes several CVEs. Let's upgrade:
- https://nvd.nist.gov/vuln/detail/CVE-2021-35515
- https://nvd.nist.gov/vuln/detail/CVE-2021-35516
- https://nvd.nist.gov/vuln/detail/CVE-2021-36090
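The bump itself is a small POM change; a hedged sketch follows (in Hadoop the version is typically managed through a property in a parent POM, so treat the inline layout as illustrative):

```xml
<!-- Sketch only: exact module/property layout in the Hadoop POMs may differ. -->
<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-compress</artifactId>
  <version>1.21</version>
</dependency>
```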

 

> Upgrade commons-compress to 1.21
> 
>
> Key: HADOOP-17370
> URL: https://issues.apache.org/jira/browse/HADOOP-17370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Dongjoon Hyun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>







[jira] [Updated] (HADOOP-17370) Upgrade commons-compress to 1.21

2021-08-05 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17370:
---
Summary: Upgrade commons-compress to 1.21  (was: Upgrade commons-compress 
to 1.20)

> Upgrade commons-compress to 1.21
> 
>
> Key: HADOOP-17370
> URL: https://issues.apache.org/jira/browse/HADOOP-17370
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Dongjoon Hyun
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>







[jira] [Updated] (HADOOP-16768) SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data

2021-08-04 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16768:
---
   Fix Version/s: 3.2.3
Target Version/s: 3.3.1, 3.2.2, 3.4.0  (was: 3.2.2, 3.3.1, 3.4.0)

Backported to branch-3.2.

> SnappyCompressor test cases wrongly assume that the compressed data is always 
> smaller than the input data
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, test
> Environment: X86/Aarch64
> OS: Ubuntu 18.04, CentOS 8
> Snappy 1.1.7
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.2.3
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests fail on both the X86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>   
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>  
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
> 
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
> 
>     at com.google.common.base.Joiner.join(Joiner.java:195)
> 
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> 

[jira] [Commented] (HADOOP-17612) Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0

2021-08-02 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17391981#comment-17391981
 ] 

Akira Ajisaka commented on HADOOP-17612:


{quote}Edit: I can create follow up Jira to clean up deprecated usage of 
PathChildrenCache in ZKDelegationTokenSecretManager 
(ZKDelegationTokenSecretManager seems to be in critical path of Web 
AuthenticationFilter).
{quote}
+1. Go ahead!

> Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0
> ---
>
> Key: HADOOP-17612
> URL: https://issues.apache.org/jira/browse/HADOOP-17612
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Let's upgrade Zookeeper and Curator to 3.6.3 and 5.2.0 respectively.
> Curator 5.2 also supports Zookeeper 3.5 servers.
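In POM terms the bump might look like the following sketch (Hadoop manages these versions centrally, so the property names and artifact selection are illustrative):

```xml
<properties>
  <!-- Versions from this issue; property names are assumptions. -->
  <zookeeper.version>3.6.3</zookeeper.version>
  <curator.version>5.2.0</curator.version>
</properties>
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>${zookeeper.version}</version>
</dependency>
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-recipes</artifactId>
  <version>${curator.version}</version>
</dependency>
```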






[jira] [Updated] (HADOOP-17612) Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0

2021-08-02 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17612:
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk.

> Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0
> ---
>
> Key: HADOOP-17612
> URL: https://issues.apache.org/jira/browse/HADOOP-17612
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> Let's upgrade Zookeeper and Curator to 3.6.3 and 5.2.0 respectively.
> Curator 5.2 also supports Zookeeper 3.5 servers.






[jira] [Commented] (HADOOP-16206) Migrate from Log4j1 to Log4j2

2021-08-02 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17391481#comment-17391481
 ] 

Akira Ajisaka commented on HADOOP-16206:


{quote}So here for me, the only choice is to do breaking change, like what I 
have done in HBase. If we all agree, I still try to go with this direction.
{quote}
+1

> Migrate from Log4j1 to Log4j2
> -
>
> Key: HADOOP-16206
> URL: https://issues.apache.org/jira/browse/HADOOP-16206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.0
>Reporter: Akira Ajisaka
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HADOOP-16206-wip.001.patch
>
>
> This sub-task is to remove log4j1 dependency and add log4j2 dependency.






[jira] [Commented] (HADOOP-17831) Upgrade log4j to fix critical vulnerability

2021-08-02 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17391480#comment-17391480
 ] 

Akira Ajisaka commented on HADOOP-17831:


This seems to be a duplicate of HADOOP-16206. Can I close this?

> Upgrade log4j to fix critical vulnerability
> ---
>
> Key: HADOOP-17831
> URL: https://issues.apache.org/jira/browse/HADOOP-17831
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.3.1
>Reporter: Brisly Priya Joseph
>Priority: Major
>
> CVE-2019-17571 - log4j-1.2.17 - (Fix available in log4j-2.8.2)
> Please upgrade to log4j-2.8.2 to fix the vulnerability.






[jira] [Commented] (HADOOP-17820) Remove dependency on jdom

2021-07-29 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17390228#comment-17390228
 ] 

Akira Ajisaka commented on HADOOP-17820:


jdom 1 can be removed by upgrading aliyun-sdk-oss: 
https://github.com/aliyun/aliyun-oss-java-sdk/blob/3.13.0/pom.xml#L27-L29
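In practice that means bumping the hadoop-aliyun dependency, roughly as sketched below (coordinates assumed; per the linked POM, 3.13.0 depends on jdom2 rather than jdom 1):

```xml
<dependency>
  <groupId>com.aliyun.oss</groupId>
  <artifactId>aliyun-sdk-oss</artifactId>
  <!-- 3.13.0 pulls in org.jdom:jdom2, so the CVE-affected jdom 1.1
       jar should drop out of the distribution. -->
  <version>3.13.0</version>
</dependency>
```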

> Remove dependency on jdom
> -
>
> Key: HADOOP-17820
> URL: https://issues.apache.org/jira/browse/HADOOP-17820
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>
> It doesn't seem that jdom is referenced anywhere in the code base now, yet it 
> exists in the distribution.
> {code}
> $ find . -name "*jdom*.jar"
> ./hadoop-3.4.0-SNAPSHOT/share/hadoop/tools/lib/jdom-1.1.jar
> {code}
> [CVE-2021-33813|https://github.com/advisories/GHSA-2363-cqg2-863c] was 
> recently issued for jdom. Let's remove the binary from the dist if it is not 
> needed.






[jira] [Updated] (HADOOP-17813) Checkstyle - Allow line length: 100

2021-07-23 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17813:
---
Summary: Checkstyle - Allow line length: 100  (was: Allow line length more 
than 80 characters)

> Checkstyle - Allow line length: 100
> ---
>
> Key: HADOOP-17813
> URL: https://issues.apache.org/jira/browse/HADOOP-17813
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Update the checkstyle rule to allow for 100 or 120 characters.
> Discussion thread: 
> [https://lists.apache.org/thread.html/r69c363fb365d4cfdec44433e7f6ec7d7eb3505067c2fcb793765068f%40%3Ccommon-dev.hadoop.apache.org%3E]
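For illustration, the resulting rule in checkstyle.xml would look roughly like the sketch below (in recent Checkstyle versions, LineLength is a Checker-level module rather than a TreeWalker child; verify against Hadoop's actual checkstyle.xml):

```xml
<module name="Checker">
  <!-- Raised from the 80-character default; 100 is the value in the summary. -->
  <module name="LineLength">
    <property name="max" value="100"/>
  </module>
</module>
```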






[jira] [Updated] (HADOOP-17317) [JDK 11] Upgrade dnsjava to remove illegal access warnings

2021-07-23 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17317:
---
Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk.

> [JDK 11] Upgrade dnsjava to remove illegal access warnings
> --
>
> Key: HADOOP-17317
> URL: https://issues.apache.org/jira/browse/HADOOP-17317
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Originally reported by [~kihwal] in 
> https://issues.apache.org/jira/browse/HADOOP-15338?focusedCommentId=17129854&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17129854
> When running FsShell commands, there are some warning messages as follows:
> {noformat}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by org.xbill.DNS.ResolverConfig  to method 
> sun.net.dns.ResolverConfiguration.open()
> WARNING: Please consider reporting this to the maintainers of 
> org.xbill.DNS.ResolverConfig
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> {noformat}






[jira] [Updated] (HADOOP-17813) Allow line length more than 80 characters

2021-07-21 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17813:
---
Description: 
Update the checkstyle rule to allow for 100 or 120 characters.

Discussion thread: 
[https://lists.apache.org/thread.html/r69c363fb365d4cfdec44433e7f6ec7d7eb3505067c2fcb793765068f%40%3Ccommon-dev.hadoop.apache.org%3E]

  was:
Update the checkstyle definition to allow for 100 or 120 characters.

Discussion thread: 
https://lists.apache.org/thread.html/r69c363fb365d4cfdec44433e7f6ec7d7eb3505067c2fcb793765068f%40%3Ccommon-dev.hadoop.apache.org%3E


> Allow line length more than 80 characters
> -
>
> Key: HADOOP-17813
> URL: https://issues.apache.org/jira/browse/HADOOP-17813
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>
> Update the checkstyle rule to allow for 100 or 120 characters.
> Discussion thread: 
> [https://lists.apache.org/thread.html/r69c363fb365d4cfdec44433e7f6ec7d7eb3505067c2fcb793765068f%40%3Ccommon-dev.hadoop.apache.org%3E]






[jira] [Created] (HADOOP-17813) Allow line length more than 80 characters

2021-07-21 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17813:
--

 Summary: Allow line length more than 80 characters
 Key: HADOOP-17813
 URL: https://issues.apache.org/jira/browse/HADOOP-17813
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Akira Ajisaka


Update the checkstyle definition to allow for 100 or 120 characters.

Discussion thread: 
https://lists.apache.org/thread.html/r69c363fb365d4cfdec44433e7f6ec7d7eb3505067c2fcb793765068f%40%3Ccommon-dev.hadoop.apache.org%3E






[jira] [Resolved] (HADOOP-16272) Update HikariCP to 4.0.3

2021-07-15 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16272.

Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

Committed to trunk. Thank you [~vjasani] for the contribution.

> Update HikariCP to 4.0.3
> 
>
> Key: HADOOP-16272
> URL: https://issues.apache.org/jira/browse/HADOOP-16272
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Yuming Wang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>







[jira] [Commented] (HADOOP-17799) Improve the GitHub pull request template

2021-07-15 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17381065#comment-17381065
 ] 

Akira Ajisaka commented on HADOOP-17799:


Thank you [~ste...@apache.org] for your comment. Yes, clicking the checklist 
items in the rendered Markdown toggles them.

> Improve the GitHub pull request template
> 
>
> Key: HADOOP-17799
> URL: https://issues.apache.org/jira/browse/HADOOP-17799
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build, documentation
>Reporter: Akira Ajisaka
>Priority: Major
>
> The current Hadoop pull request template can be improved.
> - Require some information (e.g. 
> https://github.com/apache/spark/blob/master/.github/PULL_REQUEST_TEMPLATE)
> - Checklists (e.g. 
> https://github.com/apache/nifi/blob/main/.github/PULL_REQUEST_TEMPLATE.md)
> - Move the current notice to a comment (i.e. surround it with <!-- -->)






[jira] [Created] (HADOOP-17799) Improve the GitHub pull request template

2021-07-13 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17799:
--

 Summary: Improve the GitHub pull request template
 Key: HADOOP-17799
 URL: https://issues.apache.org/jira/browse/HADOOP-17799
 Project: Hadoop Common
  Issue Type: Task
  Components: build, documentation
Reporter: Akira Ajisaka


The current Hadoop pull request template can be improved.

- Require some information (e.g. 
https://github.com/apache/spark/blob/master/.github/PULL_REQUEST_TEMPLATE)
- Checklists (e.g. 
https://github.com/apache/nifi/blob/main/.github/PULL_REQUEST_TEMPLATE.md)
- Move the current notice to a comment (i.e. surround it with <!-- -->)






[jira] [Created] (HADOOP-17798) Always use GitHub PR rather than JIRA to review patches

2021-07-13 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17798:
--

 Summary: Always use GitHub PR rather than JIRA to review patches
 Key: HADOOP-17798
 URL: https://issues.apache.org/jira/browse/HADOOP-17798
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Reporter: Akira Ajisaka


There are currently two types of precommit jobs on https://ci-hadoop.apache.org/:
(1) The Precommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs, which download 
patches from JIRA and test them.
(2) The hadoop-multibranch job for GitHub PRs.

The problems are:
- The build configs are split. The (2) config lives in the Jenkinsfile, while 
the (1) configs live in Jenkins itself. Whenever the Jenkinsfile changes, the 
configs of the 4 precommit jobs have to be updated manually via the Jenkins 
web UI.
- The (1) build configs are static. We cannot use a separate config for each 
branch, which may cause build failures.
- GitHub Actions cannot be used in the (1) jobs.

Therefore I want to disable the (1) jobs and always use GitHub PRs to review 
patches.

How to do this:

1. Update the wiki: 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute#HowToContribute-Provideapatch
2. Disable the Precommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs.






[jira] [Resolved] (HADOOP-17568) Mapred/YARN job fails due to kms-dt can't be found in cache with LoadBalancingKMSClientProvider + Kerberos

2021-07-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17568.

Resolution: Not A Bug

Closing this because the parameter has been documented in HADOOP-17794.

> Mapred/YARN job fails due to kms-dt can't be found in cache with 
> LoadBalancingKMSClientProvider + Kerberos
> --
>
> Key: HADOOP-17568
> URL: https://issues.apache.org/jira/browse/HADOOP-17568
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms, security
>Affects Versions: 3.2.2
>Reporter: Zbigniew Kostrzewa
>Priority: Major
>
> I deployed a Hadoop 3.2.2 cluster with KMS in HA using 
> LoadBalancingKMSClientProvider with Kerberos authentication. The KMS instances 
> are configured with ZooKeeper for storing the shared secret.
> I created an encryption key and an encryption zone in the `/test` directory 
> and executed `randomtextwriter` from the mapreduce examples, passing it a 
> sub-directory in the encryption zone:
> {code:java}
> hadoop jar hadoop-mapreduce-examples-3.2.2.jar randomtextwriter 
> /test/randomtextwriter
> {code}
> Unfortunately the job keeps failing with errors like:
> {code:java}
> java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt 
> owner=packer, renewer=packer, realUser=, issueDate=1615146155993, 
> maxDate=1615750955993, sequenceNumber=1, masterKeyId=2) can't be found in 
> cache
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:363)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>   at 
> org.apache.hadoop.hdfs.HdfsKMSUtil.decryptEncryptedDataEncryptionKey(HdfsKMSUtil.java:212)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:972)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:952)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:536)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:544)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:471)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1125)
>   at 
> org.apache.hadoop.io.SequenceFile$Writer.(SequenceFile.java:1168)
>   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:285)
>   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:542)
>   at 
> org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getSequenceWriter(SequenceFileOutputFormat.java:64)
>   at 
> org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getRecordWriter(SequenceFileOutputFormat.java:75)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.(MapTask.java:659)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:779)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt 
> owner=packer, renewer=packer, realUser=, issueDate=1615146155993, 
> maxDate=1615750955993, sequenceNumber=1, masterKeyId=2) can't be found in 
> cache
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:154)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:592)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:540)
>   

[jira] [Resolved] (HADOOP-12665) Document hadoop.security.token.service.use_ip

2021-07-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-12665.

Fix Version/s: 3.3.2
   3.2.3
   2.10.2
   3.4.0
   Resolution: Fixed

Committed to all the active branches. Thanks!

> Document hadoop.security.token.service.use_ip
> -
>
> Key: HADOOP-12665
> URL: https://issues.apache.org/jira/browse/HADOOP-12665
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> {{hadoop.security.token.service.use_ip}} is not documented in 2.x/trunk.






[jira] [Assigned] (HADOOP-17793) Better token validation

2021-07-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17793:
--

Assignee: Artem Smotrakov

> Better token validation
> ---
>
> Key: HADOOP-17793
> URL: https://issues.apache.org/jira/browse/HADOOP-17793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Artem Smotrakov
>Assignee: Artem Smotrakov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
> Attachments: token.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> `MessageDigest.isEqual()` should be used for checking tokens.
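For illustration (a generic sketch, not the actual Hadoop patch): `MessageDigest.isEqual()` provides a constant-time byte-array comparison, unlike `Arrays.equals()`, which returns at the first mismatch and can therefore leak timing information about how much of a guessed token was correct.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class TokenCheck {
    // MessageDigest.isEqual performs a constant-time comparison (since
    // JDK 6u17): it inspects every byte instead of returning at the first
    // mismatch, so response time does not reveal how many leading bytes
    // of an attacker-supplied token matched the stored one.
    static boolean isValidToken(byte[] expected, byte[] provided) {
        return MessageDigest.isEqual(expected, provided);
    }

    public static void main(String[] args) {
        byte[] stored = "secret-token".getBytes(StandardCharsets.UTF_8);
        System.out.println(isValidToken(stored,
                "secret-token".getBytes(StandardCharsets.UTF_8))); // true
        System.out.println(isValidToken(stored,
                "secret-tokex".getBytes(StandardCharsets.UTF_8))); // false
    }
}
```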






[jira] [Updated] (HADOOP-17793) Better token validation

2021-07-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17793:
---
Issue Type: Bug  (was: Improvement)

> Better token validation
> ---
>
> Key: HADOOP-17793
> URL: https://issues.apache.org/jira/browse/HADOOP-17793
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Artem Smotrakov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
> Attachments: token.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> `MessageDigest.isEqual()` should be used for checking tokens.






[jira] [Updated] (HADOOP-17793) Better token validation

2021-07-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17793:
---
Fix Version/s: 3.3.2
   3.2.3
   2.10.2
   3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.3, branch-3.2, and branch-2.10.

> Better token validation
> ---
>
> Key: HADOOP-17793
> URL: https://issues.apache.org/jira/browse/HADOOP-17793
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Artem Smotrakov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
> Attachments: token.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> `MessageDigest.isEqual()` should be used for checking tokens.






[jira] [Updated] (HADOOP-17794) Add a sample configuration to use ZKDelegationTokenSecretManager in Hadoop KMS

2021-07-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17794:
---
Component/s: kms

> Add a sample configuration to use ZKDelegationTokenSecretManager in Hadoop KMS
> --
>
> Key: HADOOP-17794
> URL: https://issues.apache.org/jira/browse/HADOOP-17794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms, security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.2
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> The following parameters should be documented in 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html#Delegation_Tokens
> * hadoop.kms.authentication.zk-dt-secret-manager.enable
> * hadoop.kms.authentication.zk-dt-secret-manager.kerberos.keytab
> * hadoop.kms.authentication.zk-dt-secret-manager.kerberos.principal
> * hadoop.kms.authentication.zk-dt-secret-manager.zkConnectionString
> * hadoop.kms.authentication.zk-dt-secret-manager.znodeWorkingPath
> * hadoop.kms.authentication.zk-dt-secret-manager.zkAuthType
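A hedged sample of the kms-site.xml entries these parameters map to (hostnames, paths, keytab locations, and the znode path are placeholders, not values from the actual patch):

```xml
<property>
  <name>hadoop.kms.authentication.zk-dt-secret-manager.enable</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.kms.authentication.zk-dt-secret-manager.zkConnectionString</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
<property>
  <name>hadoop.kms.authentication.zk-dt-secret-manager.znodeWorkingPath</name>
  <value>kms/zkdtsm</value>
</property>
<property>
  <name>hadoop.kms.authentication.zk-dt-secret-manager.zkAuthType</name>
  <value>sasl</value>
</property>
<property>
  <name>hadoop.kms.authentication.zk-dt-secret-manager.kerberos.keytab</name>
  <value>/etc/security/keytabs/kms.keytab</value>
</property>
<property>
  <name>hadoop.kms.authentication.zk-dt-secret-manager.kerberos.principal</name>
  <value>kms/_HOST@EXAMPLE.COM</value>
</property>
```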






[jira] [Assigned] (HADOOP-12665) Document hadoop.security.token.service.use_ip

2021-07-09 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-12665:
--

Assignee: Akira Ajisaka

> Document hadoop.security.token.service.use_ip
> -
>
> Key: HADOOP-12665
> URL: https://issues.apache.org/jira/browse/HADOOP-12665
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{hadoop.security.token.service.use_ip}} is not documented in 2.x/trunk.






[jira] [Commented] (HADOOP-12665) Document hadoop.security.token.service.use_ip

2021-07-09 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377920#comment-17377920
 ] 

Akira Ajisaka commented on HADOOP-12665:


Thank you [~mattf]. The document is still useful in Hadoop 3.x because the 
source code of this feature is almost the same as in Hadoop 2.6+.

> Document hadoop.security.token.service.use_ip
> -
>
> Key: HADOOP-12665
> URL: https://issues.apache.org/jira/browse/HADOOP-12665
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {{hadoop.security.token.service.use_ip}} is not documented in 2.x/trunk.






[jira] [Updated] (HADOOP-17794) Add a sample configuration to use ZKDelegationTokenSecretManager in Hadoop KMS

2021-07-08 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17794:
---
Status: Patch Available  (was: Open)

> Add a sample configuration to use ZKDelegationTokenSecretManager in Hadoop KMS
> --
>
> Key: HADOOP-17794
> URL: https://issues.apache.org/jira/browse/HADOOP-17794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The following parameters should be documented in 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html#Delegation_Tokens
> * hadoop.kms.authentication.zk-dt-secret-manager.enable
> * hadoop.kms.authentication.zk-dt-secret-manager.kerberos.keytab
> * hadoop.kms.authentication.zk-dt-secret-manager.kerberos.principal
> * hadoop.kms.authentication.zk-dt-secret-manager.zkConnectionString
> * hadoop.kms.authentication.zk-dt-secret-manager.znodeWorkingPath
> * hadoop.kms.authentication.zk-dt-secret-manager.zkAuthType






[jira] [Commented] (HADOOP-17568) Mapred/YARN job fails due to kms-dt can't be found in cache with LoadBalancingKMSClientProvider + Kerberos

2021-07-08 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377471#comment-17377471
 ] 

Akira Ajisaka commented on HADOOP-17568:


Yes, you need to set hadoop.kms.authentication.zk-dt-secret-manager.enable to 
true. I'll document this in HADOOP-17794.

Sorry for the late response.

> Mapred/YARN job fails due to kms-dt can't be found in cache with 
> LoadBalancingKMSClientProvider + Kerberos
> --
>
> Key: HADOOP-17568
> URL: https://issues.apache.org/jira/browse/HADOOP-17568
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms, security
>Affects Versions: 3.2.2
>Reporter: Zbigniew Kostrzewa
>Priority: Major
>
> I deployed Hadoop 3.2.2 cluster with KMS in HA using 
> LoadBalancingKMSClientProvider with Kerberos authentication. KMS instances 
> are configured with ZooKeeper for storing the shared secret.
> I have created an encryption key and an encryption zone in `/test` directory 
> and executed `randomtextwriter` from mapreduce examples passing it a 
> sub-directory in the encryption zone:
> {code:java}
> hadoop jar hadoop-mapreduce-examples-3.2.2.jar randomtextwriter 
> /test/randomtextwriter
> {code}
> Unfortunately the job keeps failing with errors like:
> {code:java}
> java.io.IOException: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt 
> owner=packer, renewer=packer, realUser=, issueDate=1615146155993, 
> maxDate=1615750955993, sequenceNumber=1, masterKeyId=2) can't be found in 
> cache
>   at 
> org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider.decryptEncryptedKey(LoadBalancingKMSClientProvider.java:363)
>   at 
> org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.decryptEncryptedKey(KeyProviderCryptoExtension.java:532)
>   at 
> org.apache.hadoop.hdfs.HdfsKMSUtil.decryptEncryptedDataEncryptionKey(HdfsKMSUtil.java:212)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:972)
>   at 
> org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:952)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:536)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:530)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:544)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:471)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1125)
>   at 
> org.apache.hadoop.io.SequenceFile$Writer.(SequenceFile.java:1168)
>   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:285)
>   at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:542)
>   at 
> org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getSequenceWriter(SequenceFileOutputFormat.java:64)
>   at 
> org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getRecordWriter(SequenceFileOutputFormat.java:75)
>   at 
> org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.(MapTask.java:659)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:779)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)
> Caused by: 
> org.apache.hadoop.security.authentication.client.AuthenticationException: 
> org.apache.hadoop.security.token.SecretManager$InvalidToken: token (kms-dt 
> owner=packer, renewer=packer, realUser=, issueDate=1615146155993, 
> maxDate=1615750955993, sequenceNumber=1, masterKeyId=2) can't be found in 
> cache
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:154)
>   at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:592)
>   at 
> 

[jira] [Assigned] (HADOOP-17794) Add a sample configuration to use ZKDelegationTokenSecretManager in Hadoop KMS

2021-07-08 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17794:
--

Assignee: Akira Ajisaka

> Add a sample configuration to use ZKDelegationTokenSecretManager in Hadoop KMS
> --
>
> Key: HADOOP-17794
> URL: https://issues.apache.org/jira/browse/HADOOP-17794
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> The following parameters should be documented in 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html#Delegation_Tokens
> * hadoop.kms.authentication.zk-dt-secret-manager.enable
> * hadoop.kms.authentication.zk-dt-secret-manager.kerberos.keytab
> * hadoop.kms.authentication.zk-dt-secret-manager.kerberos.principal
> * hadoop.kms.authentication.zk-dt-secret-manager.zkConnectionString
> * hadoop.kms.authentication.zk-dt-secret-manager.znodeWorkingPath
> * hadoop.kms.authentication.zk-dt-secret-manager.zkAuthType






[jira] [Created] (HADOOP-17794) Add a sample configuration to use ZKDelegationTokenSecretManager in Hadoop KMS

2021-07-08 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17794:
--

 Summary: Add a sample configuration to use 
ZKDelegationTokenSecretManager in Hadoop KMS
 Key: HADOOP-17794
 URL: https://issues.apache.org/jira/browse/HADOOP-17794
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, security
Reporter: Akira Ajisaka


The following parameters should be documented in 
https://hadoop.apache.org/docs/stable/hadoop-kms/index.html#Delegation_Tokens

* hadoop.kms.authentication.zk-dt-secret-manager.enable
* hadoop.kms.authentication.zk-dt-secret-manager.kerberos.keytab
* hadoop.kms.authentication.zk-dt-secret-manager.kerberos.principal
* hadoop.kms.authentication.zk-dt-secret-manager.zkConnectionString
* hadoop.kms.authentication.zk-dt-secret-manager.znodeWorkingPath
* hadoop.kms.authentication.zk-dt-secret-manager.zkAuthType






[jira] [Commented] (HADOOP-12665) Document hadoop.security.token.service.use_ip

2021-07-08 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377134#comment-17377134
 ] 

Akira Ajisaka commented on HADOOP-12665:


We had to set this parameter to false when deploying multi-homed Hadoop KMS. I 
want to contribute.
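A hedged sketch of the change described in the comment above, as a core-site.xml fragment. The property defaults to true (token service names use resolved IP addresses); setting it to false makes them use host names, which is what the comment reports was needed for a multi-homed KMS deployment:

```xml
<!-- Hypothetical core-site.xml fragment for illustration only. -->
<property>
  <name>hadoop.security.token.service.use_ip</name>
  <value>false</value>
</property>
```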

> Document hadoop.security.token.service.use_ip
> -
>
> Key: HADOOP-12665
> URL: https://issues.apache.org/jira/browse/HADOOP-12665
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Matthew Foley
>Priority: Major
>
> {{hadoop.security.token.service.use_ip}} is not documented in 2.x/trunk.






[jira] [Resolved] (HADOOP-17792) "hadoop.security.token.service.use_ip" should be documented

2021-07-08 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17792.

Resolution: Duplicate

> "hadoop.security.token.service.use_ip" should be documented
> ---
>
> Key: HADOOP-17792
> URL: https://issues.apache.org/jira/browse/HADOOP-17792
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
>
> hadoop.security.token.service.use_ip is not documented in core-default.xml. 
> It should be documented.






[jira] [Commented] (HADOOP-17792) "hadoop.security.token.service.use_ip" should be documented

2021-07-08 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17377130#comment-17377130
 ] 

Akira Ajisaka commented on HADOOP-17792:


Closing as duplicate.

> "hadoop.security.token.service.use_ip" should be documented
> ---
>
> Key: HADOOP-17792
> URL: https://issues.apache.org/jira/browse/HADOOP-17792
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
>
> hadoop.security.token.service.use_ip is not documented in core-default.xml. 
> It should be documented.






[jira] [Created] (HADOOP-17792) "hadoop.security.token.service.use_ip" should be documented

2021-07-07 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17792:
--

 Summary: "hadoop.security.token.service.use_ip" should be 
documented
 Key: HADOOP-17792
 URL: https://issues.apache.org/jira/browse/HADOOP-17792
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Akira Ajisaka


hadoop.security.token.service.use_ip is not documented in core-default.xml. It 
should be documented.






[jira] [Updated] (HADOOP-17775) Remove JavaScript package from Docker environment

2021-07-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17775:
---
Fix Version/s: 3.3.2
   3.2.3
   2.10.2
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged. Thank you [~iwasakims] for your contribution!

> Remove JavaScript package from Docker environment
> -
>
> Key: HADOOP-17775
> URL: https://issues.apache.org/jira/browse/HADOOP-17775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.3, 3.3.2
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> As described in the [README of 
> yarn-ui|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md],
>  required javascript modules are automatically pulled by 
> frontend-maven-plugin. We can leverage them for local testing too.
> While hadoop-yarn-ui and hadoop-yarn-applications-catalog-webapp both use 
> node.js, their node.js versions do not match, so the JavaScript-related 
> packages in the Docker environment are not guaranteed to work.
> * 
> https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml#L170-L212
> * 
> https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml#L264-L290






[jira] [Updated] (HADOOP-17775) Remove JavaScript package from Docker environment

2021-07-06 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17775:
---
Fix Version/s: 3.4.0

Thank you [~iwasakims]. Would you open a PR for the lower branches?

> Remove JavaScript package from Docker environment
> -
>
> Key: HADOOP-17775
> URL: https://issues.apache.org/jira/browse/HADOOP-17775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> As described in the [README of 
> yarn-ui|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md],
>  required javascript modules are automatically pulled by 
> frontend-maven-plugin. We can leverage them for local testing too.
> While hadoop-yarn-ui and hadoop-yarn-applications-catalog-webapp both use 
> node.js, their node.js versions do not match, so the JavaScript-related 
> packages in the Docker environment are not guaranteed to work.
> * 
> https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml#L170-L212
> * 
> https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml#L264-L290






[jira] [Commented] (HADOOP-17787) Refactor fetching of credentials in Jenkins

2021-07-03 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17374129#comment-17374129
 ] 

Akira Ajisaka commented on HADOOP-17787:


I'm not sure HADOOP-17778 broke the build, because the Jenkinsfile is not 
actually used in the PreCommit-(HADOOP|HDFS|MAPREDUCE|YARN)-Build jobs. Now I 
want to disable the precommit jobs on JIRA and move to GitHub, because we 
cannot configure the jobs differently per branch.

> Refactor fetching of credentials in Jenkins
> ---
>
> Key: HADOOP-17787
> URL: https://issues.apache.org/jira/browse/HADOOP-17787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-07-03-10-47-02-330.png
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Need to refactor fetching of credentials in Jenkinsfile.






[jira] [Resolved] (HADOOP-17762) branch-2.10 daily build fails to pull latest changes

2021-06-22 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17762.

Resolution: Duplicate

This issue has probably been fixed by INFRA-22020.

> branch-2.10 daily build fails to pull latest changes
> 
>
> Key: HADOOP-17762
> URL: https://issues.apache.org/jira/browse/HADOOP-17762
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, yetus
>Affects Versions: 2.10.1
>Reporter: Ahmed Hussein
>Priority: Major
>
> I noticed that the build for branch-2.10 failed to pull the latest changes 
> for the last few days.
> CC: [~aajisaka], [~tasanuma], [~Jim_Brennan]
> https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/329/console
> {code:bash}
> Started by timer
> Running as SYSTEM
> Building remotely on H20 (Hadoop) in workspace 
> /home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64
> The recommended git tool is: NONE
> No credentials specified
> Cloning the remote Git repository
> Using shallow clone with depth 10
> Avoid fetching tags
> Cloning repository https://github.com/apache/hadoop
> ERROR: Failed to clean the workspace
> jenkins.util.io.CompositeIOException: Unable to delete 
> '/home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64/sourcedir'.
>  Tried 3 times (of a maximum of 3) waiting 0.1 sec between attempts. 
> (Discarded 1 additional exceptions)
>   at 
> jenkins.util.io.PathRemover.forceRemoveDirectoryContents(PathRemover.java:90)
>   at hudson.Util.deleteContentsRecursive(Util.java:262)
>   at hudson.Util.deleteContentsRecursive(Util.java:251)
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:743)
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:161)
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:154)
>   at hudson.remoting.UserRequest.perform(UserRequest.java:211)
>   at hudson.remoting.UserRequest.perform(UserRequest.java:54)
>   at hudson.remoting.Request$2.run(Request.java:375)
>   at 
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:73)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   Suppressed: java.nio.file.AccessDeniedException: 
> /home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64/sourcedir/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1/current
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
>   at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
>   at java.nio.file.Files.newDirectoryStream(Files.java:457)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:224)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> 

[jira] [Commented] (HADOOP-17762) branch-2.10 daily build fails to pull latest changes

2021-06-22 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367120#comment-17367120
 ] 

Akira Ajisaka commented on HADOOP-17762:


Started manually: 
https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/337

> branch-2.10 daily build fails to pull latest changes
> 
>
> Key: HADOOP-17762
> URL: https://issues.apache.org/jira/browse/HADOOP-17762
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, yetus
>Affects Versions: 2.10.1
>Reporter: Ahmed Hussein
>Priority: Major
>
> I noticed that the build for branch-2.10 failed to pull the latest changes 
> for the last few days.
> CC: [~aajisaka], [~tasanuma], [~Jim_Brennan]
> https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/329/console

[jira] [Commented] (HADOOP-17762) branch-2.10 daily build fails to pull latest changes

2021-06-22 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17367099#comment-17367099
 ] 

Akira Ajisaka commented on HADOOP-17762:


Disabled builds on the H20 host as a temporary fix.

> branch-2.10 daily build fails to pull latest changes
> 
>
> Key: HADOOP-17762
> URL: https://issues.apache.org/jira/browse/HADOOP-17762
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, yetus
>Affects Versions: 2.10.1
>Reporter: Ahmed Hussein
>Priority: Major
>
> I noticed that the build for branch-2.10 failed to pull the latest changes 
> for the last few days.
> CC: [~aajisaka], [~tasanuma], [~Jim_Brennan]
> https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/329/console

[jira] [Commented] (HADOOP-17762) branch-2.10 daily build fails to pull latest changes

2021-06-22 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17367093#comment-17367093
 ] 

Akira Ajisaka commented on HADOOP-17762:


There is a file with broken permissions on the H20 host, so the workspace 
cannot be cleared.  Filed https://issues.apache.org/jira/browse/INFRA-22020.
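
The failure mode can be reproduced outside Jenkins (an editorial sketch, not part of the original thread; POSIX file systems assumed, and the class name below is invented): a directory with all permission bits cleared makes recursive listing fail with AccessDeniedException, and restoring owner permissions before deleting is the usual manual fix.

```java
import java.io.IOException;
import java.nio.file.AccessDeniedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Comparator;

public class WorkspaceCleanDemo {
    /** Reproduces the cleanup failure, then applies the usual manual fix. */
    static boolean cleanWorkspace() throws IOException {
        Path ws = Files.createTempDirectory("workspace");
        Path current = ws.resolve("sourcedir/dfs/data/data1/current");
        Files.createDirectories(current);
        // Drop all permissions on the directory; listing it now fails with
        // AccessDeniedException for non-root users, as in the Jenkins log.
        Files.setPosixFilePermissions(current,
                PosixFilePermissions.fromString("---------"));
        try (var stream = Files.newDirectoryStream(current)) {
            // Running as root: permission bits are not enforced.
        } catch (AccessDeniedException expected) {
            // This is the state the build agent was stuck in.
        }
        // The fix: restore owner permissions, then delete recursively.
        Files.setPosixFilePermissions(current,
                PosixFilePermissions.fromString("rwx------"));
        try (var paths = Files.walk(ws)) {
            paths.sorted(Comparator.reverseOrder())
                 .forEach(p -> p.toFile().delete());
        }
        return !Files.exists(ws);
    }

    public static void main(String[] args) throws IOException {
        System.out.println("workspace cleaned: " + cleanWorkspace());
    }
}
```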

> branch-2.10 daily build fails to pull latest changes
> 
>
> Key: HADOOP-17762
> URL: https://issues.apache.org/jira/browse/HADOOP-17762
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, yetus
>Affects Versions: 2.10.1
>Reporter: Ahmed Hussein
>Priority: Major
>
> I noticed that the build for branch-2.10 failed to pull the latest changes 
> for the last few days.
> CC: [~aajisaka], [~tasanuma], [~Jim_Brennan]
> https://ci-hadoop.apache.org/job/hadoop-qbt-branch-2.10-java7-linux-x86_64/329/console
> {code:bash}
> Started by timer
> Running as SYSTEM
> Building remotely on H20 (Hadoop) in workspace 
> /home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64
> The recommended git tool is: NONE
> No credentials specified
> Cloning the remote Git repository
> Using shallow clone with depth 10
> Avoid fetching tags
> Cloning repository https://github.com/apache/hadoop
> ERROR: Failed to clean the workspace
> jenkins.util.io.CompositeIOException: Unable to delete 
> '/home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64/sourcedir'.
>  Tried 3 times (of a maximum of 3) waiting 0.1 sec between attempts. 
> (Discarded 1 additional exceptions)
>   at 
> jenkins.util.io.PathRemover.forceRemoveDirectoryContents(PathRemover.java:90)
>   at hudson.Util.deleteContentsRecursive(Util.java:262)
>   at hudson.Util.deleteContentsRecursive(Util.java:251)
>   at 
> org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:743)
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:161)
>   at 
> org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$GitCommandMasterToSlaveCallable.call(RemoteGitImpl.java:154)
>   at hudson.remoting.UserRequest.perform(UserRequest.java:211)
>   at hudson.remoting.UserRequest.perform(UserRequest.java:54)
>   at hudson.remoting.Request$2.run(Request.java:375)
>   at 
> hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:73)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>   Suppressed: java.nio.file.AccessDeniedException: 
> /home/jenkins/jenkins-home/workspace/hadoop-qbt-branch-2.10-java7-linux-x86_64/sourcedir/hadoop-hdfs-project/hadoop-hdfs/target/test/data/3/dfs/data/data1/current
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
>   at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
>   at java.nio.file.Files.newDirectoryStream(Files.java:457)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:224)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> jenkins.util.io.PathRemover.tryRemoveDirectoryContents(PathRemover.java:226)
>   at 
> jenkins.util.io.PathRemover.tryRemoveRecursive(PathRemover.java:215)
>   at 
> 

[jira] [Commented] (HADOOP-17759) Remove Hadoop 3.1.4 from the download page

2021-06-21 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17366407#comment-17366407
 ] 

Akira Ajisaka commented on HADOOP-17759:


The artifacts have been removed and the download page has been updated. Closing.

> Remove Hadoop 3.1.4 from the download page
> --
>
> Key: HADOOP-17759
> URL: https://issues.apache.org/jira/browse/HADOOP-17759
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
>
> Since Hadoop 3.1.x is EoL, 3.1.4 should be removed from 
> https://hadoop.apache.org/releases.html.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17759) Remove Hadoop 3.1.4 from the download page

2021-06-21 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17759.

Resolution: Done

> Remove Hadoop 3.1.4 from the download page
> --
>
> Key: HADOOP-17759
> URL: https://issues.apache.org/jira/browse/HADOOP-17759
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
>
> Since Hadoop 3.1.x is EoL, 3.1.4 should be removed from 
> https://hadoop.apache.org/releases.html.






[jira] [Updated] (HADOOP-17759) Remove Hadoop 3.1.4 from the download page

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17759:
---
Issue Type: Task  (was: Bug)

> Remove Hadoop 3.1.4 from the download page
> --
>
> Key: HADOOP-17759
> URL: https://issues.apache.org/jira/browse/HADOOP-17759
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
>
> Since Hadoop 3.1.x is EoL, 3.1.4 should be removed from 
> https://hadoop.apache.org/releases.html.






[jira] [Commented] (HADOOP-17759) Remove Hadoop 3.1.4 from the download page

2021-06-10 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17361132#comment-17361132
 ] 

Akira Ajisaka commented on HADOOP-17759:


After removing 3.1.4 from the download page, we have to clean up the release 
artifacts 
[https://dist.apache.org/repos/dist/release/hadoop/common/hadoop-3.1.4/]

> Remove Hadoop 3.1.4 from the download page
> --
>
> Key: HADOOP-17759
> URL: https://issues.apache.org/jira/browse/HADOOP-17759
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Priority: Major
>
> Since Hadoop 3.1.x is EoL, 3.1.4 should be removed from 
> https://hadoop.apache.org/releases.html.






[jira] [Created] (HADOOP-17759) Remove Hadoop 3.1.4 from the download page

2021-06-10 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17759:
--

 Summary: Remove Hadoop 3.1.4 from the download page
 Key: HADOOP-17759
 URL: https://issues.apache.org/jira/browse/HADOOP-17759
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Akira Ajisaka


Since Hadoop 3.1.x is EoL, 3.1.4 should be removed from 
https://hadoop.apache.org/releases.html.






[jira] [Updated] (HADOOP-17728) Deadlock in FileSystem StatisticsDataReferenceCleaner cleanUp

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17728:
---
Fix Version/s: (was: 3.3.1)
   3.3.2

Updated the fix version because this commit is not in branch-3.3.1.

FYI: The commit message is "HDFS-16033 Fix issue of the 
StatisticsDataReferenceCleaner cleanUp (#3042)"

> Deadlock in FileSystem StatisticsDataReferenceCleaner cleanUp
> -
>
> Key: HADOOP-17728
> URL: https://issues.apache.org/jira/browse/HADOOP-17728
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.2.1
>Reporter: yikf
>Assignee: yikf
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.2
>
>  Time Spent: 5h 10m
>  Remaining Estimate: 0h
>
> The Cleaner thread blocks when removing a reference from the ReferenceQueue 
> unless `queue.enqueue` is called.
> 
>     As shown below, cleanUp currently calls ReferenceQueue.remove(), with the 
> following call chain:
>                          *StatisticsDataReferenceCleaner#queue.remove()  ->  
> ReferenceQueue.remove(0)  -> lock.wait(0)*
>     But lock.notifyAll is invoked only from queue.enqueue, so the Cleaner 
> thread remains blocked.
>  
> ThreadDump:
> {code:java}
> "Reference Handler" #2 daemon prio=10 os_prio=0 tid=0x7f7afc088800 
> nid=0x2119 in Object.wait() [0x7f7b0023]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at java.lang.Object.wait(Object.java:502)
> at java.lang.ref.Reference.tryHandlePending(Reference.java:191)
> - locked <0xc00c2f58> (a java.lang.ref.Reference$Lock)
> at 
> java.lang.ref.Reference$ReferenceHandler.run(Reference.java:153){code}
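
The blocking behavior described above can be reproduced directly against java.lang.ref.ReferenceQueue (a minimal illustrative sketch, not Hadoop's actual cleaner code; the class and method names below are invented): remove() and remove(0) park in lock.wait(0) until a reference is enqueued, whereas remove(timeout) returns null once the timeout elapses, which keeps a cleaner thread responsive.

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

public class CleanerTimeoutDemo {
    /**
     * Polls an empty ReferenceQueue with a timeout. Nothing is ever enqueued
     * (the referent stays strongly reachable), so remove(millis) returns null
     * after roughly millis ms, whereas remove() or remove(0) would park in
     * lock.wait(0) indefinitely, which is the deadlock described above.
     */
    static Reference<?> pollWithTimeout(long millis) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object referent = new Object();
        // The WeakReference registers the referent against the queue.
        WeakReference<Object> ref = new WeakReference<>(referent, queue);
        Reference<?> polled = queue.remove(millis);
        // Keep the referent alive until after the poll completes.
        Reference.reachabilityFence(referent);
        return polled;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("polled: " + pollWithTimeout(200)); // prints "polled: null"
    }
}
```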






[jira] [Resolved] (HADOOP-17078) hadoop-shaded-protobuf_3_7 depends on the wrong version.

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17078.

Resolution: Duplicate

> hadoop-shaded-protobuf_3_7 depends on the wrong version.
> 
>
> Key: HADOOP-17078
> URL: https://issues.apache.org/jira/browse/HADOOP-17078
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.1
>Reporter: JiangHua Zhu
>Priority: Major
>
> When using Maven to compile the Hadoop source code, the following exception 
> message appears:
> [*INFO*] 
> **
> [*INFO*] *BUILD FAILURE*
> [*INFO*] 
> **
> [*INFO*] Total time:  29.546 s
> [*INFO*] Finished at: 2020-06-20T23:57:59+08:00
> [*INFO*] 
> **
> [*ERROR*] Failed to execute goal on project hadoop-common: *Could not resolve 
> dependencies for project org.apache.hadoop:hadoop-common:jar:3.3.0-SNAPSHOT: 
> Could not find artifact 
> org.apache.hadoop.thirdparty:hadoop-shaded-protobuf_3_7:jar:1.0.0-SNAPSHOT in 
> apache.snapshots.https 
> (https://repository.apache.org/content/repositories/snapshots)* -> *[Help 1]*
> [*ERROR*] 
> [*ERROR*] To see the full stack trace of the errors, re-run Maven with the 
> *-e* switch.
> [*ERROR*] Re-run Maven using the *-X* switch to enable full debug logging.
> [*ERROR*] 
> [*ERROR*] For more information about the errors and possible solutions, 
> please read the following articles:
> [*ERROR*] *[Help 1]* 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [*ERROR*] 
> [*ERROR*] After correcting the problems, you can resume the build with the 
> command
> [*ERROR*]   *mvn  -rf :hadoop-common*






[jira] [Resolved] (HADOOP-16988) Remove source code from branch-2

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16988.

Resolution: Done

> Remove source code from branch-2
> 
>
> Key: HADOOP-16988
> URL: https://issues.apache.org/jira/browse/HADOOP-16988
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Now, branch-2 is dead and unused. I think we can delete the entire source 
> code from branch-2 to avoid committing or cherry-picking to the unused branch.
> Chen Liang asked ASF INFRA for help, but that did not resolve it: INFRA-19581






[jira] [Reopened] (HADOOP-16988) Remove source code from branch-2

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-16988:


> Remove source code from branch-2
> 
>
> Key: HADOOP-16988
> URL: https://issues.apache.org/jira/browse/HADOOP-16988
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Now, branch-2 is dead and unused. I think we can delete the entire source 
> code from branch-2 to avoid committing or cherry-picking to the unused branch.
> Chen Liang asked ASF INFRA for help, but that did not resolve it: INFRA-19581






[jira] [Reopened] (HADOOP-17078) hadoop-shaded-protobuf_3_7 depends on the wrong version.

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-17078:


> hadoop-shaded-protobuf_3_7 depends on the wrong version.
> 
>
> Key: HADOOP-17078
> URL: https://issues.apache.org/jira/browse/HADOOP-17078
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.1
>Reporter: JiangHua Zhu
>Priority: Major
>
> When using Maven to compile the Hadoop source code, the following exception 
> message appears:
> [*INFO*] 
> **
> [*INFO*] *BUILD FAILURE*
> [*INFO*] 
> **
> [*INFO*] Total time:  29.546 s
> [*INFO*] Finished at: 2020-06-20T23:57:59+08:00
> [*INFO*] 
> **
> [*ERROR*] Failed to execute goal on project hadoop-common: *Could not resolve 
> dependencies for project org.apache.hadoop:hadoop-common:jar:3.3.0-SNAPSHOT: 
> Could not find artifact 
> org.apache.hadoop.thirdparty:hadoop-shaded-protobuf_3_7:jar:1.0.0-SNAPSHOT in 
> apache.snapshots.https 
> (https://repository.apache.org/content/repositories/snapshots)* -> *[Help 1]*
> [*ERROR*] 
> [*ERROR*] To see the full stack trace of the errors, re-run Maven with the 
> *-e* switch.
> [*ERROR*] Re-run Maven using the *-X* switch to enable full debug logging.
> [*ERROR*] 
> [*ERROR*] For more information about the errors and possible solutions, 
> please read the following articles:
> [*ERROR*] *[Help 1]* 
> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
> [*ERROR*] 
> [*ERROR*] After correcting the problems, you can resume the build with the 
> command
> [*ERROR*]   *mvn  -rf :hadoop-common*






[jira] [Updated] (HADOOP-14936) S3Guard: remove "experimental" from documentation

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14936:
---
Fix Version/s: 3.3.0

> S3Guard: remove "experimental" from documentation
> -
>
> Key: HADOOP-14936
> URL: https://issues.apache.org/jira/browse/HADOOP-14936
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.3.0
>
>
> I think it is time to remove the "experimental feature" designation in the 
> site docs for S3Guard.  Discuss.






[jira] [Resolved] (HADOOP-17228) Backport HADOOP-13230 listing changes for preserved directory markers to 3.1.x

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17228.

Resolution: Won't Fix

branch-3.1 is EOL, won't fix.

> Backport HADOOP-13230 listing changes for preserved directory markers to 3.1.x
> --
>
> Key: HADOOP-17228
> URL: https://issues.apache.org/jira/browse/HADOOP-17228
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Backport a small subset of HADOOP-17199 to branch-3.1
> No path capabilities, declarative test syntax, etc.
> Just:
> -getFileStatus/list
> -markers changes to bucket-info
> -startup info message if option is set
> -relevant test changes






[jira] [Reopened] (HADOOP-17228) Backport HADOOP-13230 listing changes for preserved directory markers to 3.1.x

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-17228:


> Backport HADOOP-13230 listing changes for preserved directory markers to 3.1.x
> --
>
> Key: HADOOP-17228
> URL: https://issues.apache.org/jira/browse/HADOOP-17228
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> Backport a small subset of HADOOP-17199 to branch-3.1
> No path capabilities, declarative test syntax, etc.
> Just:
> -getFileStatus/list
> -markers changes to bucket-info
> -startup info message if option is set
> -relevant test changes






[jira] [Resolved] (HADOOP-17097) start-build-env.sh fails in branch-3.1

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17097.

Resolution: Won't Fix

branch-3.1 is EoL, won't fix.

> start-build-env.sh fails in branch-3.1
> --
>
> Key: HADOOP-17097
> URL: https://issues.apache.org/jira/browse/HADOOP-17097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Ubuntu 20.04
>Reporter: Akira Ajisaka
>Assignee: Masatake Iwasaki
>Priority: Critical
>
> ./start-build-env.sh fails to install ember-cli
> {noformat}
> npm ERR! Linux 5.4.0-37-generic
> npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "install" "-g" "ember-cli"
> npm ERR! node v4.2.6
> npm ERR! npm  v3.5.2
> npm ERR! code EMISSINGARG
> npm ERR! typeerror Error: Missing required argument #1
> npm ERR! typeerror at andLogAndFinish 
> (/usr/share/npm/lib/fetch-package-metadata.js:31:3)
> npm ERR! typeerror at fetchPackageMetadata 
> (/usr/share/npm/lib/fetch-package-metadata.js:51:22)
> npm ERR! typeerror at resolveWithNewModule 
> (/usr/share/npm/lib/install/deps.js:456:12)
> npm ERR! typeerror at /usr/share/npm/lib/install/deps.js:457:7
> npm ERR! typeerror at /usr/share/npm/node_modules/iferr/index.js:13:50
> npm ERR! typeerror at /usr/share/npm/lib/fetch-package-metadata.js:37:12
> npm ERR! typeerror at addRequestedAndFinish 
> (/usr/share/npm/lib/fetch-package-metadata.js:82:5)
> npm ERR! typeerror at returnAndAddMetadata 
> (/usr/share/npm/lib/fetch-package-metadata.js:117:7)
> npm ERR! typeerror at pickVersionFromRegistryDocument 
> (/usr/share/npm/lib/fetch-package-metadata.js:134:20)
> npm ERR! typeerror at /usr/share/npm/node_modules/iferr/index.js:13:50
> npm ERR! typeerror This is an error with npm itself. Please report this error 
> at:
> npm ERR! typeerror 
> npm ERR! Please include the following file with any support request:
> npm ERR! /root/npm-debug.log
> {noformat}






[jira] [Reopened] (HADOOP-17550) property 'ssl.server.keystore.location' has not been set in the ssl configuration file

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-17550:


> property 'ssl.server.keystore.location' has not been set in the ssl 
> configuration file
> --
>
> Key: HADOOP-17550
> URL: https://issues.apache.org/jira/browse/HADOOP-17550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.8.5
>Reporter: hamado dene
>Priority: Major
>
> I am trying to install a Hadoop HA cluster, but the DataNode does not start 
> properly; I get this error:
> 2021-02-23 17:13:26,934 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
> java.io.IOException: java.security.GeneralSecurityException: The property 
> 'ssl.server.keystore.location' has not been set in the ssl configuration file.
> at 
> org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:199)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1303)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:481)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2609)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2544)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2729)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2753)
> Caused by: java.security.GeneralSecurityException: The property 
> 'ssl.server.keystore.location' has not been set in the ssl configuration file.
> at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:152)
> at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:148)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:197)
> ... 8 more
> But in my ssl-server.xml I correctly set this property:
> {code:xml}
> <property>
>   <name>ssl.server.keystore.location</name>
>   <value>/data/hadoop/server.jks</value>
>   <description>Keystore to be used by clients like distcp. Must be
>   specified.</description>
> </property>
> <property>
>   <name>ssl.server.keystore.password</name>
>   <value></value>
>   <description>Optional. Default value is "".</description>
> </property>
> <property>
>   <name>ssl.server.keystore.keypassword</name>
>   <value>x</value>
>   <description>Optional. Default value is "".</description>
> </property>
> <property>
>   <name>ssl.server.keystore.type</name>
>   <value>jks</value>
>   <description>Optional. The keystore file format, default value is "jks".</description>
> </property>
> {code}
> Do you have any suggestions to solve this problem?
> My Hadoop version is: 2.8.5
> Java version: 8
> OS: CentOS 7
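
One thing worth checking (an editorial suggestion, not part of the original thread): the DataNode does not read ssl-server.xml by path. FileBasedKeyStoresFactory loads the file named by the hadoop.ssl.server.conf property from the daemon's classpath, normally $HADOOP_CONF_DIR. A sketch of the relevant core-site.xml entries, assuming the default file name:

```xml
<!-- core-site.xml: where the SSL server configuration is looked up.
     The named file must be on the DataNode's classpath. -->
<property>
  <name>hadoop.ssl.server.conf</name>
  <value>ssl-server.xml</value>
</property>
<property>
  <name>hadoop.ssl.require.client.cert</name>
  <value>false</value>
</property>
```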






[jira] [Reopened] (HADOOP-17097) start-build-env.sh fails in branch-3.1

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-17097:


> start-build-env.sh fails in branch-3.1
> --
>
> Key: HADOOP-17097
> URL: https://issues.apache.org/jira/browse/HADOOP-17097
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
> Environment: Ubuntu 20.04
>Reporter: Akira Ajisaka
>Assignee: Masatake Iwasaki
>Priority: Critical
>
> ./start-build-env.sh fails to install ember-cli
> {noformat}
> npm ERR! Linux 5.4.0-37-generic
> npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "install" "-g" "ember-cli"
> npm ERR! node v4.2.6
> npm ERR! npm  v3.5.2
> npm ERR! code EMISSINGARG
> npm ERR! typeerror Error: Missing required argument #1
> npm ERR! typeerror at andLogAndFinish 
> (/usr/share/npm/lib/fetch-package-metadata.js:31:3)
> npm ERR! typeerror at fetchPackageMetadata 
> (/usr/share/npm/lib/fetch-package-metadata.js:51:22)
> npm ERR! typeerror at resolveWithNewModule 
> (/usr/share/npm/lib/install/deps.js:456:12)
> npm ERR! typeerror at /usr/share/npm/lib/install/deps.js:457:7
> npm ERR! typeerror at /usr/share/npm/node_modules/iferr/index.js:13:50
> npm ERR! typeerror at /usr/share/npm/lib/fetch-package-metadata.js:37:12
> npm ERR! typeerror at addRequestedAndFinish 
> (/usr/share/npm/lib/fetch-package-metadata.js:82:5)
> npm ERR! typeerror at returnAndAddMetadata 
> (/usr/share/npm/lib/fetch-package-metadata.js:117:7)
> npm ERR! typeerror at pickVersionFromRegistryDocument 
> (/usr/share/npm/lib/fetch-package-metadata.js:134:20)
> npm ERR! typeerror at /usr/share/npm/node_modules/iferr/index.js:13:50
> npm ERR! typeerror This is an error with npm itself. Please report this error 
> at:
> npm ERR! typeerror 
> npm ERR! Please include the following file with any support request:
> npm ERR! /root/npm-debug.log
> {noformat}






[jira] [Updated] (HADOOP-16993) Hadoop 3.1.2 download link is broken

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16993:
---
Fix Version/s: asf-site

> Hadoop 3.1.2 download link is broken
> 
>
> Key: HADOOP-16993
> URL: https://issues.apache.org/jira/browse/HADOOP-16993
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: website
>Reporter: Arpit Agarwal
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: asf-site
>
>
> Remove broken Hadoop 3.1.2 download links from the website.
> https://hadoop.apache.org/releases.html






[jira] [Resolved] (HADOOP-17550) property 'ssl.server.keystore.location' has not been set in the ssl configuration file

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17550.

Resolution: Not A Problem

> property 'ssl.server.keystore.location' has not been set in the ssl 
> configuration file
> --
>
> Key: HADOOP-17550
> URL: https://issues.apache.org/jira/browse/HADOOP-17550
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.8.5
>Reporter: hamado dene
>Priority: Major
>
> I am trying to install a Hadoop HA cluster, but the DataNode does not start 
> properly; I get this error:
> 2021-02-23 17:13:26,934 ERROR 
> org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
> java.io.IOException: java.security.GeneralSecurityException: The property 
> 'ssl.server.keystore.location' has not been set in the ssl configuration file.
> at 
> org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:199)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:905)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1303)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:481)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2609)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2497)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2544)
> at 
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2729)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2753)
> Caused by: java.security.GeneralSecurityException: The property 
> 'ssl.server.keystore.location' has not been set in the ssl configuration file.
> at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:152)
> at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:148)
> at 
> org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:197)
> ... 8 more
> But in my ssl-server.xml I correctly set this property:
> <property>
> <name>ssl.server.keystore.location</name>
> <value>/data/hadoop/server.jks</value>
> <description>Keystore to be used by clients like distcp. Must be
> specified.</description>
> </property>
> <property>
> <name>ssl.server.keystore.password</name>
> <value></value>
> <description>Optional. Default value is "".</description>
> </property>
> <property>
> <name>ssl.server.keystore.keypassword</name>
> <value>x</value>
> <description>Optional. Default value is "".</description>
> </property>
> <property>
> <name>ssl.server.keystore.type</name>
> <value>jks</value>
> <description>Optional. The keystore file format, default value is "jks".</description>
> </property>
> Do you have any suggestions to solve this problem?
> My Hadoop version is 2.8.5, Java version is 8, OS is CentOS 7.
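The "Not A Problem" resolution above suggests a local configuration issue rather than a Hadoop bug. One common cause of this error (an assumption here, not confirmed in the thread) is that ssl-server.xml is not on the DataNode's classpath, or that a different resource name is configured. The resource name Hadoop loads is controlled by hadoop.ssl.server.conf in core-site.xml; a minimal sketch:

```xml
<!-- core-site.xml: names the classpath resource Hadoop's SSLFactory loads
     for server-side SSL. ssl-server.xml must therefore be present in the
     Hadoop configuration directory (e.g. $HADOOP_CONF_DIR) on the DataNode. -->
<property>
  <name>hadoop.ssl.server.conf</name>
  <value>ssl-server.xml</value>
</property>
```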



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17623) Add a publish section to the .asf.yaml file

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17623:
---
Fix Version/s: asf-site

> Add a publish section to the .asf.yaml file
> ---
>
> Key: HADOOP-17623
> URL: https://issues.apache.org/jira/browse/HADOOP-17623
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: asf-site
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>







[jira] [Resolved] (HADOOP-17651) Backport to branch-3.1 HADOOP-17371, HADOOP-17621, HADOOP-17625 to update Jetty to 9.4.39

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-17651.

Resolution: Won't Fix

branch-3.1 is EoL. Closing as won't fix.

> Backport to branch-3.1 HADOOP-17371, HADOOP-17621, HADOOP-17625 to update 
> Jetty to 9.4.39
> -
>
> Key: HADOOP-17651
> URL: https://issues.apache.org/jira/browse/HADOOP-17651
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.2.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>







[jira] [Reopened] (HADOOP-17651) Backport to branch-3.1 HADOOP-17371, HADOOP-17621, HADOOP-17625 to update Jetty to 9.4.39

2021-06-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-17651:


> Backport to branch-3.1 HADOOP-17371, HADOOP-17621, HADOOP-17625 to update 
> Jetty to 9.4.39
> -
>
> Key: HADOOP-17651
> URL: https://issues.apache.org/jira/browse/HADOOP-17651
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.2.3
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>







[jira] [Commented] (HADOOP-17544) Mark KeyProvider as Stable

2021-06-02 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17355646#comment-17355646
 ] 

Akira Ajisaka commented on HADOOP-17544:


Hi [~shv], what do you think?

> Mark KeyProvider as Stable
> --
>
> Key: HADOOP-17544
> URL: https://issues.apache.org/jira/browse/HADOOP-17544
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Now, o.a.h.crypto.key.KeyProvider.java is marked Public and Unstable. I think 
> the class is very stable, and it should be annotated as Stable.






[jira] [Commented] (HADOOP-17544) Mark KeyProvider as Stable

2021-06-02 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17355645#comment-17355645
 ] 

Akira Ajisaka commented on HADOOP-17544:


FYI: LinkedIn used the KeyProvider interface to integrate with the company's 
internal key management service.
 
[https://engineering.linkedin.com/blog/2021/the-exabyte-club--linkedin-s-journey-of-scaling-the-hadoop-distr]
{quote}LinkedIn has its own key management service, LiKMS, which is the only 
service certified and approved for managing cryptographic keys and secrets 
internally. We used pluggable interfaces such as KeyProvider supported by HDFS 
to integrate LiKMS with transparent encryption at rest.
{quote}
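For context, a custom KeyProvider implementation such as LiKMS is plugged into HDFS through configuration rather than code changes; a minimal sketch (the kms:// URI and host below are illustrative placeholders, not from the thread):

```xml
<!-- core-site.xml: selects the KeyProvider used by HDFS transparent
     encryption. The URI scheme is resolved through KeyProviderFactory;
     "kms://" maps to the stock KMS client provider, and a custom scheme
     can map to an in-house implementation. Host/port are placeholders. -->
<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://http@kms.example.com:9600/kms</value>
</property>
```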

> Mark KeyProvider as Stable
> --
>
> Key: HADOOP-17544
> URL: https://issues.apache.org/jira/browse/HADOOP-17544
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Now, o.a.h.crypto.key.KeyProvider.java is marked Public and Unstable. I think 
> the class is very stable, and it should be annotated as Stable.






[jira] [Updated] (HADOOP-17563) Update Bouncy Castle to 1.68

2021-05-31 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17563:
---
Fix Version/s: (was: 3.2.3)
   (was: 3.4.0)
   (was: 3.3.1)

Since the patch has been reverted, I've reopened and removed the fix versions.

> Update Bouncy Castle to 1.68
> 
>
> Key: HADOOP-17563
> URL: https://issues.apache.org/jira/browse/HADOOP-17563
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> -Bouncy Castle 1.60 has Hash Collision Vulnerability. Let's update to 1.68.-
> Bouncy Castle 1.60 has the following vulnerabilities. Let's update to 1.68.
>  * [https://nvd.nist.gov/vuln/detail/CVE-2020-26939]
>  * [https://nvd.nist.gov/vuln/detail/CVE-2020-28052]
>  * [https://nvd.nist.gov/vuln/detail/CVE-2020-15522]
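For reference, a Bouncy Castle bump like this is normally a one-line version change in the project pom; a sketch using the standard 1.68 provider coordinates (the exact artifactId used by Hadoop here is an assumption):

```xml
<!-- pom.xml: Bouncy Castle provider, updated from 1.60 to 1.68 to pick up
     the CVE fixes listed above. -->
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcprov-jdk15on</artifactId>
  <version>1.68</version>
</dependency>
```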





