[jira] [Updated] (HADOOP-18889) S3A: V2 SDK client does not work with third-party store

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18889:

Target Version/s: 3.4.0, 3.3.7-aws  (was: 3.4.0)

> S3A: V2 SDK client does not work with third-party store
> ---
>
> Key: HADOOP-18889
> URL: https://issues.apache.org/jira/browse/HADOOP-18889
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> Testing against an external store without specifying a region now blows up 
> because the region is queried off eu-west-1.
> What do we do here? Require the region setting (which wasn't needed 
> before)? And what region do we even provide for third-party stores?






[jira] [Updated] (HADOOP-18889) S3A: V2 SDK client does not work with third-party store

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18889:

Hadoop Flags: Reviewed
Target Version/s: 3.4.0
 Description: 
Testing against an external store without specifying a region now blows up 
because the region is queried off eu-west-1.

What do we do here? Require the region setting (which wasn't needed before)? 
And what region do we even provide for third-party stores?
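A hedged workaround sketch for third-party stores: pinning fs.s3a.endpoint.region stops the client from having to resolve a region against AWS at all. The endpoint and region values below are placeholders, not something this issue mandates.

{code:java}
// Illustrative sketch only: pin a region for a third-party endpoint so the
// client never probes AWS (where the eu-west-1 lookup blows up).
// The endpoint/region values are hypothetical placeholders.
import org.apache.hadoop.conf.Configuration;

public class ThirdPartyStoreConfig {
  public static Configuration configure() {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.endpoint", "https://storage.example.com"); // hypothetical store
    conf.set("fs.s3a.endpoint.region", "us-east-1");  // fixed value used for signing
    conf.set("fs.s3a.path.style.access", "true");     // common for third-party stores
    return conf;
  }
}
{code}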



> S3A: V2 SDK client does not work with third-party store
> ---
>
> Key: HADOOP-18889
> URL: https://issues.apache.org/jira/browse/HADOOP-18889
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> Testing against an external store without specifying a region now blows up 
> because the region is queried off eu-west-1.
> What do we do here? Require the region setting (which wasn't needed 
> before)? And what region do we even provide for third-party stores?






[jira] [Updated] (HADOOP-18890) remove okhttp usage

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18890:

Hadoop Flags: Reviewed

> remove okhttp usage
> ---
>
> Key: HADOOP-18890
> URL: https://issues.apache.org/jira/browse/HADOOP-18890
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, common
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> * relates to HADOOP-18496
> * simplifies the dependencies if hadoop doesn't use multiple 3rd party libs 
> to make http calls
> * okhttp brings in other dependencies like the kotlin runtime
> * hadoop already uses apache httpclient in some places






[jira] [Updated] (HADOOP-18905) Negative timeout in ZKFailovercontroller due to overflow

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18905:

Target Version/s: 3.4.0

> Negative timeout in ZKFailovercontroller due to overflow
> 
>
> Key: HADOOP-18905
> URL: https://issues.apache.org/jira/browse/HADOOP-18905
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.6
>Reporter: ConfX
>Assignee: ConfX
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> The graceful fence timeout of the FailoverController in ZKFailoverController is 
> `ha.failover-controller.graceful-fence.rpc-timeout.ms` * 2. Since users 
> are unaware of this calculation, it risks overflowing to a 
> negative number if users set 
> `ha.failover-controller.graceful-fence.rpc-timeout.ms` to a large value.
>  
> To reproduce:
> 1. set `ha.failover-controller.graceful-fence.rpc-timeout.ms` to 1092752431
> 2. run `mvn surefire:test 
> -Dtest=org.apache.hadoop.ha.TestZKFailoverController#testGracefulFailoverFailBecomingStandby`
>  
> We created a PR that fixes this by checking that the timeout after 
> multiplication is at least 0.
>  
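A minimal sketch of the arithmetic (not the actual ZKFailoverController code): doubling a large int timeout wraps negative unless the product is widened and range-checked, which is what the PR's "at least 0" check amounts to.

{code:java}
// Minimal sketch of the overflow and the guard; names are illustrative.
public class FenceTimeoutCheck {
  static int doubledTimeout(int rpcTimeoutMs) {
    long doubled = 2L * rpcTimeoutMs;  // widen to long before multiplying
    if (doubled < 0 || doubled > Integer.MAX_VALUE) {
      throw new IllegalArgumentException(
          "ha.failover-controller.graceful-fence.rpc-timeout.ms too large: "
          + rpcTimeoutMs);
    }
    return (int) doubled;
  }

  public static void main(String[] args) {
    System.out.println(2 * 1092752431);         // int overflow: prints -2109462434
    System.out.println(doubledTimeout(30_000)); // prints 60000
  }
}
{code}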






[jira] [Updated] (HADOOP-18905) Negative timeout in ZKFailovercontroller due to overflow

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18905:

Component/s: common

> Negative timeout in ZKFailovercontroller due to overflow
> 
>
> Key: HADOOP-18905
> URL: https://issues.apache.org/jira/browse/HADOOP-18905
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.6
>Reporter: ConfX
>Assignee: ConfX
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> The graceful fence timeout of the FailoverController in ZKFailoverController is 
> `ha.failover-controller.graceful-fence.rpc-timeout.ms` * 2. Since users 
> are unaware of this calculation, it risks overflowing to a 
> negative number if users set 
> `ha.failover-controller.graceful-fence.rpc-timeout.ms` to a large value.
>  
> To reproduce:
> 1. set `ha.failover-controller.graceful-fence.rpc-timeout.ms` to 1092752431
> 2. run `mvn surefire:test 
> -Dtest=org.apache.hadoop.ha.TestZKFailoverController#testGracefulFailoverFailBecomingStandby`
>  
> We created a PR that fixes this by checking that the timeout after 
> multiplication is at least 0.
>  






[jira] [Updated] (HADOOP-18915) Tune/extend S3A http connection and thread pool settings

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18915:

Hadoop Flags: Reviewed
Target Version/s: 3.4.0
 Description: 
Increases existing pool sizes, as with server scale and vector
IO, larger pools are needed

  fs.s3a.connection.maximum 200
  fs.s3a.threads.max 96

Adds new configuration options for v2 sdk internal timeouts,
both with default of 60s:

  fs.s3a.connection.acquisition.timeout
  fs.s3a.connection.idle.time

All the pool/timeout options are covered in performance.md

Moves all timeout/duration options in the s3a FS to taking
temporal units (h, m, s, ms,...); retaining the previous default
unit (normally millisecond)

Adds a minimum duration for most of these, in order to recover from
deployments where a timeout has been set on the assumption the unit
was seconds, not millis.

Uses java.time.Duration throughout the codebase;
retaining the older numeric constants in
org.apache.hadoop.fs.s3a.Constants for backwards compatibility;
these are now deprecated.

Adds new class AWSApiCallTimeoutException to be raised on
sdk-related methods and also gateway timeouts. This is a subclass
of org.apache.hadoop.net.ConnectTimeoutException to support
existing retry logic.

+ reverted default value of fs.s3a.create.performance to false; 
inadvertently set to true during testing.
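A hedged usage sketch with the option names listed above; the values and unit suffixes illustrate the new temporal-unit parsing and are not prescriptive defaults.

{code:java}
// Sketch: setting the raised pool sizes and the new duration options,
// assuming the property names from the description. "60s" shows the
// temporal unit suffix; previously only a bare millisecond count worked.
import org.apache.hadoop.conf.Configuration;

public class S3APoolTuning {
  public static Configuration tuned() {
    Configuration conf = new Configuration();
    conf.setInt("fs.s3a.connection.maximum", 200);
    conf.setInt("fs.s3a.threads.max", 96);
    conf.set("fs.s3a.connection.acquisition.timeout", "60s"); // new, default 60s
    conf.set("fs.s3a.connection.idle.time", "60s");           // new, default 60s
    return conf;
  }
}
{code}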





> Tune/extend S3A http connection and thread pool settings
> 
>
> Key: HADOOP-18915
> URL: https://issues.apache.org/jira/browse/HADOOP-18915
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> Increases existing pool sizes, as with server scale and vector
> IO, larger pools are needed
>   fs.s3a.connection.maximum 200
>   fs.s3a.threads.max 96
> Adds new configuration options for v2 sdk internal timeouts,
> both with default of 60s:
>   fs.s3a.connection.acquisition.timeout
>   fs.s3a.connection.idle.time
> All the pool/timeout options are covered in performance.md
> Moves all timeout/duration options in the s3a FS to taking
> temporal units (h, m, s, ms,...); retaining the previous default
> unit (normally millisecond)
> Adds a minimum duration for most of these, in order to recover from
> deployments where a timeout has been set on the assumption the unit
> was seconds, not millis.
> Uses java.time.Duration throughout the codebase;
> retaining the older numeric constants in
> org.apache.hadoop.fs.s3a.Constants for backwards compatibility;
> these are now deprecated.
> Adds new class AWSApiCallTimeoutException to be raised on
> sdk-related methods and also gateway timeouts. This is a subclass
> of org.apache.hadoop.net.ConnectTimeoutException to support
> existing retry logic.
> + reverted default value of fs.s3a.create.performance to false; 
> inadvertently set to true during testing.






[jira] [Updated] (HADOOP-18918) ITestS3GuardTool fails if SSE/DSSE encryption is used

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18918:

Target Version/s: 3.4.0

> ITestS3GuardTool fails if SSE/DSSE encryption is used
> -
>
> Key: HADOOP-18918
> URL: https://issues.apache.org/jira/browse/HADOOP-18918
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.6
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> {code:java}
> [ERROR] Tests run: 15, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 25.989 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardTool
> [ERROR] 
> testLandsatBucketRequireUnencrypted(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardTool)
>   Time elapsed: 0.807 s  <<< ERROR!
> 46: Bucket s3a://landsat-pds: required encryption is none but actual 
> encryption is DSSE-KMS
>     at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.exitException(S3GuardTool.java:915)
>     at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.badState(S3GuardTool.java:881)
>     at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(S3GuardTool.java:511)
>     at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:283)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82)
>     at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:963)
>     at 
> org.apache.hadoop.fs.s3a.s3guard.S3GuardToolTestHelper.runS3GuardCommand(S3GuardToolTestHelper.java:147)
>     at 
> org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.run(AbstractS3GuardToolTestBase.java:114)
>     at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardTool.testLandsatBucketRequireUnencrypted(ITestS3GuardTool.java:74)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
>     at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>     at java.lang.Thread.run(Thread.java:750)
>  {code}
> Since the landsat bucket requires encryption to be "none", the test should be 
> skipped when any encryption algorithm is configured.
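A minimal sketch of such a skip using a JUnit 4 assumption, as S3A integration tests commonly do; the exact wiring into ITestS3GuardTool is illustrative, only the config key matches fs.s3a.encryption.algorithm.

{code:java}
// Sketch: skip the test unless the bucket is configured unencrypted.
import org.apache.hadoop.conf.Configuration;
import org.junit.Assume;

public class EncryptionAssumptions {
  static void assumeUnencrypted(Configuration conf) {
    String algorithm = conf.getTrimmed("fs.s3a.encryption.algorithm", "");
    Assume.assumeTrue("test requires encryption \"none\" but algorithm is "
        + algorithm, algorithm.isEmpty());
  }
}
{code}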






[jira] [Updated] (HADOOP-18908) Improve s3a region handling, including determining from endpoint

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18908:

Hadoop Flags: Reviewed
Target Version/s: 3.4.0, 3.3.7-aws
 Description: 
S3A region logic improved for better inference and
to be compatible with previous releases

1. If you are using an AWS S3 AccessPoint, its region is determined
   from the ARN itself.
2. If fs.s3a.endpoint.region is set and non-empty, it is used.
3. If fs.s3a.endpoint is an s3.*.amazonaws.com URL, 
   the region is determined by parsing the URL. 
   Note: vpce endpoints are not handled by this.
4. If fs.s3a.endpoint.region==null, and none could be determined
   from the endpoint, use us-east-2 as default.
5. If fs.s3a.endpoint.region=="" then it is handed off to
   the default AWS SDK resolution process.

Consult the AWS SDK documentation for the details on its resolution
process, knowing that it is complicated and may use environment variables,
entries in ~/.aws/config, IAM instance information within
EC2 deployments and possibly even JSON resources on the classpath.
Put differently: it is somewhat brittle across deployments.
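The same five-step order as a hedged sketch; this is illustrative pseudologic, not the actual S3AFileSystem implementation, and the endpoint parse is deliberately simplified (no vpce handling).

{code:java}
// Sketch of the resolution order above; method and parameter names are
// illustrative.
public final class RegionResolution {
  static String resolveRegion(String configuredRegion, String endpoint,
      String accessPointRegion) {
    if (accessPointRegion != null) {
      return accessPointRegion;                   // 1. region from the ARN
    }
    if (configuredRegion != null && !configuredRegion.isEmpty()) {
      return configuredRegion;                    // 2. fs.s3a.endpoint.region
    }
    String prefix = "s3.", suffix = ".amazonaws.com";
    if (endpoint != null && endpoint.startsWith(prefix)
        && endpoint.endsWith(suffix)
        && endpoint.length() > prefix.length() + suffix.length()) {
      return endpoint.substring(prefix.length(),
          endpoint.length() - suffix.length());   // 3. s3.<region>.amazonaws.com
    }
    if (configuredRegion == null) {
      return "us-east-2";                         // 4. fallback default
    }
    return null;                                  // 5. "" -> AWS SDK resolution chain
  }
}
{code}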





> Improve s3a region handling, including determining from endpoint
> 
>
> Key: HADOOP-18908
> URL: https://issues.apache.org/jira/browse/HADOOP-18908
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> S3A region logic improved for better inference and
> to be compatible with previous releases
> 1. If you are using an AWS S3 AccessPoint, its region is determined
>from the ARN itself.
> 2. If fs.s3a.endpoint.region is set and non-empty, it is used.
> 3. If fs.s3a.endpoint is an s3.*.amazonaws.com URL, 
>the region is determined by parsing the URL. 
>Note: vpce endpoints are not handled by this.
> 4. If fs.s3a.endpoint.region==null, and none could be determined
>from the endpoint, use us-east-2 as default.
> 5. If fs.s3a.endpoint.region=="" then it is handed off to
>the default AWS SDK resolution process.
> Consult the AWS SDK documentation for the details on its resolution
> process, knowing that it is complicated and may use environment variables,
> entries in ~/.aws/config, IAM instance information within
> EC2 deployments and possibly even JSON resources on the classpath.
> Put differently: it is somewhat brittle across deployments.






[jira] [Updated] (HADOOP-18919) Zookeeper SSL/TLS support in HDFS ZKFC

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18919:

Component/s: common

> Zookeeper SSL/TLS support in HDFS ZKFC
> --
>
> Key: HADOOP-18919
> URL: https://issues.apache.org/jira/browse/HADOOP-18919
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Zita Dombi
>Assignee: Zita Dombi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> HADOOP-18709 added support for Zookeeper to communicate with SSL/TLS enabled 
> in hadoop-common. With those changes we have the necessary parameters that 
> we need to set to enable SSL/TLS in a ZK client.
> YARN-11468 made the SSL communication configurable in YARN; now we need to make 
> similar changes in HDFS to enable it correctly. In HDFS the ZK client is used in 
> the Failover Controller. In this improvement we need to create the ZK client 
> with the necessary SSL configs when enabled, which we can track under a 
> new HDFS config.






[jira] [Updated] (HADOOP-18920) RPC Metrics : Optimize logic for log slow RPCs

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18920:

Target Version/s: 3.4.0

> RPC Metrics : Optimize logic for log slow RPCs
> --
>
> Key: HADOOP-18920
> URL: https://issues.apache.org/jira/browse/HADOOP-18920
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.4.0
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> HADOOP-12325 implemented a capability where "slow" RPCs are logged in the NN log.
> The current logic declares an RPC "slow" when its processing time is more than 
> three standard deviations from the mean.
> However, in practice many slow-RPC log entries are emitted, and sometimes RPCs 
> with a processing time of 1ms are also declared slow; this is not in line with 
> actual expectations.
> Therefore, consider optimizing the slow-RPC condition and adding a 
> `logSlowRPCThresholdMs` variable to judge whether the current RPC is slow, so 
> that only the expected slow RPCs are logged.
> For `logSlowRPCThresholdMs`, we can support dynamic refresh to facilitate 
> adjustments based on the actual operating conditions of the HDFS cluster.
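A minimal sketch of the combined condition, assuming the names from the description rather than the final patch: an RPC is only logged when it is both a statistical outlier and above the absolute threshold.

{code:java}
// Sketch: combine the existing 3-standard-deviation outlier test with the
// proposed absolute floor, so 1ms calls can never be reported as slow.
public final class SlowRpcCheck {
  static boolean isSlowRpc(double processingTimeMs, double mean, double stdDev,
      long logSlowRPCThresholdMs) {
    boolean outlier = processingTimeMs > mean + 3 * stdDev;
    return outlier && processingTimeMs >= logSlowRPCThresholdMs;
  }
}
{code}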






[jira] [Updated] (HADOOP-18920) RPC Metrics : Optimize logic for log slow RPCs

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18920:

Affects Version/s: 3.4.0

> RPC Metrics : Optimize logic for log slow RPCs
> --
>
> Key: HADOOP-18920
> URL: https://issues.apache.org/jira/browse/HADOOP-18920
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> HADOOP-12325 implemented a capability where "slow" RPCs are logged in the NN log.
> The current logic declares an RPC "slow" when its processing time is more than 
> three standard deviations from the mean.
> However, in practice many slow-RPC log entries are emitted, and sometimes RPCs 
> with a processing time of 1ms are also declared slow; this is not in line with 
> actual expectations.
> Therefore, consider optimizing the slow-RPC condition and adding a 
> `logSlowRPCThresholdMs` variable to judge whether the current RPC is slow, so 
> that only the expected slow RPCs are logged.
> For `logSlowRPCThresholdMs`, we can support dynamic refresh to facilitate 
> adjustments based on the actual operating conditions of the HDFS cluster.






[jira] [Updated] (HADOOP-18919) Zookeeper SSL/TLS support in HDFS ZKFC

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18919:

Hadoop Flags: Reviewed
Target Version/s: 3.4.0

> Zookeeper SSL/TLS support in HDFS ZKFC
> --
>
> Key: HADOOP-18919
> URL: https://issues.apache.org/jira/browse/HADOOP-18919
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Zita Dombi
>Assignee: Zita Dombi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> HADOOP-18709 added support for Zookeeper to communicate with SSL/TLS enabled 
> in hadoop-common. With those changes we have the necessary parameters that 
> we need to set to enable SSL/TLS in a ZK client.
> YARN-11468 made the SSL communication configurable in YARN; now we need to make 
> similar changes in HDFS to enable it correctly. In HDFS the ZK client is used in 
> the Failover Controller. In this improvement we need to create the ZK client 
> with the necessary SSL configs when enabled, which we can track under a 
> new HDFS config.






[jira] [Updated] (HADOOP-18919) Zookeeper SSL/TLS support in HDFS ZKFC

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18919:

Affects Version/s: 3.4.0

> Zookeeper SSL/TLS support in HDFS ZKFC
> --
>
> Key: HADOOP-18919
> URL: https://issues.apache.org/jira/browse/HADOOP-18919
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Zita Dombi
>Assignee: Zita Dombi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> HADOOP-18709 added support for Zookeeper to communicate with SSL/TLS enabled 
> in hadoop-common. With those changes we have the necessary parameters that 
> we need to set to enable SSL/TLS in a ZK client.
> YARN-11468 made the SSL communication configurable in YARN; now we need to make 
> similar changes in HDFS to enable it correctly. In HDFS the ZK client is used in 
> the Failover Controller. In this improvement we need to create the ZK client 
> with the necessary SSL configs when enabled, which we can track under a 
> new HDFS config.






[jira] [Updated] (HADOOP-18920) RPC Metrics : Optimize logic for log slow RPCs

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18920:

Component/s: metrics

> RPC Metrics : Optimize logic for log slow RPCs
> --
>
> Key: HADOOP-18920
> URL: https://issues.apache.org/jira/browse/HADOOP-18920
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Affects Versions: 3.4.0
>Reporter: Haiyang Hu
>Assignee: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> HADOOP-12325 implemented a capability where "slow" RPCs are logged in the NN log.
> The current logic declares an RPC "slow" when its processing time is more than 
> three standard deviations from the mean.
> However, in practice many slow-RPC log entries are emitted, and sometimes RPCs 
> with a processing time of 1ms are also declared slow; this is not in line with 
> actual expectations.
> Therefore, consider optimizing the slow-RPC condition and adding a 
> `logSlowRPCThresholdMs` variable to judge whether the current RPC is slow, so 
> that only the expected slow RPCs are logged.
> For `logSlowRPCThresholdMs`, we can support dynamic refresh to facilitate 
> adjustments based on the actual operating conditions of the HDFS cluster.






[jira] [Updated] (HADOOP-18923) Switch to SPDX identifier for license name

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18923:

Affects Version/s: 3.4.0
   3.3.7

> Switch to SPDX identifier for license name
> --
>
> Key: HADOOP-18923
> URL: https://issues.apache.org/jira/browse/HADOOP-18923
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.4.0, 3.3.7
>Reporter: Colm O hEigeartaigh
>Assignee: Colm O hEigeartaigh
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.5, 3.3.7
>
>
> [https://maven.apache.org/pom.html#Licenses]
> "Using an [SPDX identifier|https://spdx.org/licenses/] as the license name is 
> recommended."
> The Apache parent pom is already using this identifier






[jira] [Updated] (HADOOP-18923) Switch to SPDX identifier for license name

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18923:

Target Version/s: 3.4.0, 3.3.7

> Switch to SPDX identifier for license name
> --
>
> Key: HADOOP-18923
> URL: https://issues.apache.org/jira/browse/HADOOP-18923
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0, 3.3.7
>Reporter: Colm O hEigeartaigh
>Assignee: Colm O hEigeartaigh
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.5, 3.3.7
>
>
> [https://maven.apache.org/pom.html#Licenses]
> "Using an [SPDX identifier|https://spdx.org/licenses/] as the license name is 
> recommended."
> The Apache parent pom is already using this identifier






[jira] [Updated] (HADOOP-18923) Switch to SPDX identifier for license name

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18923:

Component/s: common

> Switch to SPDX identifier for license name
> --
>
> Key: HADOOP-18923
> URL: https://issues.apache.org/jira/browse/HADOOP-18923
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0, 3.3.7
>Reporter: Colm O hEigeartaigh
>Assignee: Colm O hEigeartaigh
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.5, 3.3.7
>
>
> [https://maven.apache.org/pom.html#Licenses]
> "Using an [SPDX identifier|https://spdx.org/licenses/] as the license name is 
> recommended."
> The Apache parent pom is already using this identifier






[jira] [Updated] (HADOOP-18936) Upgrade to jetty 9.4.53

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18936?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18936:

Target Version/s: 3.4.0

> Upgrade to jetty 9.4.53
> ---
>
> Key: HADOOP-18936
> URL: https://issues.apache.org/jira/browse/HADOOP-18936
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> 2 CVE fixes in 
> https://github.com/jetty/jetty.project/releases/tag/jetty-9.4.53.v20231009
> 4 more security fixes in 
> https://github.com/jetty/jetty.project/releases/tag/jetty-9.4.52.v20230823






[jira] [Updated] (HADOOP-18946) S3A: testMultiObjectExceptionFilledIn() assertion error

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18946:

Hadoop Flags: Reviewed

> S3A: testMultiObjectExceptionFilledIn() assertion error
> ---
>
> Key: HADOOP-18946
> URL: https://issues.apache.org/jira/browse/HADOOP-18946
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> Failure in the new test of HADOOP-18939.
> I've been fiddling with the sdk upgrade, and only merged HADOOP-18932 after 
> submitting the new pr, so maybe, just maybe, the SDK changed some defaults.
> anyway, 
> {code}
> [ERROR] 
> testMultiObjectExceptionFilledIn(org.apache.hadoop.fs.s3a.impl.TestErrorTranslation)
>   Time elapsed: 0.026 s  <<< FAILURE!
> java.lang.AssertionError: retry policy of MultiObjectException
> at org.junit.Assert.fail(Assert.java:89)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at 
> {code}
> easily fixed






[jira] [Updated] (HADOOP-18948) S3A. Add option fs.s3a.directory.operations.purge.uploads to purge on rename/delete

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18948:

Hadoop Flags: Reviewed
 Description: 
On third-party stores without lifecycle rules it's possible to accrue many GB of 
pending multipart uploads, including from
* magic committer jobs where the spark driver/MR AM failed before commit/abort
* distcp jobs which time out and get aborted
* any client code writing datasets which is interrupted before close.

Although there's a purge-pending-uploads option, that's dangerous because if 
any fs is instantiated with it, it can destroy in-flight work.

Otherwise, the "hadoop s3guard uploads" command does work but needs 
scheduling/manual execution.

Proposed: add a new property {{fs.s3a.directory.operations.purge.uploads}} 
which will automatically cancel all pending uploads under a path (usage sketch 
below)
* delete: everything under the dir
* rename: all under the source dir

This will be done in parallel to the normal operation, but with no attempt to 
post abortMultipartUploads in different threads. The assumption here is that 
this is rare, and it'll be off by default since on AWS people should have 
lifecycle rules for these things.


+ doc (third_party?)
+ add new counter/metric for abort operations, count and duration
+ test to include cost assertions
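A hedged usage sketch with the proposed property name; the final name and default are whatever the PR lands with.

{code:java}
// Sketch: opt in to purging pending multipart uploads on rename/delete.
import org.apache.hadoop.conf.Configuration;

public class PurgeUploadsConfig {
  public static Configuration withPurge() {
    Configuration conf = new Configuration();
    // proposed option; off by default per the description above
    conf.setBoolean("fs.s3a.directory.operations.purge.uploads", true);
    return conf;
  }
}
{code}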









> S3A. Add option fs.s3a.directory.operations.purge.uploads to purge on 
> rename/delete
> ---
>
> Key: HADOOP-18948
> URL: https://issues.apache.org/jira/browse/HADOOP-18948
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> On third-party stores without lifecycle rules it's possible to accrue many GB 
> of pending multipart uploads, including from
> * magic committer jobs where the spark driver/MR AM failed before commit/abort
> * distcp jobs which time out and get aborted
> * any client code writing datasets which is interrupted before close.
> Although there's a purge-pending-uploads option, that's dangerous because if 
> any fs is instantiated with it, it can destroy in-flight work.
> Otherwise, the "hadoop s3guard uploads" command does work but needs 
> scheduling/manual execution.
> Proposed: add a new property {{fs.s3a.directory.operations.purge.uploads}} 
> which will automatically cancel all pending uploads under a path
> * delete: everything under the dir
> * rename: all under the source dir
> This will be done in parallel to the normal operation, but with no attempt to 
> post abortMultipartUploads in different threads. The assumption here is that 
> this is rare, and it'll be off by default since on AWS people should have 
> lifecycle rules for these things.
> + doc (third_party?)
> + add new counter/metric for abort operations, count and duration
> + test to include cost assertions






[jira] [Updated] (HADOOP-18956) Zookeeper SSL/TLS support in ZKDelegationTokenSecretManager and ZKSignerSecretProvider

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18956:

Hadoop Flags: Reviewed

> Zookeeper SSL/TLS support in ZKDelegationTokenSecretManager and 
> ZKSignerSecretProvider
> --
>
> Key: HADOOP-18956
> URL: https://issues.apache.org/jira/browse/HADOOP-18956
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Zita Dombi
>Assignee: István Fajth
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> HADOOP-18709 added support for Zookeeper to communicate with SSL/TLS enabled 
> in hadoop-common. With those changes we have the necessary parameters that 
> we need to set to enable SSL/TLS in a ZK client. That change also touched 
> ZKCuratorManager, so with that it is easy to enable SSL/TLS; for YARN it 
> was done in YARN-11468.
> In DelegationTokenAuthenticationFilter currently we are using 
> CuratorFrameworkFactory; it'd be good to change it to use ZKCuratorManager, 
> and with that we should support SSL/TLS enablement.
> *UPDATE*
> As I investigated this a bit more, it wouldn't be so easy to move to using 
> ZKCuratorManager. 
> DelegationTokenAuthenticationFilter uses ZK in two places: in 
> ZKDelegationTokenSecretManager and in ZKSignerSecretProvider. In both places 
> it uses CuratorFrameworkFactory, but the attributes and creation 
> differ from ZKCuratorManager. 
> In ZKDelegationTokenSecretManager it would be easy to add the new config and, 
> based on that, create ZK with CuratorFrameworkFactory. But 
> ZKSignerSecretProvider is in the hadoop-auth module and with my change it would 
> need hadoop-common, so it would introduce a circular dependency between modules 
> 'hadoop-auth' and 'hadoop-common'. I'm still working on a straightforward 
> solution. 






[jira] [Updated] (HADOOP-18969) S3A: AbstractS3ACostTest to clear bucket fs.s3a.create.performance flag

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18969:

Hadoop Flags: Reviewed

> S3A:  AbstractS3ACostTest to clear bucket fs.s3a.create.performance flag
> 
>
> Key: HADOOP-18969
> URL: https://issues.apache.org/jira/browse/HADOOP-18969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> If there's a bucket-specific fs.s3a.create.performance flag then the create 
> tests can fail as the costs are lower than expected. 
> Trivial fix: add it to the removeBaseAndBucketOverrides list.
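A minimal sketch of that fix, assuming the S3ATestUtils helper and the Constants name for the option as used across the S3A test suite; both names are assumptions here, not confirmed by this issue.

{code:java}
// Sketch: drop both the base and per-bucket overrides of the flag before
// the test sets up its own cost expectations. Helper/constant names assumed.
import org.apache.hadoop.conf.Configuration;

import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_CREATE_PERFORMANCE;
import static org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;

public class CostTestSetup {
  public static void clearCreatePerformance(Configuration conf) {
    removeBaseAndBucketOverrides(conf, FS_S3A_CREATE_PERFORMANCE);
  }
}
{code}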






[jira] [Updated] (HADOOP-18995) S3A: Upgrade AWS SDK version to 2.21.33 for Amazon S3 Express One Zone support

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18995:

Hadoop Flags: Reviewed
Target Version/s: 3.4.0, 3.3.7-aws

> S3A: Upgrade AWS SDK version to 2.21.33 for Amazon S3 Express One Zone support
> --
>
> Key: HADOOP-18995
> URL: https://issues.apache.org/jira/browse/HADOOP-18995
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> Upgrade SDK version to 2.21.33, which adds S3 Express One Zone support.






[jira] [Updated] (HADOOP-18996) S3A to provide full support for S3 Express One Zone

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18996:

Hadoop Flags: Reviewed
Target Version/s: 3.4.0

> S3A to provide full support for S3 Express One Zone
> ---
>
> Key: HADOOP-18996
> URL: https://issues.apache.org/jira/browse/HADOOP-18996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> HADOOP-18995 upgrades the SDK version, which allows connecting to S3 Express 
> One Zone buckets. 
> Complete support needs to be added: addressing tests that fail with S3 Express 
> One Zone, additional tests, documentation etc. 
> * hadoop-common path capability to indicate that treewalking may encounter 
> missing dirs
> * use this in treewalking code in shell, mapreduce FileInputFormat etc to not 
> fail during treewalks
> * extra path capability for s3express too.
> * tests for this
> * anything else






[jira] [Updated] (HADOOP-18997) S3A: Add option fs.s3a.s3express.create.session to enable/disable CreateSession

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18997:

Hadoop Flags: Reviewed
Target Version/s: 3.4.0

> S3A: Add option fs.s3a.s3express.create.session to enable/disable 
> CreateSession
> ---
>
> Key: HADOOP-18997
> URL: https://issues.apache.org/jira/browse/HADOOP-18997
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> add a way to disable the need to use the CreateSession call, so as to allow 
> for
> * simplifying our role test runs
> * benchmarking the performance hit
> * troubleshooting IAM permissions
> this can also be disabled from the sysprop "aws.disableS3ExpressAuth"
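A minimal sketch of both switches named above; treat the exact semantics as per the final documentation.

{code:java}
// Sketch: disable CreateSession per-filesystem via the S3A option, or
// JVM-wide via the AWS SDK system property mentioned above.
import org.apache.hadoop.conf.Configuration;

public class S3ExpressSession {
  public static Configuration withoutCreateSession() {
    Configuration conf = new Configuration();
    conf.setBoolean("fs.s3a.s3express.create.session", false);
    return conf;
  }

  public static void disableJvmWide() {
    System.setProperty("aws.disableS3ExpressAuth", "true");
  }
}
{code}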






[jira] [Updated] (HADOOP-19010) NullPointerException in Hadoop Credential Check CLI Command

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-19010:

Target Version/s: 3.4.0

> NullPointerException in Hadoop Credential Check CLI Command
> ---
>
> Key: HADOOP-19010
> URL: https://issues.apache.org/jira/browse/HADOOP-19010
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Anika Kelhanka
>Assignee: Anika Kelhanka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> *Description*: Hadoop's credential check throws {{NullPointerException}} when 
> the alias is not found.
> {code:bash}
> hadoop credential check "fs.gs.proxy.username" -provider 
> "jceks://file/usr/lib/hive/conf/hive.jceks" {code}
> {noformat}
> Checking aliases for CredentialProvider: 
> jceks://file/usr/lib/hive/conf/hive.jceks
> Enter alias password: 
> java.lang.NullPointerException
> at 
> org.apache.hadoop.security.alias.CredentialShell$CheckCommand.execute(CredentialShell.java:369)
> at org.apache.hadoop.tools.CommandShell.run(CommandShell.java:73)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82)
> at 
> org.apache.hadoop.security.alias.CredentialShell.main(CredentialShell.java:529)
> {noformat}
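A minimal sketch of the missing guard in CheckCommand; the surrounding method shape is paraphrased, only the null check is the point.

{code:java}
// Sketch: CredentialProvider.getCredentialEntry() returns null for an
// unknown alias, so CheckCommand must branch on that instead of
// dereferencing the entry.
import java.io.IOException;

import org.apache.hadoop.security.alias.CredentialProvider;

public class CheckCommandSketch {
  void checkAlias(CredentialProvider provider, String alias) throws IOException {
    CredentialProvider.CredentialEntry entry = provider.getCredentialEntry(alias);
    if (entry == null) {
      System.out.println("Alias " + alias + " does not exist in the provider.");
      return;
    }
    // ... compare the entered password with entry.getCredential() ...
  }
}
{code}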






[jira] [Updated] (HADOOP-19017) Setup pre-commit CI for Windows 10

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-19017:

Hadoop Flags: Reviewed
Target Version/s: 3.4.0

> Setup pre-commit CI for Windows 10
> --
>
> Key: HADOOP-19017
> URL: https://issues.apache.org/jira/browse/HADOOP-19017
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: build
>Affects Versions: 3.4.0
> Environment: Windows 10
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Critical
>  Labels: Jenkins, pull-request-available
> Fix For: 3.4.0
>
>
> We need to set up a pre-commit CI for validating the Hadoop PRs against 
> Windows 10.
> On a sidenote, we've got the nightly Jenkins CI running for Hadoop on Windows 
> 10 - 
> https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-win10-x86_64/.






[jira] [Updated] (HADOOP-19051) Hadoop 3.4.0 Big feature/improvement highlight addendum

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-19051:

Target Version/s: 3.4.0, 3.5.0

> Hadoop 3.4.0 Big feature/improvement highlight addendum
> ---
>
> Key: HADOOP-19051
> URL: https://issues.apache.org/jira/browse/HADOOP-19051
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Benjamin Teke
>Assignee: Benjamin Teke
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.5.0
>
>
> Capacity Scheduler was redesigned to add new capacity modes; it should be 
> mentioned as part of the 3.4.0 YARN improvements. Reference: 
> YARN-10496/YARN-10888/YARN-10889






[jira] [Updated] (HADOOP-19020) Update the year to 2024

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-19020:

Target Version/s: 3.4.0, 2.10.3, 3.3.7

> Update the year to 2024
> ---
>
> Key: HADOOP-19020
> URL: https://issues.apache.org/jira/browse/HADOOP-19020
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.3, 3.3.7
>
>
> Update the year to 2024






[jira] [Updated] (HADOOP-18925) S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable CopyFromLocalOperation

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18925:

Hadoop Flags: Reviewed

> S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable 
> CopyFromLocalOperation
> -
>
> Key: HADOOP-18925
> URL: https://issues.apache.org/jira/browse/HADOOP-18925
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Reported failure of CopyFromLocalOperation.getFinalPath() during job 
> submission with s3a declared as the cluster fs.
> Add an emergency option to disable this optimised uploader and revert to the 
> superclass implementation.






[jira] [Updated] (HADOOP-18926) Add documentation related to NodeFencer

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18926:

Target Version/s: 3.4.0

> Add documentation related to NodeFencer
> ---
>
> Key: HADOOP-18926
> URL: https://issues.apache.org/jira/browse/HADOOP-18926
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, ha
>Affects Versions: 3.3.4, 3.3.6
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: screenshot-1.png
>
>
> In the NodeFencer file, some important comments are missing.
> This happens here:
>  !screenshot-1.png! 
> The guidance for ShellCommandFencer is missing here.
> Improving this would make the distributed system more robust.






[jira] [Updated] (HADOOP-18927) S3ARetryHandler to treat SocketExceptions as connectivity failures

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18927:

Hadoop Flags: Reviewed
Target Version/s: 3.4.0

> S3ARetryHandler to treat SocketExceptions as connectivity failures
> --
>
> Key: HADOOP-18927
> URL: https://issues.apache.org/jira/browse/HADOOP-18927
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.4.0
>
>
> I've got a v1 SDK stack trace where a TCP connection reset is breaking a 
> large upload. That should be recoverable with retries.
> {code}
> com.amazonaws.SdkClientException: Unable to execute HTTP request: Connection 
> reset by peer: Unable to execute HTTP request: Connection reset by peer at...
> {code}
> proposed:
> * S3ARetryPolicy to map SocketException to connectivity failure
> * See if we can create a test for this, ideally under the aws sdk.
> I'm now unsure about how well we handle these IO problems... a quick 
> experiment with the 3.3.5 release shows that the retry policy retries on 
> whatever exception chain has an unknown host for the endpoint. 
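A hedged sketch of the proposed mapping, in the style of an exception-class-to-policy map such as S3ARetryPolicy builds; the connectivityFailure policy name here is an assumption, standing in for whatever policy the real code applies to connection errors.

{code:java}
// Sketch only: treat SocketException like other network failures so a
// connection reset is retried rather than treated as fatal.
import java.net.SocketException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.retry.RetryPolicy;

public class RetryMappingSketch {
  Map<Class<? extends Exception>, RetryPolicy> withSocketExceptions(
      Map<Class<? extends Exception>, RetryPolicy> policyMap,
      RetryPolicy connectivityFailure) {
    Map<Class<? extends Exception>, RetryPolicy> map = new HashMap<>(policyMap);
    map.put(SocketException.class, connectivityFailure); // retry with backoff
    return map;
  }
}
{code}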






[jira] [Updated] (HADOOP-18929) Build failure while trying to create apache 3.3.7 release locally.

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18929:

Hadoop Flags: Reviewed
Target Version/s: 3.4.0

> Build failure while trying to create apache 3.3.7 release locally.
> --
>
> Key: HADOOP-18929
> URL: https://issues.apache.org/jira/browse/HADOOP-18929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Mukund Thakur
>Assignee: PJ Fanning
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> {noformat}
> [INFO] ---< org.apache.hadoop:hadoop-client-check-test-invariants >---
> [INFO] Building Apache Hadoop Client Packaging Invariants for Test 3.3.9-SNAPSHOT [105/111]
> [INFO] [ pom ]-
> [INFO] 
> [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-banned-dependencies) @ hadoop-client-check-test-invariants ---
> [INFO] Adding ignorable dependency: org.apache.hadoop:hadoop-annotations:null
> [INFO]   Adding ignore: *
> [WARNING] Rule 1: org.apache.maven.plugins.enforcer.BanDuplicateClasses failed with message:
> Duplicate classes found:
>   Found in:
>     org.apache.hadoop:hadoop-client-minicluster:jar:3.3.9-SNAPSHOT:compile
>     org.apache.hadoop:hadoop-client-runtime:jar:3.3.9-SNAPSHOT:compile
>   Duplicate classes:
>     META-INF/versions/9/module-info.class
> {noformat}
> CC [~ste...@apache.org]  [~weichu] 






[jira] [Updated] (HADOOP-18929) Build failure while trying to create apache 3.3.7 release locally.

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18929:

Component/s: build
 (was: common)

> Build failure while trying to create apache 3.3.7 release locally.
> --
>
> Key: HADOOP-18929
> URL: https://issues.apache.org/jira/browse/HADOOP-18929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Mukund Thakur
>Assignee: PJ Fanning
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> {noformat}
> [INFO] ---< org.apache.hadoop:hadoop-client-check-test-invariants >---
> [INFO] Building Apache Hadoop Client Packaging Invariants for Test 3.3.9-SNAPSHOT [105/111]
> [INFO] [ pom ]-
> [INFO] 
> [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-banned-dependencies) @ hadoop-client-check-test-invariants ---
> [INFO] Adding ignorable dependency: org.apache.hadoop:hadoop-annotations:null
> [INFO]   Adding ignore: *
> [WARNING] Rule 1: org.apache.maven.plugins.enforcer.BanDuplicateClasses failed with message:
> Duplicate classes found:
>   Found in:
>     org.apache.hadoop:hadoop-client-minicluster:jar:3.3.9-SNAPSHOT:compile
>     org.apache.hadoop:hadoop-client-runtime:jar:3.3.9-SNAPSHOT:compile
>   Duplicate classes:
>     META-INF/versions/9/module-info.class
> {noformat}
> CC [~ste...@apache.org]  [~weichu] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18933) upgrade netty to 4.1.100 due to CVE

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18933:

Hadoop Flags: Reviewed
Target Version/s: 3.4.0

> upgrade netty to 4.1.100 due to CVE
> ---
>
> Key: HADOOP-18933
> URL: https://issues.apache.org/jira/browse/HADOOP-18933
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.6
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> follow up to https://issues.apache.org/jira/browse/HADOOP-18783
> https://netty.io/news/2023/10/10/4-1-100-Final.html
> security advisory 
> https://github.com/netty/netty/security/advisories/GHSA-xpw8-rcwv-8f8p
> "HTTP/2 Rapid Reset Attack - DDoS vector in the HTTP/2 protocol due RST 
> framesHTTP/2 Rapid Reset Attack - DDoS vector in the HTTP/2 protocol due RST 
> frames



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18929) Build failure while trying to create apache 3.3.7 release locally.

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18929:

Component/s: common

> Build failure while trying to create apache 3.3.7 release locally.
> --
>
> Key: HADOOP-18929
> URL: https://issues.apache.org/jira/browse/HADOOP-18929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.6
>Reporter: Mukund Thakur
>Assignee: PJ Fanning
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> {noformat}
> [INFO] ---< org.apache.hadoop:hadoop-client-check-test-invariants >---
> [INFO] Building Apache Hadoop Client Packaging Invariants 
> for Test 3.3.9-SNAPSHOT [105/111]
> [INFO] --------------------------------[ pom ]---------------------------------
> [INFO] 
> [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce 
> (enforce-banned-dependencies) @ hadoop-client-check-test-invariants ---
> [INFO] Adding ignorable dependency: 
> org.apache.hadoop:hadoop-annotations:null
> [INFO]   Adding ignore: *
> [WARNING] Rule 1: 
> org.apache.maven.plugins.enforcer.BanDuplicateClasses failed with message:
> Duplicate classes found:
>   Found in:
>     org.apache.hadoop:hadoop-client-minicluster:jar:3.3.9-SNAPSHOT:compile
>     org.apache.hadoop:hadoop-client-runtime:jar:3.3.9-SNAPSHOT:compile
>   Duplicate classes:
>     META-INF/versions/9/module-info.class
> {noformat}
> CC [~ste...@apache.org]  [~weichu] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18932) Upgrade AWS v2 SDK to 2.20.160 and v1 to 1.12.565

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18932:

Hadoop Flags: Reviewed

> Upgrade AWS v2 SDK to 2.20.160 and v1 to 1.12.565
> -
>
> Key: HADOOP-18932
> URL: https://issues.apache.org/jira/browse/HADOOP-18932
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> Bump up the SDK versions for both... even if we don't ship v1, it helps us 
> qualify releases with newer versions, and means that an upgrade of v1 alone 
> in branch-3.3 will be in sync.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18941) Modify HBase version in BUILDING.txt

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18941:

Target Version/s: 3.4.0

> Modify HBase version in BUILDING.txt
> 
>
> Key: HADOOP-18941
> URL: https://issues.apache.org/jira/browse/HADOOP-18941
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Zepeng Zhang
>Assignee: Zepeng Zhang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> In the current BUILDING.txt document, the HBase version used by YARN 
> Timeline Service V2 is older than the one actually required. Hence, I hereby 
> request that this outdated description in the document be corrected.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18941) Modify HBase version in BUILDING.txt

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18941:

Component/s: common

> Modify HBase version in BUILDING.txt
> 
>
> Key: HADOOP-18941
> URL: https://issues.apache.org/jira/browse/HADOOP-18941
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Zepeng Zhang
>Assignee: Zepeng Zhang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> In the current BUILDING.txt document, the HBase version used by YARN 
> Timeline Service V2 is older than the one actually required. Hence, I hereby 
> request that this outdated description in the document be corrected.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18939) NPE in AWS v2 SDK RetryOnErrorCodeCondition.shouldRetry()

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18939:

Hadoop Flags: Reviewed

> NPE in AWS v2 SDK RetryOnErrorCodeCondition.shouldRetry()
> -
>
> Key: HADOOP-18939
> URL: https://issues.apache.org/jira/browse/HADOOP-18939
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> NPE in error handling code of RetryOnErrorCodeCondition.shouldRetry(); in 
> bundle-2.20.128.jar
> This is AWS SDK code; fix needs to go there. 
> {code}
> Caused by: java.lang.NullPointerException
>   at 
> software.amazon.awssdk.awscore.retry.conditions.RetryOnErrorCodeCondition.shouldRetry(RetryOnErrorCodeCondition.java:45)
>  ~[bundle-2.20.128.jar:?]
>   at 
> software.amazon.awssdk.core.retry.conditions.OrRetryCondition.lambda$shouldRetry$0(OrRetryCondition.java:46)
>  ~[bundle-2.20.128.jar:?]
>   at java.util.stream.MatchOps$1MatchSink.accept(MatchOps.java:90) 
> ~[?:1.8.0_382]
> {code}
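A minimal sketch of the kind of null-guard that would avoid this NPE,
assuming the failure comes from dereferencing a missing AwsErrorDetails; this
is illustrative, not the actual SDK patch:

{code:java}
import java.util.Set;
import software.amazon.awssdk.awscore.exception.AwsServiceException;

public final class RetryGuardSketch {
  static boolean shouldRetrySafely(Throwable exception,
      Set<String> retryableErrorCodes) {
    if (!(exception instanceof AwsServiceException)) {
      return false;
    }
    AwsServiceException ase = (AwsServiceException) exception;
    // awsErrorDetails() can be null for exceptions synthesised client-side
    // rather than parsed from a service response; guard before dereferencing.
    return ase.awsErrorDetails() != null
        && retryableErrorCodes.contains(ase.awsErrorDetails().errorCode());
  }
}
{code}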



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18941) Modify HBase version in BUILDING.txt

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18941:

Affects Version/s: 3.4.0

> Modify HBase version in BUILDING.txt
> 
>
> Key: HADOOP-18941
> URL: https://issues.apache.org/jira/browse/HADOOP-18941
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Zepeng Zhang
>Assignee: Zepeng Zhang
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> In the current BUILDING.txt document, the HBase version used by YARN 
> Timeline Service V2 is older than the one actually required. Hence, I hereby 
> request that this outdated description in the document be corrected.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18939) NPE in AWS v2 SDK RetryOnErrorCodeCondition.shouldRetry()

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18939:

Hadoop Flags: Reviewed

> NPE in AWS v2 SDK RetryOnErrorCodeCondition.shouldRetry()
> -
>
> Key: HADOOP-18939
> URL: https://issues.apache.org/jira/browse/HADOOP-18939
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> NPE in error handling code of RetryOnErrorCodeCondition.shouldRetry(); in 
> bundle-2.20.128.jar
> This is AWS SDK code; fix needs to go there. 
> {code}
> Caused by: java.lang.NullPointerException
>   at 
> software.amazon.awssdk.awscore.retry.conditions.RetryOnErrorCodeCondition.shouldRetry(RetryOnErrorCodeCondition.java:45)
>  ~[bundle-2.20.128.jar:?]
>   at 
> software.amazon.awssdk.core.retry.conditions.OrRetryCondition.lambda$shouldRetry$0(OrRetryCondition.java:46)
>  ~[bundle-2.20.128.jar:?]
>   at java.util.stream.MatchOps$1MatchSink.accept(MatchOps.java:90) 
> ~[?:1.8.0_382]
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18942) Upgrade ZooKeeper to 3.7.2

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18942:

Target Version/s: 3.4.0

> Upgrade ZooKeeper to 3.7.2
> --
>
> Key: HADOOP-18942
> URL: https://issues.apache.org/jira/browse/HADOOP-18942
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0, 3.3.7
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7
>
>
> While HADOOP-18613 proposes upgrading ZooKeeper to 3.8, that would bring 
> dependency conflicts. Upgrading to ZooKeeper 3.7 could be an alternative 
> short-term fix for addressing the CVEs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18942) Upgrade ZooKeeper to 3.7.2

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18942:

Component/s: common

> Upgrade ZooKeeper to 3.7.2
> --
>
> Key: HADOOP-18942
> URL: https://issues.apache.org/jira/browse/HADOOP-18942
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0, 3.3.7
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7
>
>
> While HADOOP-18613 proposes upgrading ZooKeeper to 3.8, that would bring 
> dependency conflicts. Upgrading to ZooKeeper 3.7 could be an alternative 
> short-term fix for addressing the CVEs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18946) S3A: testMultiObjectExceptionFilledIn() assertion error

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18946:

Target Version/s: 3.4.0
 Description: 
Failure in the new test of HADOOP-18939.

I've been fiddling with the sdk upgrade, and only merged HADOOP-18932 after 
submitting the new pr, so maybe, just maybe, the SDK changed some defaults.

anyway, 

{code}
[ERROR] 
testMultiObjectExceptionFilledIn(org.apache.hadoop.fs.s3a.impl.TestErrorTranslation)
  Time elapsed: 0.026 s  <<< FAILURE!
java.lang.AssertionError: retry policy of MultiObjectException
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.assertTrue(Assert.java:42)
at 
{code}

easily fixed

  was:

Failure in the new test of HADOOP-18939.

I've been fiddling with the sdk upgrade, and only merged HADOOP-18932 after 
submitting the new pr, so maybe, just maybe, the SDK changed some defaults.

anyway, 

{code}
[ERROR] 
testMultiObjectExceptionFilledIn(org.apache.hadoop.fs.s3a.impl.TestErrorTranslation)
  Time elapsed: 0.026 s  <<< FAILURE!
java.lang.AssertionError: retry policy of MultiObjectException
at org.junit.Assert.fail(Assert.java:89)
at org.junit.Assert.assertTrue(Assert.java:42)
at 
{code}

easily fixed


> S3A: testMultiObjectExceptionFilledIn() assertion error
> ---
>
> Key: HADOOP-18946
> URL: https://issues.apache.org/jira/browse/HADOOP-18946
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> Failure in the new test of HADOOP-18939.
> I've been fiddling with the sdk upgrade, and only merged HADOOP-18932 after 
> submitting the new pr, so maybe, just maybe, the SDK changed some defaults.
> anyway, 
> {code}
> [ERROR] 
> testMultiObjectExceptionFilledIn(org.apache.hadoop.fs.s3a.impl.TestErrorTranslation)
>   Time elapsed: 0.026 s  <<< FAILURE!
> java.lang.AssertionError: retry policy of MultiObjectException
> at org.junit.Assert.fail(Assert.java:89)
> at org.junit.Assert.assertTrue(Assert.java:42)
> at 
> {code}
> easily fixed



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18945) S3A: IAMInstanceCredentialsProvider failing: Failed to load credentials from IMDS

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18945:

 Hadoop Flags: Reviewed
 Target Version/s: 3.4.0
Affects Version/s: 3.4.0
   (was: 7.2.18.0)

> S3A: IAMInstanceCredentialsProvider failing: Failed to load credentials from 
> IMDS
> -
>
> Key: HADOOP-18945
> URL: https://issues.apache.org/jira/browse/HADOOP-18945
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> Failures in Impala test VMs using IAM for auth
> {code}
> Failed to open file as a parquet file: java.net.SocketTimeoutException: 
> re-open 
> s3a://impala-test-uswest2-1/test-warehouse/test_pre_gregorian_date_parquet_2e80ae30.db/hive2_pre_gregorian.parquet
>  at 84 on 
> s3a://impala-test-uswest2-1/test-warehouse/test_pre_gregorian_date_parquet_2e80ae30.db/hive2_pre_gregorian.parquet:
>  org.apache.hadoop.fs.s3a.auth.NoAwsCredentialsException: +: Failed to load 
> credentials from IMDS
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18942) Upgrade ZooKeeper to 3.7.2

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18942?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18942:

Affects Version/s: 3.4.0
   3.3.7

> Upgrade ZooKeeper to 3.7.2
> --
>
> Key: HADOOP-18942
> URL: https://issues.apache.org/jira/browse/HADOOP-18942
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0, 3.3.7
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7
>
>
> While HADOOP-18613 proposes upgrading ZooKeeper to 3.8, that would bring 
> dependency conflicts. Upgrading to ZooKeeper 3.7 could be an alternative 
> short-term fix for addressing the CVEs.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18949) upgrade maven dependency plugin due to security issue

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18949:

Hadoop Flags: Reviewed
Target Version/s: 3.4.0

> upgrade maven dependency plugin due to security issue
> -
>
> Key: HADOOP-18949
> URL: https://issues.apache.org/jira/browse/HADOOP-18949
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> https://github.com/advisories/GHSA-2f88-5hg8-9x2x



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18949) upgrade maven dependency plugin due to security issue

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18949:

Affects Version/s: 3.4.0

> upgrade maven dependency plugin due to security issue
> -
>
> Key: HADOOP-18949
> URL: https://issues.apache.org/jira/browse/HADOOP-18949
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> https://github.com/advisories/GHSA-2f88-5hg8-9x2x



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18954) Filter NaN values from JMX json interface

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18954:

Target Version/s: 3.4.0

> Filter NaN values from JMX json interface
> -
>
> Key: HADOOP-18954
> URL: https://issues.apache.org/jira/browse/HADOOP-18954
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Bence Kosztolnik
>Assignee: Bence Kosztolnik
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> As we can see in this [Yarn 
> documentation|https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html]
>  beans can represent Float values as NaN. These values will be represented in 
> the JMX response JSON like:
> {noformat}
> ...
> "GuaranteedCapacity": NaN,
> ...
> {noformat}
> Based on the [JSON spec|https://www.json.org/], NaN is not a valid JSON 
> token (although some parser libraries can handle it), so not every consumer 
> can parse values like these.
> To keep the output parseable, a new feature flag should be created.
> The new feature will replace NaN values with 0.0.
> The feature is turned off by default. It can be enabled with the 
> *hadoop.http.jmx.nan-filter.enabled* config.
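A minimal sketch of the filtering idea, assuming the flag simply rewrites NaN
metric values before serialisation (the wiring into the JMX servlet is
omitted; the class name is ours):

{code:java}
public final class NanFilterSketch {
  /** Replace NaN with 0.0 when the filter is enabled; pass through otherwise. */
  static Object filterNan(Object value, boolean nanFilterEnabled) {
    if (nanFilterEnabled) {
      if (value instanceof Double && ((Double) value).isNaN()) {
        return 0.0d;
      }
      if (value instanceof Float && ((Float) value).isNaN()) {
        return 0.0f;
      }
    }
    return value;
  }
}
{code}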



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18956) Zookeeper SSL/TLS support in ZKDelegationTokenSecretManager and ZKSignerSecretProvider

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18956:

Component/s: common

> Zookeeper SSL/TLS support in ZKDelegationTokenSecretManager and 
> ZKSignerSecretProvider
> --
>
> Key: HADOOP-18956
> URL: https://issues.apache.org/jira/browse/HADOOP-18956
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Zita Dombi
>Assignee: István Fajth
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> HADOOP-18709 added support in hadoop-common for Zookeeper to communicate 
> with SSL/TLS enabled. With those changes we have the necessary parameters 
> that need to be set to enable SSL/TLS in a ZK client. That change also 
> touched ZKCuratorManager, so enabling SSL/TLS through it is easy; for Yarn 
> this was done in YARN-11468.
> DelegationTokenAuthenticationFilter currently uses CuratorFrameworkFactory; 
> it'd be good to change it to use ZKCuratorManager so that SSL/TLS enablement 
> is supported there as well.
> *UPDATE*
> Having investigated this a bit more, it wouldn't be so easy to move to using 
> ZKCuratorManager. 
> DelegationTokenAuthenticationFilter uses ZK from two places: 
> ZKDelegationTokenSecretManager and ZKSignerSecretProvider. Both places use 
> CuratorFrameworkFactory, but the attributes and creation logic differ from 
> ZKCuratorManager. 
> In ZKDelegationTokenSecretManager it would be easy to add the new config 
> and, based on that, create the ZK client with CuratorFrameworkFactory. But 
> ZKSignerSecretProvider is in the hadoop-auth module, and with my change it 
> would need hadoop-common, introducing a circular dependency between the 
> 'hadoop-auth' and 'hadoop-common' modules. I'm still working on a 
> straightforward solution. 
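For reference, a hedged sketch of the client-side knobs involved; the
property names below are the standard ZooKeeper client system properties, not
whatever configuration keys the eventual Hadoop patch exposes:

{code:java}
public final class ZkTlsClientSketch {
  public static void main(String[] args) {
    // Switch the ZooKeeper client to the Netty transport with TLS enabled.
    System.setProperty("zookeeper.client.secure", "true");
    System.setProperty("zookeeper.clientCnxnSocket",
        "org.apache.zookeeper.ClientCnxnSocketNetty");
    System.setProperty("zookeeper.ssl.keyStore.location", "/path/to/keystore.jks");
    System.setProperty("zookeeper.ssl.trustStore.location", "/path/to/truststore.jks");
    // Any Curator/ZooKeeper client created after this point connects over TLS.
  }
}
{code}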



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18957) Use StandardCharsets.UTF_8 constant

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18957:

Component/s: common

> Use StandardCharsets.UTF_8 constant
> ---
>
> Key: HADOOP-18957
> URL: https://issues.apache.org/jira/browse/HADOOP-18957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> * there are some places in the code that have to check for 
> UnsupportedCharsetException when explicitly using the charset name "UTF-8"
> * using StandardCharsets.UTF_8 is more efficient because the Java libraries 
> usually have to look up the charset when you provide it as a String 
> parameter instead
> * also stop using Guava Charsets and use StandardCharsets
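A small before/after illustration of the point (the method names are ours):

{code:java}
import java.io.UnsupportedEncodingException;
import java.nio.charset.StandardCharsets;

public final class Utf8Sketch {
  // Before: charset looked up by name, plus a checked exception to handle.
  static byte[] before(String s) throws UnsupportedEncodingException {
    return s.getBytes("UTF-8");
  }

  // After: constant reference, no lookup, no checked exception.
  static byte[] after(String s) {
    return s.getBytes(StandardCharsets.UTF_8);
  }
}
{code}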



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18957) Use StandardCharsets.UTF_8 constant

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18957:

Target Version/s: 3.4.0

> Use StandardCharsets.UTF_8 constant
> ---
>
> Key: HADOOP-18957
> URL: https://issues.apache.org/jira/browse/HADOOP-18957
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> * there are some places in the code that have to check for 
> UnsupportedCharsetException when explicitly using the charset name "UTF-8"
> * using StandardCharsets.UTF_8 is more efficient because the Java libs 
> usually have to look up the charsets when you provide it as String param 
> instead
> * also stop using Guava Charsets and use StandardCharsets



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18956) Zookeeper SSL/TLS support in ZKDelegationTokenSecretManager and ZKSignerSecretProvider

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18956:

Target Version/s: 3.4.0

> Zookeeper SSL/TLS support in ZKDelegationTokenSecretManager and 
> ZKSignerSecretProvider
> --
>
> Key: HADOOP-18956
> URL: https://issues.apache.org/jira/browse/HADOOP-18956
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Zita Dombi
>Assignee: István Fajth
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> HADOOP-18709 added support in hadoop-common for Zookeeper to communicate 
> with SSL/TLS enabled. With those changes we have the necessary parameters 
> that need to be set to enable SSL/TLS in a ZK client. That change also 
> touched ZKCuratorManager, so enabling SSL/TLS through it is easy; for Yarn 
> this was done in YARN-11468.
> DelegationTokenAuthenticationFilter currently uses CuratorFrameworkFactory; 
> it'd be good to change it to use ZKCuratorManager so that SSL/TLS enablement 
> is supported there as well.
> *UPDATE*
> Having investigated this a bit more, it wouldn't be so easy to move to using 
> ZKCuratorManager. 
> DelegationTokenAuthenticationFilter uses ZK from two places: 
> ZKDelegationTokenSecretManager and ZKSignerSecretProvider. Both places use 
> CuratorFrameworkFactory, but the attributes and creation logic differ from 
> ZKCuratorManager. 
> In ZKDelegationTokenSecretManager it would be easy to add the new config 
> and, based on that, create the ZK client with CuratorFrameworkFactory. But 
> ZKSignerSecretProvider is in the hadoop-auth module, and with my change it 
> would need hadoop-common, introducing a circular dependency between the 
> 'hadoop-auth' and 'hadoop-common' modules. I'm still working on a 
> straightforward solution. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18956) Zookeeper SSL/TLS support in ZKDelegationTokenSecretManager and ZKSignerSecretProvider

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18956:

Affects Version/s: 3.4.0

> Zookeeper SSL/TLS support in ZKDelegationTokenSecretManager and 
> ZKSignerSecretProvider
> --
>
> Key: HADOOP-18956
> URL: https://issues.apache.org/jira/browse/HADOOP-18956
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Zita Dombi
>Assignee: István Fajth
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> HADOOP-18709 added support in hadoop-common for Zookeeper to communicate 
> with SSL/TLS enabled. With those changes we have the necessary parameters 
> that need to be set to enable SSL/TLS in a ZK client. That change also 
> touched ZKCuratorManager, so enabling SSL/TLS through it is easy; for Yarn 
> this was done in YARN-11468.
> DelegationTokenAuthenticationFilter currently uses CuratorFrameworkFactory; 
> it'd be good to change it to use ZKCuratorManager so that SSL/TLS enablement 
> is supported there as well.
> *UPDATE*
> Having investigated this a bit more, it wouldn't be so easy to move to using 
> ZKCuratorManager. 
> DelegationTokenAuthenticationFilter uses ZK from two places: 
> ZKDelegationTokenSecretManager and ZKSignerSecretProvider. Both places use 
> CuratorFrameworkFactory, but the attributes and creation logic differ from 
> ZKCuratorManager. 
> In ZKDelegationTokenSecretManager it would be easy to add the new config 
> and, based on that, create the ZK client with CuratorFrameworkFactory. But 
> ZKSignerSecretProvider is in the hadoop-auth module, and with my change it 
> would need hadoop-common, introducing a circular dependency between the 
> 'hadoop-auth' and 'hadoop-common' modules. I'm still working on a 
> straightforward solution. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18957) Use StandardCharsets.UTF_8 constant

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18957:

Affects Version/s: 3.4.0

> Use StandardCharsets.UTF_8 constant
> ---
>
> Key: HADOOP-18957
> URL: https://issues.apache.org/jira/browse/HADOOP-18957
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> * there are some places in the code that have to check for 
> UnsupportedCharsetException when explicitly using the charset name "UTF-8"
> * using StandardCharsets.UTF_8 is more efficient because the Java libraries 
> usually have to look up the charset when you provide it as a String 
> parameter instead
> * also stop using Guava Charsets and use StandardCharsets



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18969) S3A: AbstractS3ACostTest to clear bucket fs.s3a.create.performance flag

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan reassigned HADOOP-18969:
---

Assignee: Steve Loughran

> S3A:  AbstractS3ACostTest to clear bucket fs.s3a.create.performance flag
> 
>
> Key: HADOOP-18969
> URL: https://issues.apache.org/jira/browse/HADOOP-18969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> If there's a bucket-specific fs.s3a.create.performance flag then the create 
> tests can fail as the costs are lower than expected. 
> Trivial fix: add the flag to the removeBaseAndBucketOverrides list, as in 
> the sketch below.
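A sketch of that one-line style of fix; the helper and constant names follow
the S3A test utilities, but treat the exact signatures as assumptions rather
than a quote from the patch:

{code:java}
import org.apache.hadoop.conf.Configuration;

import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_CREATE_PERFORMANCE;
import static org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;

public class CostTestSetupSketch {
  protected Configuration createConfiguration(Configuration conf) {
    // Clear any per-bucket fs.s3a.create.performance setting so the expected
    // operation costs are not skewed by a faster create path.
    removeBaseAndBucketOverrides(conf, FS_S3A_CREATE_PERFORMANCE);
    return conf;
  }
}
{code}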



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18969) S3A: AbstractS3ACostTest to clear bucket fs.s3a.create.performance flag

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18969:

Target Version/s: 3.4.0
 Description: 
If there's a bucket-specific  fs.s3a.create.performance flag then the create 
tests can fail as the costs are lower than expected. 

trivial fix: add to the removeBaseAndBucketOverrides list

  was:

If there's a bucket-specific  fs.s3a.create.performance flag then the create 
tests can fail as the costs are lower than expected. 

trivial fix: add to the removeBaseAndBucketOverrides list


> S3A:  AbstractS3ACostTest to clear bucket fs.s3a.create.performance flag
> 
>
> Key: HADOOP-18969
> URL: https://issues.apache.org/jira/browse/HADOOP-18969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> If there's a bucket-specific fs.s3a.create.performance flag then the create 
> tests can fail as the costs are lower than expected. 
> Trivial fix: add the flag to the removeBaseAndBucketOverrides list.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18964) Update plugin for SBOM generation to 2.7.10

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18964:

Target Version/s: 3.4.0

> Update plugin for SBOM generation to 2.7.10
> ---
>
> Key: HADOOP-18964
> URL: https://issues.apache.org/jira/browse/HADOOP-18964
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vinod Anandan
>Assignee: Vinod Anandan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Update the CycloneDX Maven plugin for SBOM generation to 2.7.10



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18954) Filter NaN values from JMX json interface

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18954:

Affects Version/s: 3.4.0

> Filter NaN values from JMX json interface
> -
>
> Key: HADOOP-18954
> URL: https://issues.apache.org/jira/browse/HADOOP-18954
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Bence Kosztolnik
>Assignee: Bence Kosztolnik
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> As we can see in this [Yarn 
> documentation|https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html]
>  beans can represent Float values as NaN. These values will be represented in 
> the JMX response JSON like:
> {noformat}
> ...
> "GuaranteedCapacity": NaN,
> ...
> {noformat}
> Based on the [JSON spec|https://www.json.org/], NaN is not a valid JSON 
> token (although some parser libraries can handle it), so not every consumer 
> can parse values like these.
> To keep the output parseable, a new feature flag should be created.
> The new feature will replace NaN values with 0.0.
> The feature is turned off by default. It can be enabled with the 
> *hadoop.http.jmx.nan-filter.enabled* config.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18982) Fix doc about loading native libraries

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18982:

Affects Version/s: 3.4.0

> Fix doc about loading native libraries
> --
>
> Key: HADOOP-18982
> URL: https://issues.apache.org/jira/browse/HADOOP-18982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.4.0
>Reporter: Shuyan Zhang
>Assignee: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> When we want to load a native library libmyexample.so, the right way is to 
> call System.loadLibrary("myexample") rather than 
> System.loadLibrary("libmyexample.so").



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18982) Fix doc about loading native libraries

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18982:

Target Version/s: 3.4.0

> Fix doc about loading native libraries
> --
>
> Key: HADOOP-18982
> URL: https://issues.apache.org/jira/browse/HADOOP-18982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 3.4.0
>Reporter: Shuyan Zhang
>Assignee: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> When we want to load a native library libmyexample.so, the right way is to 
> call System.loadLibrary("myexample") rather than 
> System.loadLibrary("libmyexample.so").



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19010) NullPointerException in Hadoop Credential Check CLI Command

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-19010:

Component/s: common

> NullPointerException in Hadoop Credential Check CLI Command
> ---
>
> Key: HADOOP-19010
> URL: https://issues.apache.org/jira/browse/HADOOP-19010
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Anika Kelhanka
>Assignee: Anika Kelhanka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> *Description*: Hadoop's credential check throws {{NullPointerException}} when 
> the alias is not found.
> {code:bash}
> hadoop credential check "fs.gs.proxy.username" -provider 
> "jceks://file/usr/lib/hive/conf/hive.jceks"
> {code}
> Checking aliases for CredentialProvider: 
> jceks://file/usr/lib/hive/conf/hive.jceks
> Enter alias password: 
> java.lang.NullPointerException
> at
> org.apache.hadoop.security.alias.CredentialShell$CheckCommand.execute(CredentialShell.java:369)
> at org.apache.hadoop.tools.CommandShell.run(CommandShell.java:73)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:82)
> at 
> org.apache.hadoop.security.alias.CredentialShell.main(CredentialShell.java:529)
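A minimal sketch of the missing null check, assuming the NPE comes from
dereferencing the absent credential entry (the helper below is illustrative,
not the actual patch):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.alias.CredentialProvider;
import org.apache.hadoop.security.alias.CredentialProviderFactory;

public final class CredentialCheckSketch {
  static boolean aliasExists(Configuration conf, String alias) throws Exception {
    for (CredentialProvider provider
        : CredentialProviderFactory.getProviders(conf)) {
      // getCredentialEntry() returns null when the alias is absent; the CLI
      // NPE came from using that result without checking it first.
      if (provider.getCredentialEntry(alias) != null) {
        return true;
      }
    }
    return false;
  }
}
{code}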



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18964) Update plugin for SBOM generation to 2.7.10

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18964:

Affects Version/s: 3.4.0

> Update plugin for SBOM generation to 2.7.10
> ---
>
> Key: HADOOP-18964
> URL: https://issues.apache.org/jira/browse/HADOOP-18964
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.4.0
>Reporter: Vinod Anandan
>Assignee: Vinod Anandan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Update the CycloneDX Maven plugin for SBOM generation to 2.7.10



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18986) Upgrade Zookeeper to 3.8.2

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-18986:

Fix Version/s: (was: 3.4.0)

> Upgrade Zookeeper to 3.8.2
> --
>
> Key: HADOOP-18986
> URL: https://issues.apache.org/jira/browse/HADOOP-18986
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19020) Update the year to 2024

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-19020:

Affects Version/s: 3.4.0

> Update the year to 2024
> ---
>
> Key: HADOOP-19020
> URL: https://issues.apache.org/jira/browse/HADOOP-19020
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.4.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.3, 3.3.7
>
>
> Update the year to 2024



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19020) Update the year to 2024

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-19020:

Component/s: common

> Update the year to 2024
> ---
>
> Key: HADOOP-19020
> URL: https://issues.apache.org/jira/browse/HADOOP-19020
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.3, 3.3.7
>
>
> Update the year to 2024



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7002) Wrong description of copyFromLocal and copyToLocal in documentation

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-7002:
---
Affects Version/s: 3.3.1
   3.4.0

> Wrong description of copyFromLocal and copyToLocal in documentation
> ---
>
> Key: HADOOP-7002
> URL: https://issues.apache.org/jira/browse/HADOOP-7002
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Jingguo Yao
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
> Attachments: HADOOP-7002.01.patch
>
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>
> The descriptions of copyFromLocal and copyToLocal are wrong. 
> For copyFromLocal, the documentation says "Similar to put command, except 
> that the source is restricted to a local file reference." But from the source 
> code of FsShell.java, I can see that copyFromLocal is the same as put. 
> For copyToLocal, the documentation says "Similar to get command, except that 
> the destination is restricted to a local file reference." But from the 
> source code of FsShell.java, I can see that copyToLocal is the same as get.
> This problem exists in both the English and Chinese documentation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-7002) Wrong description of copyFromLocal and copyToLocal in documentation

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-7002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-7002:
---
Component/s: documentation

> Wrong description of copyFromLocal and copyToLocal in documentation
> ---
>
> Key: HADOOP-7002
> URL: https://issues.apache.org/jira/browse/HADOOP-7002
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Jingguo Yao
>Assignee: Andras Bokor
>Priority: Minor
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
> Attachments: HADOOP-7002.01.patch
>
>   Original Estimate: 20m
>  Remaining Estimate: 20m
>
> The descriptions of copyFromLocal and copyToLocal are wrong. 
> For copyFromLocal, the documentation says "Similar to put command, except 
> that the source is restricted to a local file reference." But from the source 
> code of FsShell.java, I can see that copyFromLocal is the same as put. 
> For copyToLocal, the documentation says "Similar to get command, except that 
> the destination is restricted to a local file reference." But from the 
> source code of FsShell.java, I can see that copyToLocal is the same as get.
> This problem exists in both the English and Chinese documentation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11616) Remove workaround for Curator's ChildReaper requiring Guava 15+

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-11616:

Component/s: common

> Remove workaround for Curator's ChildReaper requiring Guava 15+
> ---
>
> Key: HADOOP-11616
> URL: https://issues.apache.org/jira/browse/HADOOP-11616
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0-alpha1
>Reporter: Robert Kanter
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> HADOOP-11612 adds a copy of Curator 2.7.1's {{ChildReaper}} and 
> {{TestChildReaper}} with minor modifications to work with Guava 11.0.2.  We 
> should remove these classes and update any usages to point to Curator itself 
> once we update Guava.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13144) Enhancing IPC client throughput via multiple connections per user

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-13144:

Affects Version/s: 3.3.5
   3.4.0

> Enhancing IPC client throughput via multiple connections per user
> -
>
> Key: HADOOP-13144
> URL: https://issues.apache.org/jira/browse/HADOOP-13144
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 3.4.0, 3.3.5
>Reporter: Jason Kace
>Assignee: Íñigo Goiri
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.5
>
> Attachments: HADOOP-13144-performance.patch, HADOOP-13144.000.patch, 
> HADOOP-13144.001.patch, HADOOP-13144.002.patch, HADOOP-13144.003.patch, 
> HADOOP-13144_overload_enhancement.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The generic IPC client ({{org.apache.hadoop.ipc.Client}}) utilizes a single 
> connection thread for each {{ConnectionId}}.  The {{ConnectionId}} is unique 
> to the connection's remote address, ticket and protocol.  Each ConnectionId 
> is 1:1 mapped to a connection thread by the client via a map cache.
> The result is to serialize all IPC read/write activity through a single 
> thread for each user/ticket + address.  If a single user makes repeated 
> calls (1k-100k/sec) to the same destination, the IPC client becomes a 
> bottleneck.
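A hedged sketch of the idea only: fan one logical connection out over N
sockets by folding a rotating index into the connection key. The class below
is hypothetical; the real patch keys the count off an IPC client
configuration option.

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

public final class MultiConnectionSketch {
  private final int maxConnections;
  private final AtomicInteger counter = new AtomicInteger();

  MultiConnectionSketch(int maxConnections) {
    this.maxConnections = Math.max(1, maxConnections);
  }

  /** Pick which of the N per-user connections the next call should use. */
  int nextConnectionIndex() {
    // Mask keeps the counter non-negative after integer wrap-around.
    return (counter.getAndIncrement() & Integer.MAX_VALUE) % maxConnections;
  }
}
{code}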



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13386) Upgrade Avro to 1.9.2

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-13386:

Affects Version/s: 3.4.0

> Upgrade Avro to 1.9.2
> -
>
> Key: HADOOP-13386
> URL: https://issues.apache.org/jira/browse/HADOOP-13386
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Java Developer
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Avro 1.8.x makes generated classes serializable, which makes them much easier 
> to use with Spark. It would be great to upgrade Avro to 1.8.x.
> Fix CVE-2021-43045



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13464) update GSON to 2.7+

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13464?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-13464:

Affects Version/s: 3.3.2
   3.4.0

> update GSON to 2.7+
> ---
>
> Key: HADOOP-13464
> URL: https://issues.apache.org/jira/browse/HADOOP-13464
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.4.0, 3.3.2
>Reporter: Sean Busbey
>Assignee: Igor Dvorzhak
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2, 3.2.4
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Our GSON version is from ~3 years ago; update to the latest release.
> Try to check the release notes to see if this is incompatible.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-14698:

Affects Version/s: 3.3.1
   3.4.0

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.0.4, 3.2.2, 3.3.1, 3.4.0
>
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch, HADOOP-14698.05.patch, 
> HADOOP-14698.06.patch, HADOOP-14698.07.patch, HADOOP-14698.08.patch, 
> HADOOP-14698.09.patch, HADOOP-14698.10.patch
>
>
> After HDFS-11786 copyFromLocal and put are no longer identical.
> I do not see any reason why not to add the new feature to put as well.
> Being non-identical makes the understanding/usage of the commands more 
> complicated from the user's point of view.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14922) Build of Mapreduce Native Task module fails with unknown opcode "bswap"

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-14922:

Component/s: common

> Build of Mapreduce Native Task module fails with unknown opcode "bswap"
> ---
>
> Key: HADOOP-14922
> URL: https://issues.apache.org/jira/browse/HADOOP-14922
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-alpha3
> Environment: OS: Ubuntu 14.04
> Arch: PPC64LE
>Reporter: Anup Halarnkar
>Assignee: Anup Halarnkar
>Priority: Major
> Fix For: 3.4.0, 3.2.3, 3.3.2
>
> Attachments: HADOOP-14922.01.patch
>
>
> [WARNING] /tmp/cckBBdQp.s: Assembler messages:
> [WARNING] /tmp/cckBBdQp.s:3127: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/cckBBdQp.s:3152: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask.dir/main/native/src/codec/BlockCodec.cc.o] Error 1
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] /tmp/ccqRfBZp.s: Assembler messages:
> [WARNING] /tmp/ccqRfBZp.s:2098: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccqRfBZp.s:2123: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask.dir/main/native/src/codec/Lz4Codec.cc.o] Error 1
> [WARNING] /tmp/cc50B5Mp.s: Assembler messages:
> [WARNING] /tmp/cc50B5Mp.s:3112: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/cc50B5Mp.s:3137: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask_static.dir/main/native/src/codec/BlockCodec.cc.o] 
> Error 1
> [WARNING] make[2]: *** Waiting for unfinished jobs
> [WARNING] /tmp/ccobJqOY.s: Assembler messages:
> [WARNING] /tmp/ccobJqOY.s:2098: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccobJqOY.s:2123: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask_static.dir/main/native/src/codec/Lz4Codec.cc.o] Error 1
> [WARNING] /tmp/ccdaQ1CY.s: Assembler messages:
> [WARNING] /tmp/ccdaQ1CY.s:2235: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccdaQ1CY.s:2249: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccRwHt5X.s: Assembler messages:
> [WARNING] /tmp/ccRwHt5X.s:2235: Error: unrecognized opcode: `bswap'
> [WARNING] /tmp/ccRwHt5X.s:2249: Error: unrecognized opcode: `bswap'
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask.dir/main/native/src/codec/SnappyCodec.cc.o] Error 1
> [WARNING] make[1]: *** [CMakeFiles/nativetask.dir/all] Error 2
> [WARNING] make[1]: *** Waiting for unfinished jobs
> [WARNING] make[2]: *** 
> [CMakeFiles/nativetask_static.dir/main/native/src/codec/SnappyCodec.cc.o] 
> Error 1
> [WARNING] make[1]: *** [CMakeFiles/nativetask_static.dir/all] Error 2
> [WARNING] make: *** [all] Error 2



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-14698:

Component/s: common

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.0.4, 3.2.2, 3.3.1, 3.4.0
>
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch, HADOOP-14698.04.patch, HADOOP-14698.05.patch, 
> HADOOP-14698.06.patch, HADOOP-14698.07.patch, HADOOP-14698.08.patch, 
> HADOOP-14698.09.patch, HADOOP-14698.10.patch
>
>
> After HDFS-11786, copyFromLocal and put are no longer identical.
> I do not see any reason not to add the new feature to put as well.
> Being non-identical makes the command more complicated to understand and use 
> from the user's point of view.
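> A quick illustration of the unified behavior (a sketch; paths and thread
> count are made up, not part of the patch):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FsShell;
> import org.apache.hadoop.util.ToolRunner;
>
> public class PutWithThreads {
>   public static void main(String[] args) throws Exception {
>     // Once put accepts -t like copyFromLocal, both commands can be
>     // driven identically.
>     int rc = ToolRunner.run(new Configuration(), new FsShell(),
>         new String[] {"-put", "-t", "4", "localfile.txt", "/user/me/"});
>     System.exit(rc);
>   }
> }
> {code}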



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15524) BytesWritable causes OOME when array size reaches Integer.MAX_VALUE

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-15524:

Affects Version/s: 3.4.0

> BytesWritable causes OOME when array size reaches Integer.MAX_VALUE
> ---
>
> Key: HADOOP-15524
> URL: https://issues.apache.org/jira/browse/HADOOP-15524
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 3.4.0
>Reporter: Joseph Smith
>Assignee: Joseph Smith
>Priority: Major
> Fix For: 3.4.0
>
>
> BytesWritable.setSize uses Integer.MAX_VALUE to initialize the internal 
> array. In my environment, this causes an OOME:
> {code:java}
> Exception in thread "main" java.lang.OutOfMemoryError: Requested array size 
> exceeds VM limit
> {code}
> byte[Integer.MAX_VALUE-2] must be used to prevent this error.
> Tested on OSX and CentOS 7 using Java version 1.8.0_131.
> I noticed that java.util.ArrayList contains the following
> {code:java}
> /**
>  * The maximum size of array to allocate.
>  * Some VMs reserve some header words in an array.
>  * Attempts to allocate larger arrays may result in
>  * OutOfMemoryError: Requested array size exceeds VM limit
>  */
> private static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
> {code}
>  
> BytesWritable.setSize should use something similar to prevent an OOME from 
> occurring.
>  
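> A minimal sketch of the suggested guard (the class, method, and constant 
> names here are assumptions for illustration, not the actual BytesWritable 
> API):
> {code:java}
> import java.util.Arrays;
>
> public class SafeResize {
>   // Some VMs reserve header words in arrays; stay a few bytes short,
>   // mirroring java.util.ArrayList.MAX_ARRAY_SIZE.
>   static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
>
>   static byte[] grow(byte[] bytes, int needed) {
>     if (needed > MAX_ARRAY_SIZE) {
>       throw new IllegalArgumentException(
>           "requested " + needed + " > max array size " + MAX_ARRAY_SIZE);
>     }
>     if (bytes.length >= needed) {
>       return bytes;
>     }
>     // Grow by 1.5x, capped at the safe maximum instead of Integer.MAX_VALUE.
>     long grown = bytes.length + ((long) bytes.length >> 1);
>     int newCap = (int) Math.min(Math.max(grown, (long) needed), MAX_ARRAY_SIZE);
>     return Arrays.copyOf(bytes, newCap);
>   }
> }
> {code}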



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16290) Enable RpcMetrics units to be configurable

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16290:

Affects Version/s: 3.3.2
   3.4.0

> Enable RpcMetrics units to be configurable
> --
>
> Key: HADOOP-16290
> URL: https://issues.apache.org/jira/browse/HADOOP-16290
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, metrics
>Affects Versions: 3.4.0, 3.3.2
>Reporter: Erik Krogen
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 8.5h
>  Remaining Estimate: 0h
>
> One resulting discussion from HADOOP-16266 was that it would be better for 
> the RPC metrics (processing time, queue time) to be in micro- or nanoseconds, 
> since milliseconds does not accurately capture the processing time of many 
> RPC operations.  HADOOP-16266 made some small changes in this direction, but 
> to keep the size of the patch down, we did not make it fully configurable. We 
> can complete that work here.
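> As an illustration of a configurable unit, a sketch (the property name 
> "ipc.server.metrics.timeunit" is hypothetical, not the key the patch adds):
> {code:java}
> import java.util.concurrent.TimeUnit;
>
> public class RpcMetricsUnit {
>   public static void main(String[] args) {
>     // Read the desired unit from configuration instead of hard-coding millis.
>     TimeUnit unit = TimeUnit.valueOf(
>         System.getProperty("ipc.server.metrics.timeunit", "MILLISECONDS"));
>     long processingNanos = 137_000; // a 137-microsecond RPC call
>     // In MILLISECONDS this rounds to 0; in MICROSECONDS it is visible.
>     System.out.println("processing time = "
>         + unit.convert(processingNanos, TimeUnit.NANOSECONDS)
>         + " " + unit.name().toLowerCase());
>   }
> }
> {code}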



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16054) Update Dockerfile to use Bionic

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16054:

Affects Version/s: 3.3.1
   3.4.0

> Update Dockerfile to use Bionic
> ---
>
> Key: HADOOP-16054
> URL: https://issues.apache.org/jira/browse/HADOOP-16054
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Ubuntu Xenial goes EoL in April 2021. Let's upgrade before then.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16524:

Affects Version/s: 3.3.1
   3.4.0

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of the keystore. This allows a Hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.
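> A sketch of the reload pattern, assuming Jetty 9.4's
> SslContextFactory.reload(...); the polling loop and interval are 
> illustrative, not the wiring the patch adds to HttpServer2:
> {code:java}
> import java.nio.file.Path;
> import java.util.concurrent.Executors;
> import java.util.concurrent.ScheduledExecutorService;
> import java.util.concurrent.TimeUnit;
> import org.eclipse.jetty.util.ssl.SslContextFactory;
>
> public class KeystoreReloader {
>   public static void schedule(SslContextFactory ssl, Path keystore) {
>     ScheduledExecutorService exec =
>         Executors.newSingleThreadScheduledExecutor();
>     final long[] lastModified = {keystore.toFile().lastModified()};
>     exec.scheduleWithFixedDelay(() -> {
>       long now = keystore.toFile().lastModified();
>       if (now != lastModified[0]) {
>         lastModified[0] = now;
>         try {
>           ssl.reload(factory -> { }); // re-reads the keystore in place
>         } catch (Exception e) {
>           e.printStackTrace();
>         }
>       }
>     }, 10, 10, TimeUnit.SECONDS);
>   }
> }
> {code}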



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16586) ITestS3GuardFsck, others fail when run using a local metastore

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16586:

Affects Version/s: 3.4.0

> ITestS3GuardFsck, others fail when run using a local metastore
> ---
>
> Key: HADOOP-16586
> URL: https://issues.apache.org/jira/browse/HADOOP-16586
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Siddharth Seth
>Assignee: Masatake Iwasaki
>Priority: Major
> Fix For: 3.4.0
>
>
> Most of these tests fail with a ClassCastException when run against a local 
> metastore.
> It is not clear whether these tests are intended to work with DynamoDB only. 
> The fix (either skip for other metastores or fix the tests) would depend on 
> the original intent.
> {code}
> ---
> Test set: org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck
> ---
> Tests run: 12, Failures: 0, Errors: 11, Skipped: 1, Time elapsed: 34.653 s 
> <<< FAILURE! - in org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck
> testIDetectParentTombstoned(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)
>   Time elapsed: 3.237 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectParentTombstoned(ITestS3GuardFsck.java:190)
> testIDetectDirInS3FileInMs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck) 
>  Time elapsed: 1.827 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectDirInS3FileInMs(ITestS3GuardFsck.java:214)
> testIDetectLengthMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
> Time elapsed: 2.819 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectLengthMismatch(ITestS3GuardFsck.java:311)
> testIEtagMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  Time 
> elapsed: 2.832 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIEtagMismatch(ITestS3GuardFsck.java:373)
> testIDetectFileInS3DirInMs(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck) 
>  Time elapsed: 2.752 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectFileInS3DirInMs(ITestS3GuardFsck.java:238)
> testIDetectModTimeMismatch(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck) 
>  Time elapsed: 4.103 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectModTimeMismatch(ITestS3GuardFsck.java:346)
> testIDetectNoMetadataEntry(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck) 
>  Time elapsed: 3.017 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectNoMetadataEntry(ITestS3GuardFsck.java:113)
> testIDetectNoParentEntry(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
> Time elapsed: 2.821 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testIDetectNoParentEntry(ITestS3GuardFsck.java:136)
> testINoEtag(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  Time elapsed: 
> 4.493 s  <<< ERROR!
> java.lang.ClassCastException: 
> org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore cannot be cast to 
> org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore
>   at 
> org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck.testINoEtag(ITestS3GuardFsck.java:403)
> testIDetectParentIsAFile(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardFsck)  
> Time elapsed: 2.782 s  <<< E
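> One way to implement the "skip for other metastores" option is a JUnit 
> assumption; a sketch, not the committed fix:
> {code:java}
> import org.junit.Assume;
> import org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore;
> import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
>
> public class MetastoreAssumptions {
>   // Skip (rather than ClassCastException) when the store is not Dynamo.
>   static void assumeDynamo(MetadataStore ms) {
>     Assume.assumeTrue("test requires DynamoDBMetadataStore",
>         ms instanceof DynamoDBMetadataStore);
>   }
> }
> {code}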

[jira] [Updated] (HADOOP-16524) Automatic keystore reloading for HttpServer2

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16524:

Component/s: common

> Automatic keystore reloading for HttpServer2
> 
>
> Key: HADOOP-16524
> URL: https://issues.apache.org/jira/browse/HADOOP-16524
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Kihwal Lee
>Assignee: Borislav Iordanov
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16524.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> Jetty 9 simplified reloading of the keystore. This allows a Hadoop daemon's SSL 
> cert to be updated in place without having to restart the service.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16870) Use spotbugs-maven-plugin instead of findbugs-maven-plugin

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16870:

Affects Version/s: 3.3.1
   3.4.0

> Use spotbugs-maven-plugin instead of findbugs-maven-plugin
> --
>
> Key: HADOOP-16870
> URL: https://issues.apache.org/jira/browse/HADOOP-16870
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>
> Attachments: HADOOP-16870.branch-2.10.001.patch
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> findbugs-maven-plugin is no longer maintained. Use spotbugs-maven-plugin 
> instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16768) SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16768:

Affects Version/s: 3.3.1
   3.4.0

> SnappyCompressor test cases wrongly assume that the compressed data is always 
> smaller than the input data
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, test
>Affects Versions: 3.3.1, 3.4.0
> Environment: X86/Aarch64
> OS: Ubuntu 18.04, CentOS 8
> Snappy 1.1.7
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests fail on both x86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got un
> expected exception: java.lang.NullPointerException
>   
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>  
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
> 
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
> 
>     at com.google.common.base.Joiner.join(Joiner.java:195)
> 
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> org.apache.maven.surefire.booter.F
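> The assumption the tests encode is wrong by construction: compressed output 
> can be larger than the input for incompressible data. A sketch of a 
> worst-case buffer bound (treat the 32 + n + n/6 formula, taken from snappy's 
> documented maxCompressedLength, as an assumption here):
> {code:java}
> public class SnappyBound {
>   static int maxCompressedLength(int sourceLen) {
>     return 32 + sourceLen + sourceLen / 6;
>   }
>
>   public static void main(String[] args) {
>     int input = 1024;
>     // Size buffers (and assertions) for the worst case, not "always smaller".
>     System.out.println("allocate at least " + maxCompressedLength(input));
>   }
> }
> {code}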

[jira] [Updated] (HADOOP-16748) Migrate to Python 3 and upgrade Yetus to 0.13.0

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16748:

Affects Version/s: 3.3.1
   3.4.0

> Migrate to Python 3 and upgrade Yetus to 0.13.0
> ---
>
> Key: HADOOP-16748
> URL: https://issues.apache.org/jira/browse/HADOOP-16748
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16748) Migrate to Python 3 and upgrade Yetus to 0.13.0

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16748:

Component/s: common

> Migrate to Python 3 and upgrade Yetus to 0.13.0
> ---
>
> Key: HADOOP-16748
> URL: https://issues.apache.org/jira/browse/HADOOP-16748
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16888) [JDK11] Support JDK11 in the precommit job

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16888:

Affects Version/s: 3.4.0

> [JDK11] Support JDK11 in the precommit job
> --
>
> Key: HADOOP-16888
> URL: https://issues.apache.org/jira/browse/HADOOP-16888
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.4.0
>
>
> Install openjdk-11 in the Dockerfile and use the Yetus multijdk plugin to run 
> the precommit job on both JDK 8 and JDK 11.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16908) Prune Jackson 1 from the codebase and restrict its usage for the future

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16908:

Component/s: common

> Prune Jackson 1 from the codebase and restrict its usage for the future
> 
>
> Key: HADOOP-16908
> URL: https://issues.apache.org/jira/browse/HADOOP-16908
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> The Jackson 1 code has silently crept into the Hadoop codebase again. We 
> should prune it out.
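> The migration itself is largely mechanical; a sketch of the Jackson 2 
> equivalent (com.fasterxml.jackson) of the old org.codehaus.jackson usage:
> {code:java}
> import java.util.Collections;
> import com.fasterxml.jackson.databind.ObjectMapper;
>
> public class JacksonTwoOnly {
>   public static void main(String[] args) throws Exception {
>     // Same ObjectMapper API shape as Jackson 1, different package.
>     ObjectMapper mapper = new ObjectMapper();
>     String json = mapper.writeValueAsString(
>         Collections.singletonMap("ok", true));
>     System.out.println(json); // {"ok":true}
>   }
> }
> {code}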



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16908) Prune Jackson 1 from the codebase and restrict its usage for the future

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16908?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16908:

Affects Version/s: 3.4.0

> Prune Jackson 1 from the codebase and restrict its usage for the future
> 
>
> Key: HADOOP-16908
> URL: https://issues.apache.org/jira/browse/HADOOP-16908
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h 10m
>  Remaining Estimate: 0h
>
> The Jackson 1 code has silently crept into the Hadoop codebase again. We 
> should prune it out.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16948) ABFS: Support infinite lease dirs

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16948:

Affects Version/s: 3.3.1
   3.4.0

> ABFS: Support infinite lease dirs
> -
>
> Key: HADOOP-16948
> URL: https://issues.apache.org/jira/browse/HADOOP-16948
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Minor
>  Labels: abfsactive, pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 10h 50m
>  Remaining Estimate: 0h
>
> This would allow some directories to be configured as single-writer 
> directories. The ABFS driver would obtain a lease when creating or opening a 
> file for writing, automatically renewing the lease and releasing it when 
> closing the file.
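> A sketch of that lifecycle (acquireLease/renewLease/releaseLease are 
> placeholder names, not the ABFS client API, and the renewal interval is 
> made up):
> {code:java}
> import java.util.concurrent.Executors;
> import java.util.concurrent.ScheduledExecutorService;
> import java.util.concurrent.TimeUnit;
>
> public class LeaseLifecycle {
>   interface LeaseClient {
>     String acquireLease(String path);
>     void renewLease(String path, String leaseId);
>     void releaseLease(String path, String leaseId);
>   }
>
>   static AutoCloseable openWithLease(LeaseClient client, String path) {
>     String leaseId = client.acquireLease(path);
>     ScheduledExecutorService renewer =
>         Executors.newSingleThreadScheduledExecutor();
>     // Renew well inside the lease duration while the file is open for write.
>     renewer.scheduleAtFixedRate(
>         () -> client.renewLease(path, leaseId), 30, 30, TimeUnit.SECONDS);
>     return () -> {
>       renewer.shutdownNow();
>       client.releaseLease(path, leaseId); // released on close
>     };
>   }
> }
> {code}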



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16948) ABFS: Support infinite lease dirs

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16948:

Component/s: common

> ABFS: Support infinite lease dirs
> -
>
> Key: HADOOP-16948
> URL: https://issues.apache.org/jira/browse/HADOOP-16948
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Billie Rinaldi
>Assignee: Billie Rinaldi
>Priority: Minor
>  Labels: abfsactive, pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 10h 50m
>  Remaining Estimate: 0h
>
> This would allow some directories to be configured as single-writer 
> directories. The ABFS driver would obtain a lease when creating or opening a 
> file for writing, automatically renewing the lease and releasing it when 
> closing the file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17033) Update commons-codec from 1.11 to 1.14

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-17033:

Component/s: common

> Update commons-codec from 1.11 to 1.14
> --
>
> Key: HADOOP-17033
> URL: https://issues.apache.org/jira/browse/HADOOP-17033
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.4.0
>
>
> We are on commons-codec 1.11, which is slightly outdated. The latest is 1.14. 
> We should update it if it's not too much of a hassle.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17033) Update commons-codec from 1.11 to 1.14

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-17033:

Affects Version/s: 3.4.0

> Update commons-codec from 1.11 to 1.14
> --
>
> Key: HADOOP-17033
> URL: https://issues.apache.org/jira/browse/HADOOP-17033
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Fix For: 3.4.0
>
>
> We are on commons-codec 1.11, which is slightly outdated. The latest is 1.14. 
> We should update it if it's not too much of a hassle.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16951) Tidy Up Text and ByteWritables Classes

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-16951:

Affects Version/s: 3.4.0

> Tidy Up Text and ByteWritables Classes
> --
>
> Key: HADOOP-16951
> URL: https://issues.apache.org/jira/browse/HADOOP-16951
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Minor
> Fix For: 3.4.0
>
>
> # Remove superfluous code
>  # Remove superfluous comments
>  # Checkstyle fixes
>  # Remove methods that simply call {{super}}.method()
>  # Use Java 8 facilities to streamline code where applicable
>  # Simplify and unify some of the constructs between the two classes
>  
> The one meaningful change is that I am suggesting the arrays expand by 1.5x 
> instead of 2x per expansion. I pulled this idea from OpenJDK.
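> A sketch of that growth policy (capacity math only; not the actual Text or 
> BytesWritable code):
> {code:java}
> public class GrowthPolicy {
>   static int newCapacity(int current, int required) {
>     // current + current/2, computed in long to dodge int overflow
>     long grown = current + ((long) current >> 1);
>     return (int) Math.min(Math.max(grown, (long) required),
>         Integer.MAX_VALUE - 8);
>   }
>
>   public static void main(String[] args) {
>     System.out.println(newCapacity(16, 17)); // 24 with 1.5x, vs 32 with 2x
>   }
> }
> {code}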



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17115) Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and hadoop-tools

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-17115:

Component/s: common

> Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and 
> hadoop-tools
> ---
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> Unjustified usage of the Guava API to initialize a {{HashSet}}. This should 
> be replaced by plain Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";,
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";,
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)
> TestDFSNetworkTopology.java  (5 usages found)
> testChooseRandomWithStorageType()  (4 usages found)
> 277 Sets.newHashSet("host2", "host4", "host5", "host6");
> 278 Set archiveUnderL1 = Sets.newHashSet("host1", 
> "host3");
> 279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
> 280 Set ssdUnderL1 = Sets.newHashSet("host8");
> testChooseRandomWithStorageTypeWithExcluded()  (1 usage found)
>  
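> The replacement itself is mechanical; a plain-Java sketch (the JIRA's own 
> Sets shim class may differ):
> {code:java}
> import java.util.Arrays;
> import java.util.HashSet;
> import java.util.Set;
>
> public class NoGuavaSets {
>   public static void main(String[] args) {
>     // Before: Set<String> keys = Sets.newHashSet("k1", "k2", "k3");
>     Set<String> keys = new HashSet<>(Arrays.asList("k1", "k2", "k3"));
>     Set<String> empty = new HashSet<>(); // before: Sets.newHashSet()
>     System.out.println(keys + " " + empty);
>   }
> }
> {code}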

[jira] [Updated] (HADOOP-17090) Increase precommit job timeout from 5 hours to 20 hours

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-17090:

Affects Version/s: 3.3.1
   3.4.0

> Increase precommit job timeout from 5 hours to 20 hours
> ---
>
> Key: HADOOP-17090
> URL: https://issues.apache.org/jira/browse/HADOOP-17090
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 2.9.3, 3.2.2, 2.10.1, 3.3.1, 3.4.0
>
>
> Now we frequently increase the timeout for testing and undo the change before 
> committing.
> * https://github.com/apache/hadoop/pull/2026
> * https://github.com/apache/hadoop/pull/2051
> * https://github.com/apache/hadoop/pull/2012
> * https://github.com/apache/hadoop/pull/2098
> * and more...
> I'd like to increase the timeout by default to reduce the work.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17084) Update Dockerfile_aarch64 to use Bionic

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-17084:

Affects Version/s: 3.3.1
   3.4.0

> Update Dockerfile_aarch64 to use Bionic
> ---
>
> Key: HADOOP-17084
> URL: https://issues.apache.org/jira/browse/HADOOP-17084
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.3.1, 3.4.0
>Reporter: RuiChen
>Assignee: zhaorenhai
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
>
> The Dockerfile for x86 has been updated to use Ubuntu Bionic, JDK 11, and 
> other changes; we should update the Dockerfile for aarch64 to follow these 
> changes and keep the same behavior.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17133) Implement HttpServer2 metrics

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-17133:

Affects Version/s: 3.4.0

> Implement HttpServer2 metrics
> -
>
> Key: HADOOP-17133
> URL: https://issues.apache.org/jira/browse/HADOOP-17133
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: httpfs, kms
>Affects Versions: 3.4.0
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> I'd like to collect metrics (number of connections, average response time, 
> etc.) from HttpFS and KMS, but there are no metrics for HttpServer2.
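> A sketch in the metrics2 idiom (the metric names and fields are assumptions 
> for illustration, not what the patch adds):
> {code:java}
> import org.apache.hadoop.metrics2.annotation.Metric;
> import org.apache.hadoop.metrics2.annotation.Metrics;
> import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
> import org.apache.hadoop.metrics2.lib.MutableCounterLong;
> import org.apache.hadoop.metrics2.lib.MutableRate;
>
> @Metrics(about = "HttpServer2 metrics", context = "http")
> public class HttpServer2Metrics {
>   @Metric("open connections") MutableCounterLong openConnections;
>   @Metric("response time") MutableRate responseTime;
>
>   static HttpServer2Metrics create() {
>     return DefaultMetricsSystem.instance().register(
>         "HttpServer2", "HTTP server metrics", new HttpServer2Metrics());
>   }
>
>   void onRequestCompleted(long elapsedMillis) {
>     responseTime.add(elapsedMillis); // feeds average response time
>   }
> }
> {code}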



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17147) Dead link in hadoop-kms/index.md.vm

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-17147:

Affects Version/s: 3.3.1
   3.4.0

> Dead link in hadoop-kms/index.md.vm
> ---
>
> Key: HADOOP-17147
> URL: https://issues.apache.org/jira/browse/HADOOP-17147
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Akira Ajisaka
>Assignee: Xieming Li
>Priority: Minor
>  Labels: newbie
> Fix For: 3.2.2, 3.3.1, 3.4.0
>
> Attachments: HADOOP-17147.000.patch
>
>
> There is a dead link 
> (https://hadoop.apache.org/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html)
>  in 
> https://hadoop.apache.org/docs/r3.3.0/hadoop-kms/index.html#KMS_over_HTTPS_.28SSL.29
> The link should be 
> https://hadoop.apache.org/docs/r3.3.0/hadoop-project-dist/hadoop-common/CredentialProviderAPI.html



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17115) Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and hadoop-tools

2024-01-26 Thread Shilun Fan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shilun Fan updated HADOOP-17115:

Affects Version/s: 3.4.0

> Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and 
> hadoop-tools
> ---
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.4.0
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> Unjustified usage of the Guava API to initialize a {{HashSet}}. This should 
> be replaced by plain Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";),
> 87 Sets.newHashSet(providers[0].getKMSUrl()));
> 95 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";,
> 98 Sets.newHashSet(providers[0].getKMSUrl(),
> 108 
> assertEquals(Sets.newHashSet("http://host1:9600/kms/foo/v1/";,
> 111 Sets.newHashSet(providers[0].getKMSUrl(),
> org.apache.hadoop.crypto.key.kms.server  (1 usage found)
> KMSAudit.java  (1 usage found)
> 59 static final Set AGGREGATE_OPS_WHITELIST = 
> Sets.newHashSet(
> org.apache.hadoop.fs.s3a  (1 usage found)
> TestS3AAWSCredentialsProvider.java  (1 usage found)
> testFallbackToDefaults()  (1 usage found)
> 183 Sets.newHashSet());
> org.apache.hadoop.fs.s3a.auth  (1 usage found)
> AssumedRoleCredentialProvider.java  (1 usage found)
> AssumedRoleCredentialProvider(URI, Configuration)  (1 usage found)
> 113 Sets.newHashSet(this.getClass()));
> org.apache.hadoop.fs.s3a.commit.integration  (1 usage found)
> ITestS3ACommitterMRJob.java  (1 usage found)
> test_200_execute()  (1 usage found)
> 232 Set expectedKeys = Sets.newHashSet();
> org.apache.hadoop.fs.s3a.commit.staging  (5 usages found)
> TestStagingCommitter.java  (3 usages found)
> testSingleTaskMultiFileCommit()  (1 usage found)
> 341 Set keys = Sets.newHashSet();
> runTasks(JobContext, int, int)  (1 usage found)
> 603 Set uploads = Sets.newHashSet();
> commitTask(StagingCommitter, TaskAttemptContext, int)  (1 usage 
> found)
> 640 Set files = Sets.newHashSet();
> TestStagingPartitionedTaskCommit.java  (2 usages found)
> verifyFilesCreated(PartitionedStagingCommitter)  (1 usage found)
> 148 Set files = Sets.newHashSet();
> buildExpectedList(StagingCommitter)  (1 usage found)
> 188 Set expected = Sets.newHashSet();
> org.apache.hadoop.hdfs  (5 usages found)
> DFSUtil.java  (2 usages found)
> getNNServiceRpcAddressesForCluster(Configuration)  (1 usage found)
> 615 Set availableNameServices = Sets.newHashSet(conf
> getNNLifelineRpcAddressesForCluster(Configuration)  (1 usage 
> found)
> 660 Set availableNameServices = Sets.newHashSet(conf
> MiniDFSCluster.java  (1 usage found)
> 597 private Set fileSystems = Sets.newHashSet();
> TestDFSUtil.java  (2 usages found)
> testGetNNServiceRpcAddressesForNsIds()  (2 usages found)
> 1046 assertEquals(Sets.newHashSet("nn1"), internal);
> 1049 assertEquals(Sets.newHashSet("nn1", "nn2"), all);
> org.apache.hadoop.hdfs.net  (5 usages found)
> TestDFSNetworkTopology.java  (5 usages found)
> testChooseRandomWithStorageType()  (4 usages found)
> 277 Sets.newHashSet("host2", "host4", "host5", "host6");
> 278 Set archiveUnderL1 = Sets.newHashSet("host1", 
> "host3");
> 279 Set ramdiskUnderL1 = Sets.newHashSet("host7");
> 280 Set ssdUnderL1 = Sets.newHashSet("host8");
> testChooseRandomWithStorageTypeWithExcluded()  (1 usage found)
> 363 Set expectedSet = 
