[jira] [Commented] (HADOOP-19237) upgrade dnsjava to 3.6.0 due to CVEs

2024-07-29 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17869243#comment-17869243
 ] 

ASF GitHub Bot commented on HADOOP-19237:
-----------------------------------------

hadoop-yetus commented on PR #6961:
URL: https://github.com/apache/hadoop/pull/6961#issuecomment-2255041272

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 33s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  16m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 41s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  25m 58s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   8m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +0 :ok: |  spotbugs  |   0m 18s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 19s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |  32m 40s | 
[/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/4/artifact/out/branch-spotbugs-root-warnings.html)
 |  root in trunk has 2 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  68m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |  32m 11s | 
[/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/4/artifact/out/patch-mvninstall-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  compile  |  18m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  18m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  16m 44s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 45s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  16m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   8m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   8m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +0 :ok: |  spotbugs  |   0m 18s |  |  hadoop-project has no data from 
spotbugs  |
   | +0 :ok: |  spotbugs  |   0m 18s |  |  
hadoop-client-modules/hadoop-client-runtime has no data from spotbugs  |
   | -1 :x: |  shadedclient  |  68m 52s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 764m 11s |  |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 20s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1158m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6961 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle 
shellcheck shelldocs |
   | uname | Linux ac71c347cfa4 5.15.0-94

[jira] [Updated] (HADOOP-19235) IPC client uses CompletableFuture to support asynchronous operations.

2024-07-28 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HADOOP-19235:
-----------------------------------------
Fix Version/s: HDFS-17531
   (was: 3.5.0)

> IPC client uses CompletableFuture to support asynchronous operations.
> ----------------------------------------------------------------------
>
> Key: HADOOP-19235
> URL: https://issues.apache.org/jira/browse/HADOOP-19235
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Jian Zhang
>Assignee: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: HDFS-17531
>
> Attachments: HADOOP-19235.patch
>
>
> h3. Description
> The existing asynchronous ipc.Client implementation (built up in 
> HADOOP-13226, HDFS-10224, etc.) does not support `CompletableFuture`; 
> instead, it relies on setting up callbacks, which can lead to the "callback 
> hell" problem. Using `CompletableFuture` organizes asynchronous callbacks 
> much better. Therefore, this change builds on the existing implementation 
> so that, by using `CompletableFuture`, once the `client.call` completes, an 
> asynchronous thread handles the response of that call without blocking the 
> main thread.
>  
> *Test*
> New UT: TestAsyncIPC#testAsyncCallWithCompletableFuture()
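As an illustration of the callback-hell point, a minimal sketch in plain Java
(the two client interfaces below are hypothetical stand-ins, not the actual
Hadoop IPC API):

{code}
import java.util.concurrent.CompletableFuture;

public class AsyncCallSketch {

  interface Callback<T> { void onResponse(T value); }
  interface LegacyClient { void call(String req, Callback<String> cb); }
  interface FutureClient { CompletableFuture<String> call(String req); }

  // Callback style: each dependent call nests another callback,
  // and the nesting deepens with every step ("callback hell").
  static void callbackStyle(LegacyClient client) {
    client.call("op1", r1 ->
        client.call("op2:" + r1, r2 ->
            System.out.println("done: " + r2)));
  }

  // CompletableFuture style: the same chain reads top-to-bottom, and the
  // response is handled on an async thread without blocking the caller.
  static void futureStyle(FutureClient client) {
    client.call("op1")
        .thenCompose(r1 -> client.call("op2:" + r1))
        .thenAccept(r2 -> System.out.println("done: " + r2));
  }
}
{code}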



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18708) AWS SDK V2 - Implement CSE

2024-07-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17869143#comment-17869143
 ] 

ASF GitHub Bot commented on HADOOP-18708:
-----------------------------------------

shameersss1 commented on PR #6884:
URL: https://github.com/apache/hadoop/pull/6884#issuecomment-2254365038

   @steveloughran - Gentle reminder for the review
   Thanks




> AWS SDK V2 - Implement CSE
> ----------------------------------------------------------------------
>
> Key: HADOOP-18708
> URL: https://issues.apache.org/jira/browse/HADOOP-18708
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> S3 Encryption client for SDK V2 is now available, so add client side 
> encryption back in. 
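For context, a minimal sketch of how the v3 encryption client is meant to slot
in, assuming the amazon-s3-encryption-client-java builder API; the KMS key ARN
is a placeholder:

{code}
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.encryption.s3.S3EncryptionClient;

public class CseSketch {
  public static void main(String[] args) {
    // The encryption client implements the standard S3Client interface, so
    // it can be dropped in where a plain client was used: objects are
    // encrypted on putObject and decrypted on getObject.
    S3Client client = S3EncryptionClient.builder()
        .kmsKeyId("arn:aws:kms:region:account:key/placeholder") // placeholder ARN
        .build();
    // use client.putObject(...) / client.getObject(...) as usual
    client.close();
  }
}
{code}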






[jira] [Updated] (HADOOP-19240) Enhance FileSystem.Cache to honor a user defined field

2024-07-27 Thread Xiang Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HADOOP-19240:
------------------------------
Description: Add a new field in FileSystem.Cache.Key to affect hashCode() 
and equals(). This field could be specified when constructing a Key.

> Enhance FileSystem.Cache to honor a user defined field
> ----------------------------------------------------------------------
>
> Key: HADOOP-19240
> URL: https://issues.apache.org/jira/browse/HADOOP-19240
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.3.4
>Reporter: Xiang Li
>Priority: Major
> Fix For: 3.3.4
>
>
> Add a new field in FileSystem.Cache.Key to affect hashCode() and equals(). 
> This field could be specified when constructing a Key.
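A minimal sketch of the idea, assuming a simplified Key (the real
FileSystem.Cache.Key has more fields; the extra field's name here is
hypothetical):

{code}
import java.util.Objects;

// Simplified stand-in for FileSystem.Cache.Key with one user-supplied field.
final class CacheKeySketch {
  private final String scheme;
  private final String authority;
  private final String userDefined;  // hypothetical user-specified field

  CacheKeySketch(String scheme, String authority, String userDefined) {
    this.scheme = scheme;
    this.authority = authority;
    this.userDefined = userDefined;
  }

  @Override
  public int hashCode() {
    // The user-defined field participates in hashing, so two otherwise
    // identical URIs with different user data land in different cache slots.
    return Objects.hash(scheme, authority, userDefined);
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof CacheKeySketch)) {
      return false;
    }
    CacheKeySketch k = (CacheKeySketch) o;
    return Objects.equals(scheme, k.scheme)
        && Objects.equals(authority, k.authority)
        && Objects.equals(userDefined, k.userDefined);
  }
}
{code}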






[jira] [Created] (HADOOP-19240) Enhance FileSystem.Cache to honor a user defined field

2024-07-27 Thread Xiang Li (Jira)
Xiang Li created HADOOP-19240:
------------------------------

 Summary: Enhance FileSystem.Cache to honor a user defined field
 Key: HADOOP-19240
 URL: https://issues.apache.org/jira/browse/HADOOP-19240
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 3.3.4
Reporter: Xiang Li
 Fix For: 3.3.4









[jira] [Updated] (HADOOP-19239) Enhance FileSystem to honor security token and expiration in its cache

2024-07-27 Thread Xiang Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HADOOP-19239:
------------------------------
Description: 
We have an online service which uses Hadoop FileSystem to load files from 
Cloud storage.

The current cache in FileSystem is a 
[HashMap|https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3635C1-L3635C62],
 and its key honors scheme, authority (like 
[user@host:port|https://en.wikipedia.org/wiki/Uniform_Resource_Identifier#Syntax]),
 ugi and a unique long for its [hash 
code|https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3891C1-L3894C8].
 Among those 4 fields, only "scheme" and "authority" can be controlled 
externally.

That results in a wrong case like: a FileSystem entry in the cache was created 
with schemeA + authorityA, with read + write access and an expiration. Later, 
a request to get a FileSystem arrives, still using schemeA + authorityA but 
with less access (maybe read only), or the cached entry has already expired; 
that FileSystem entry in the cache is honored by mistake, and no new 
FileSystem is created. It does not lead to a security issue, but subsequent 
calls (perhaps to read the file) will be rejected with 403 by the remote 
storage.

 

 

 

 

  was:
We have an online service which uses Hadoop FileSystem to load files from 
Cloud storage.

The current cache in FileSystem is a 
[HashMap|https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3635C1-L3635C62],
 and its key honors scheme, authority (like 
[user@host:port|https://en.wikipedia.org/wiki/Uniform_Resource_Identifier#Syntax]),
 ugi and a unique long for its [hash 
code|https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3891C1-L3894C8].
 And among those 4 fields, only "scheme" and "authority" could be controlled 
externally.

That results in a wrong case like: A FileSystem entry in the cache was created 
with schemeA + authorityA, and with read + write access, and an expiration. 
Later, an API to get FileSystem comes still using schemeA + authorityA, but 
with less access (maybe read only) comes, or it already expires, that 
FileSystem entry in the cache is honored by mistake, while no new FileSystem 
is created. It does not lead to a security issue, but subsequent calls (may 
read the file) will be rejected with 403 by the remote storage.

 

 

 

 


> Enhance FileSystem to honor security token and expiration in its cache
> ----------------------------------------------------------------------
>
> Key: HADOOP-19239
> URL: https://issues.apache.org/jira/browse/HADOOP-19239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.4
>Reporter: Xiang Li
>Priority: Critical
> Fix For: 3.3.4
>
>
> We have an online service which uses Hadoop FileSystem to load files from 
> Cloud storage.
> The current cache in FileSystem is a 
> [HashMap|https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3635C1-L3635C62],
>  and its key honors scheme, authority (like 
> [user@host:port|https://en.wikipedia.org/wiki/Uniform_Resource_Identifier#Syntax]),
>  ugi and a unique long for its [hash 
> code|https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3891C1-L3894C8].
>  Among those 4 fields, only "scheme" and "authority" can be controlled 
> externally.
> That results in a wrong case like: a FileSystem entry in the cache was 
> created with schemeA + authorityA, with read + write access and an 
> expiration. Later, a request to get a FileSystem arrives, still using 
> schemeA + authorityA but with less access (maybe read only), or the cached 
> entry has already expired; that FileSystem entry in the cache is honored by 
> mistake, and no new FileSystem is created. It does not lead to a security 
> issue, but subsequent calls (perhaps to read the file) will be rejected 
> with 403 by the remote storage.
>  
>  
>  
>  
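To make the failure mode concrete, a hedged sketch of the calling pattern
(FileSystem.get is the real Hadoop API; the URI and the idea of credentials
carried in the Configuration are illustrative):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class StaleCacheSketch {
  public static void main(String[] args) throws Exception {
    Configuration confA = new Configuration();
    // Imagine confA carries a read+write credential with an expiration.
    FileSystem fs1 = FileSystem.get(URI.create("scheme://authorityA/"), confA);

    Configuration confB = new Configuration();
    // confB carries a different (read-only, or fresher) credential, but the
    // cache key only looks at scheme, authority, ugi and the unique long...
    FileSystem fs2 = FileSystem.get(URI.create("scheme://authorityA/"), confB);

    // ...so the old entry is returned (fs1 == fs2 for the same UGI), and
    // later reads through it fail with 403 from the remote storage.
    System.out.println("same cached instance: " + (fs1 == fs2));
  }
}
{code}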






[jira] [Updated] (HADOOP-19239) Enhance FileSystem to honor security token and expiration in its cache

2024-07-27 Thread Xiang Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HADOOP-19239:
------------------------------
Description: 
We have an online service which uses Hadoop FileSystem to load files from 
Cloud storage.

The current cache in FileSystem is a HashMap, and its key honors scheme, 
authority (like 
[user@host:port|https://en.wikipedia.org/wiki/Uniform_Resource_Identifier#Syntax]),
 ugi and a unique long for its [hash 
code|https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3891C1-L3894C8].
 

 

 

  was:
We have an online service which uses Hadoop FileSystem to load files from 
Cloud storage.

The current cache in FileSystem is a HashMap, and its key only honors scheme, 
authority, ugi and a unique long (like user@host:port) for its [hash 
code|https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3891C1-L3894C8]

 


> Enhance FileSystem to honor security token and expiration in its cache
> ----------------------------------------------------------------------
>
> Key: HADOOP-19239
> URL: https://issues.apache.org/jira/browse/HADOOP-19239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.4
>Reporter: Xiang Li
>Priority: Critical
> Fix For: 3.3.4
>
>
> We have an online service which uses Hadoop FileSystem to load files from 
> Cloud storage.
> The current cache in FileSystem is a HashMap, and its key honors scheme, 
> authority (like 
> [user@host:port|https://en.wikipedia.org/wiki/Uniform_Resource_Identifier#Syntax]),
>  ugi and a unique long for its [hash 
> code|https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3891C1-L3894C8].
>  
>  
>  






[jira] [Updated] (HADOOP-19239) Enhance FileSystem to honor security token and expiration in its cache

2024-07-27 Thread Xiang Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HADOOP-19239:
------------------------------
Description: 
We have an online service which uses Hadoop FileSystem to load files from 
Cloud storage.

The current cache in FileSystem is a HashMap, and its key only honors scheme, 
authority, ugi and a unique long (like user@host:port) for its [hash 
code|https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3891C1-L3894C8]

 

  was:
We have an online service which uses Hadoop FileSystem to load files from 
Cloud storage.

The current cache in FileSystem is a HashMap, and its key only honors scheme 
and authority (like user@host:port) for its ([hash 
code](https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3891C1-L3894C8)

 


> Enhance FileSystem to honor security token and expiration in its cache
> ----------------------------------------------------------------------
>
> Key: HADOOP-19239
> URL: https://issues.apache.org/jira/browse/HADOOP-19239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.4
>Reporter: Xiang Li
>Priority: Critical
> Fix For: 3.3.4
>
>
> We have an online service which uses Hadoop FileSystem to load files from 
> Cloud storage.
> The current cache in FileSystem is a HashMap, and its key only honors 
> scheme, authority, ugi and a unique long (like user@host:port) for its [hash 
> code|https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3891C1-L3894C8]
>  






[jira] [Updated] (HADOOP-19239) Enhance FileSystem to honor security token and expiration in its cache

2024-07-27 Thread Xiang Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HADOOP-19239:
------------------------------
Description: 
We have an online service which uses Hadoop FileSystem to load files from 
Cloud storage.

The current cache in FileSystem is a HashMap, and its key only honors scheme 
and authority (like user@host:port) for its ([hash 
code](https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3891C1-L3894C8)

 

  was:
We have an online service which uses Hadoop FileSystem to load files from 
Cloud storage.

 


> Enhance FileSystem to honor security token and expiration in its cache
> ----------------------------------------------------------------------
>
> Key: HADOOP-19239
> URL: https://issues.apache.org/jira/browse/HADOOP-19239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.4
>Reporter: Xiang Li
>Priority: Critical
> Fix For: 3.3.4
>
>
> We have an online service which uses Hadoop FileSystem to load files from 
> Cloud storage.
> The current cache in FileSystem is a HashMap, and its key only honors 
> scheme and authority (like user@host:port) for its ([hash 
> code](https://github.com/apache/hadoop/blob/4525c7e35ea22d7a6350b8af10eb8d2ff68376e7/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L3891C1-L3894C8)
>  






[jira] [Updated] (HADOOP-19239) Enhance FileSystem to honor security token and expiration in its cache

2024-07-27 Thread Xiang Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HADOOP-19239:
------------------------------
Description: 
We have an online service which uses Hadoop FileSystem to load files from 
Cloud storage.

 

  was:We have an online service which uses Hadoop FileSystem to load files on 
Cloud storage.


> Enhance FileSystem to honor security token and expiration in its cache
> ----------------------------------------------------------------------
>
> Key: HADOOP-19239
> URL: https://issues.apache.org/jira/browse/HADOOP-19239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.4
>Reporter: Xiang Li
>Priority: Critical
> Fix For: 3.3.4
>
>
> We have an online service which uses Hadoop FileSystem to load files from 
> Cloud storage.
>  






[jira] [Updated] (HADOOP-19239) Enhance FileSystem to honor security token and expiration in its cache

2024-07-27 Thread Xiang Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HADOOP-19239:
------------------------------
Summary: Enhance FileSystem to honor security token and expiration in its 
cache  (was: Enhance FileSystem to honor token and expiration in its cache)

> Enhance FileSystem to honor security token and expiration in its cache
> ----------------------------------------------------------------------
>
> Key: HADOOP-19239
> URL: https://issues.apache.org/jira/browse/HADOOP-19239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.4
>Reporter: Xiang Li
>Priority: Critical
> Fix For: 3.3.4
>
>
> We have an online service which uses Hadoop FileSystem to load files on 
> Cloud storage.






[jira] [Updated] (HADOOP-19239) Enhance FileSystem to honor token and expiration in its cache

2024-07-27 Thread Xiang Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HADOOP-19239:
------------------------------
Description: We have an online service which uses Hadoop FileSystem to load 
files on Cloud storage.

> Enhance FileSystem to honor token and expiration in its cache
> ----------------------------------------------------------------------
>
> Key: HADOOP-19239
> URL: https://issues.apache.org/jira/browse/HADOOP-19239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.4
>Reporter: Xiang Li
>Priority: Critical
> Fix For: 3.3.4
>
>
> We have an online service which uses Hadoop FileSystem to load files on 
> Cloud storage.






[jira] [Updated] (HADOOP-19239) Enhance FileSystem to honor token and expiration in its cache

2024-07-27 Thread Xiang Li (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiang Li updated HADOOP-19239:
------------------------------
Affects Version/s: 3.3.4
   (was: 3.3.6)

> Enhance FileSystem to honor token and expiration in its cache
> ----------------------------------------------------------------------
>
> Key: HADOOP-19239
> URL: https://issues.apache.org/jira/browse/HADOOP-19239
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 3.3.4
>Reporter: Xiang Li
>Priority: Critical
> Fix For: 3.3.4
>
>







[jira] [Created] (HADOOP-19239) Enhance FileSystem to honor token and expiration in its cache

2024-07-27 Thread Xiang Li (Jira)
Xiang Li created HADOOP-19239:
------------------------------

 Summary: Enhance FileSystem to honor token and expiration in its 
cache
 Key: HADOOP-19239
 URL: https://issues.apache.org/jira/browse/HADOOP-19239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 3.3.6
Reporter: Xiang Li
 Fix For: 3.3.4









[jira] [Commented] (HADOOP-19237) upgrade dnsjava to 3.6.0 due to CVEs

2024-07-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17869065#comment-17869065
 ] 

ASF GitHub Bot commented on HADOOP-19237:
-----------------------------------------

hadoop-yetus commented on PR #6961:
URL: https://github.com/apache/hadoop/pull/6961#issuecomment-2253726357

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 21s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  18m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 58s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  26m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   8m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +0 :ok: |  spotbugs  |   0m 19s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 20s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |  32m 32s | 
[/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/3/artifact/out/branch-spotbugs-root-warnings.html)
 |  root in trunk has 2 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  69m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |  32m 25s | 
[/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/3/artifact/out/patch-mvninstall-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  compile  |  17m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  17m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  16m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 43s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  19m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   8m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   7m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +0 :ok: |  spotbugs  |   0m 19s |  |  hadoop-project has no data from 
spotbugs  |
   | +0 :ok: |  spotbugs  |   0m 21s |  |  
hadoop-client-modules/hadoop-client-runtime has no data from spotbugs  |
   | -1 :x: |  shadedclient  |  69m 40s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 663m 43s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/3/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1066m 19s |  |  |
   
   
   | Reason | Tests |
   |---------------:|:------|
   | Unreaped Processes | root:2 |
   | Failed junit tests | hadoop.registry.server.dns.TestSecureRegistryDNS |
   
   
   | Subsystem | Report/Notes |
   |------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/3/artifact/out/Dockerfile

[jira] [Commented] (HADOOP-19131) WrappedIO to export modern filesystem/statistics APIs in a reflection friendly form

2024-07-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17869057#comment-17869057
 ] 

ASF GitHub Bot commented on HADOOP-19131:
-----------------------------------------

hadoop-yetus commented on PR #6686:
URL: https://github.com/apache/hadoop/pull/6686#issuecomment-2253690393

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 13 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 27s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  16m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   6m  0s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   5m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   9m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m  6s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  34m 33s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  16m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  16m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 27s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6686/31/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 59 new + 66 unchanged - 3 fixed = 125 total (was 
69)  |
   | +1 :green_heart: |  mvnsite  |   5m 59s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 12s | 
[/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6686/31/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt)
 |  
hadoop-common-project_hadoop-common-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2
 with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  javadoc  |   5m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |  10m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 53s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 226m 33s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 21s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 55s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 55s |  |  hadoop-aliyun in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 14s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 510m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |------:|:-------------|
   | Docker

[jira] [Commented] (HADOOP-19231) add JacksonUtil to centralise some code

2024-07-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17869051#comment-17869051
 ] 

ASF GitHub Bot commented on HADOOP-19231:
-----------------------------------------

hadoop-yetus commented on PR #6953:
URL: https://github.com/apache/hadoop/pull/6953#issuecomment-2253660606

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  9s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  40m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  21m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   5m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  20m  5s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  17m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |  16m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | -1 :x: |  spotbugs  |   1m  5s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/8/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   1m 24s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/8/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  41m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  11m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  19m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  17m 52s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 50s |  |  root: The patch generated 
0 new + 635 unchanged - 6 fixed = 635 total (was 641)  |
   | +1 :green_heart: |  mvnsite  |  18m 48s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |  15m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |  15m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |  33m 36s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 35s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 48s |  |  hadoop-kms in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 41s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 260m 48s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   6m  5s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 53s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   4m 57s |  |  
hadoop-yarn-server-applicationhistoryservice in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  5s |  |  
hadoop-yarn-server

[jira] [Commented] (HADOOP-19221) S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"

2024-07-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17868984#comment-17868984
 ] 

ASF GitHub Bot commented on HADOOP-19221:
-----------------------------------------

steveloughran commented on code in PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#discussion_r1693359752


##########
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java:
##########
@@ -228,15 +228,15 @@ protected Map<Class<? extends Exception>, RetryPolicy> 
createExceptionMap() {
 // throttled requests can be retried, always
 policyMap.put(AWSServiceThrottledException.class, throttlePolicy);
 
-// Status 5xx error code is an immediate failure
+// Status 5xx error code has historically been treated as an immediate 
failure
 // this is a sign of a server-side problem, and while
 // rare in AWS S3, it does happen on third party stores.
 // (out of disk space, etc).
 // by the time we get here, the aws sdk will have
-// already retried.
+// already retried, if it is configured to retry exceptions.
 // there is specific handling for some 5XX codes (501, 503);
 // this is for everything else
-policyMap.put(AWSStatus500Exception.class, fail);
+policyMap.put(AWSStatus500Exception.class, retryAwsClientExceptions);

Review Comment:
   See the full comment below. Along with that, I really don't like looking in 
error strings; way too brittle for production code. Even in tests I like to 
share the text across production and test classes as constants. 
   
   (Yes, I know about org.apache.hadoop.fs.s3a.impl.ErrorTranslation; that 
doesn't mean I like it.)
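A tiny sketch of the shared-constants point (the class and constant names are
hypothetical):

{code}
// Hypothetical constants holder: production code builds its error messages
// from these strings, and tests assert against the same constants instead
// of copying string literals that can drift out of sync.
public final class RetryErrorText {

  public static final String REQUEST_TIMEOUT =
      "Status Code: 400; Error Code: RequestTimeout";

  private RetryErrorText() {
  }
}
{code}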





> S3A: Unable to recover from failure of multipart block upload attempt "Status 
> Code: 400; Error Code: RequestTimeout"
> ----------------------------------------------------------------------
>
> Key: HADOOP-19221
> URL: https://issues.apache.org/jira/browse/HADOOP-19221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> If a multipart PUT request fails for some reason (e.g. network error) then 
> all subsequent retry attempts fail with a 400 Response and ErrorCode 
> RequestTimeout .
> {code}
> Your socket connection to the server was not read from or written to within 
> the timeout period. Idle connections will be closed. (Service: Amazon S3; 
> Status Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended 
> Request ID:
> {code}
> The list of suppressed exceptions contains the root cause (the initial 
> failure was a 500); all retries failed to upload properly from the source 
> input stream {{RequestBody.fromInputStream(fileStream, size)}}.
> Hypothesis: the mark/reset stuff doesn't work for input streams. On the v1 
> sdk we would build a multipart block upload request passing in (file, 
> offset, length); the way we are now doing this doesn't recover.
> Probably fixable by providing our own {{ContentStreamProvider}} 
> implementations for (see the sketch below):
> # file + offset + length
> # bytebuffer
> # byte array
> The sdk does have explicit support for the memory ones, but they copy the 
> data blocks first. We don't want that, as it would double the memory 
> requirements of active blocks.
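A hedged sketch of what a (file + offset + length) provider could look like.
ContentStreamProvider is the real SDK v2 interface (a single newStream()
method); the class itself and its bounding wrapper are assumptions about how
the fix might be shaped, not the actual patch:

{code}
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import software.amazon.awssdk.http.ContentStreamProvider;

// Sketch: every retry calls newStream(), which reopens the file at the right
// offset, so a retried part upload re-reads the same bytes instead of relying
// on mark/reset of a half-consumed stream.
final class FilePartStreamProvider implements ContentStreamProvider {
  private final Path file;
  private final long offset;
  private final long length;

  FilePartStreamProvider(Path file, long offset, long length) {
    this.file = file;
    this.offset = offset;
    this.length = length;
  }

  @Override
  public InputStream newStream() {
    try {
      InputStream in = Files.newInputStream(file);
      // Simplified: a production version would loop, as skip() may skip less.
      if (in.skip(offset) != offset) {
        in.close();
        throw new IOException("could not seek to offset " + offset);
      }
      return new BoundedStream(in, length);
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  /** Minimal wrapper that reports EOF after {@code remaining} bytes. */
  private static final class BoundedStream extends InputStream {
    private final InputStream in;
    private long remaining;

    BoundedStream(InputStream in, long remaining) {
      this.in = in;
      this.remaining = remaining;
    }

    @Override
    public int read() throws IOException {
      if (remaining <= 0) {
        return -1;
      }
      int b = in.read();
      if (b >= 0) {
        remaining--;
      }
      return b;
    }

    @Override
    public void close() throws IOException {
      in.close();
    }
  }
}
{code}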






[jira] [Commented] (HADOOP-19221) S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"

2024-07-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17868982#comment-17868982
 ] 

ASF GitHub Bot commented on HADOOP-19221:
-----------------------------------------

steveloughran commented on PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#issuecomment-2253149858

   @shameersss1 
   I really don't know what is best to do here. 
   
   We have massively cut back on the number of retries which take place in the 
V2 SDK compared to V1, even though we have discussed in the past turning it 
off completely and handling it all ourselves. However, that would break things 
the transfer manager does in separate threads.
   
   The thing is, I do not know how often we see 500 errors against AWS S3 
stores (rather than third party ones with unrecoverable issues) -and now that 
we have seen them, I don't know what the right policy should be. The only 
documentation on what to do seems more focused on 503s, and doesn't provide any 
hints about why a 500 could happen or what to do other than "keep trying maybe 
it'll go away": https://repost.aws/knowledge-center/http-5xx-errors-s3 . I do 
suspect it is very rare -otherwise the AWS team might have noticed their lack 
of resilience here, and we would've found it during our own testing. Any 500 
error at any point other than multipart uploads probably gets recovered from 
nicely so that could've been a background noise of these which we have never 
noticed before. s3a FS stats will now track these, which may be informative.
   
   I don't want to introduce another configuration switch if possible, because 
that adds more documentation, testing, maintenance, et cetera. One thing I was 
considering: should we treat this exactly the same as a throttling exception, 
which has its own configuration settings for retries?
   
   Anyway, if you could talk to your colleagues and make some suggestions based 
on real knowledge of what can happen that would be really nice. Note that we 
are treating 500 as idempotent, the way we do with all the other failures even 
though from a distributed computing purism perspective it is not in fact true.
   
   Not looked at the other comments yet; will do later. Based on a code 
walk-through with Mukud, Harshit and Saikat, I've realised we should make 
absolutely sure that a stream providing a subset of a file fails immediately 
if the read() goes past the allocated space. With tests, obviously.
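A hedged sketch of that fail-fast behaviour (the class name, and the choice of
EOFException rather than some other error, are assumptions about the desired
semantics, not the actual patch):

{code}
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// Sketch: a stream over a file subset that fails loudly if anything tries to
// read beyond its allocated window, instead of silently serving extra bytes.
final class FailFastBoundedStream extends InputStream {
  private final InputStream in;
  private final long limit;   // allocated length of this part
  private long position;

  FailFastBoundedStream(InputStream in, long limit) {
    this.in = in;
    this.limit = limit;
  }

  @Override
  public int read() throws IOException {
    if (position >= limit) {
      // Fail immediately: reading past the allocated space is a logic error.
      throw new EOFException("read past allocated length " + limit);
    }
    int b = in.read();
    if (b >= 0) {
      position++;
    }
    return b;
  }

  @Override
  public void close() throws IOException {
    in.close();
  }
}
{code}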




> S3A: Unable to recover from failure of multipart block upload attempt "Status 
> Code: 400; Error Code: RequestTimeout"
> ----------------------------------------------------------------------
>
> Key: HADOOP-19221
>     URL: https://issues.apache.org/jira/browse/HADOOP-19221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> If a multipart PUT request fails for some reason (e.g. network error) then 
> all subsequent retry attempts fail with a 400 Response and ErrorCode 
> RequestTimeout .
> {code}
> Your socket connection to the server was not read from or written to within 
> the timeout period. Idle connections will be closed. (Service: Amazon S3; 
> Status Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended 
> Request ID:
> {code}
> The list of suppressed exceptions contains the root cause (the initial 
> failure was a 500); all retries failed to upload properly from the source 
> input stream {{RequestBody.fromInputStream(fileStream, size)}}.
> Hypothesis: the mark/reset stuff doesn't work for input streams. On the v1 
> sdk we would build a multipart block upload request passing in (file, 
> offset, length); the way we are now doing this doesn't recover.
> Probably fixable by providing our own {{ContentStreamProvider}} 
> implementations for:
> # file + offset + length
> # bytebuffer
> # byte array
> The sdk does have explicit support for the memory ones, but they copy the 
> data blocks first. We don't want that, as it would double the memory 
> requirements of active blocks.






[jira] [Commented] (HADOOP-19237) upgrade dnsjava to 3.6.0 due to CVEs

2024-07-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17868908#comment-17868908
 ] 

ASF GitHub Bot commented on HADOOP-19237:
-----------------------------------------

hadoop-yetus commented on PR #6961:
URL: https://github.com/apache/hadoop/pull/6961#issuecomment-2252380215

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  17m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  22m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   9m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   8m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +0 :ok: |  spotbugs  |   0m 18s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 19s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |  32m 43s | 
[/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/2/artifact/out/branch-spotbugs-root-warnings.html)
 |  root in trunk has 2 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  71m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 34s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |  35m  3s | 
[/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/2/artifact/out/patch-mvninstall-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  compile  |  18m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  18m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  17m 50s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 33s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  19m 51s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   9m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   9m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +0 :ok: |  spotbugs  |   0m 19s |  |  hadoop-project has no data from 
spotbugs  |
   | +0 :ok: |  spotbugs  |   0m 20s |  |  
hadoop-client-modules/hadoop-client-runtime has no data from spotbugs  |
   | -1 :x: |  shadedclient  |  73m 37s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 720m 15s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/2/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m 22s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1132m 53s |  |  |
   
   
   | Reason | Tests |
   |---------------:|:------|
   | Unreaped Processes | root:2 |
   | Failed junit tests | hadoop.registry.server.dns.TestSecureRegistryDNS |
   
   
   | Subsystem | Report/Notes |
   |------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/2/artifact/out/Dockerfile

[jira] [Commented] (HADOOP-19187) ABFS: [FnsOverBlob]Making AbfsClient Abstract for supporting both DFS and Blob Endpoint

2024-07-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17868887#comment-17868887
 ] 

ASF GitHub Bot commented on HADOOP-19187:
-----------------------------------------

rakeshadr commented on PR #6879:
URL: https://github.com/apache/hadoop/pull/6879#issuecomment-2252239756

   @anujmodi2021 Please update the PR subject line and description to reflect 
your changes. Thanks!




> ABFS: [FnsOverBlob]Making AbfsClient Abstract for supporting both DFS and 
> Blob Endpoint
> ----------------------------------------------------------------------
>
> Key: HADOOP-19187
> URL: https://issues.apache.org/jira/browse/HADOOP-19187
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Azure Services support two different sets of APIs.
> Blob: 
> [https://learn.microsoft.com/en-us/rest/api/storageservices/blob-service-rest-api]
>  
> DFS: 
> [https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/operation-groups]
>  
> As per the plan in HADOOP-19179, this task enables the ABFS driver to work 
> with both sets of APIs as required.
> The scope of this task is to refactor AbfsClient so that AbfsStore can 
> choose which client to interact with, based on the endpoint configured by 
> the user.
> The blob endpoint support will remain "Unsupported" until the whole code is 
> checked in and well tested.






[jira] [Commented] (HADOOP-19187) ABFS: [FnsOverBlob]Making AbfsClient Abstract for supporting both DFS and Blob Endpoint

2024-07-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17868886#comment-17868886
 ] 

ASF GitHub Bot commented on HADOOP-19187:
-----------------------------------------

rakeshadr commented on code in PR #6879:
URL: https://github.com/apache/hadoop/pull/6879#discussion_r1692432341


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##########
@@ -434,10 +445,70 @@ public AbfsConfiguration(final Configuration rawConfig, String accountName)
     }
   }
 
+  public AbfsConfiguration(final Configuration rawConfig, String accountName)
+      throws IllegalAccessException, IOException {
+    this(rawConfig, accountName, AbfsServiceType.DFS);
+  }
+
   public Trilean getIsNamespaceEnabledAccount() {
     return Trilean.getTrilean(isNamespaceEnabledAccount);
   }
 
+  /**
+   * Returns the service type to be used based on the filesystem configuration.
+   * Precedence is given to service type configured for FNS Accounts using
+   * "fs.azure.fns.account.service.type". If not configured, then the service
+   * type identified from url used to initialize filesystem will be used.
+   * @return the service type.
+   */
+  public AbfsServiceType getFsConfiguredServiceType() {
+    return getEnum(FS_AZURE_FNS_ACCOUNT_SERVICE_TYPE, fsConfiguredServiceType);
+  }
+
+  /**
+   * Returns the service type configured for FNS Accounts to override the
+   * service type identified by URL used to initialize the filesystem.
+   * @return the service type.
+   */
+  public AbfsServiceType getConfiguredServiceTypeForFNSAccounts() {
+    return getEnum(FS_AZURE_FNS_ACCOUNT_SERVICE_TYPE, null);
+  }
+
+  /**
+   * Returns the service type to be used for Ingress Operations irrespective of account type.
+   * Default value is the same as the service type configured for the file system.
+   * @return the service type.
+   */
+  public AbfsServiceType getIngressServiceType() {
+    return getEnum(FS_AZURE_INGRESS_SERVICE_TYPE, getFsConfiguredServiceType());
+  }
+
+  public boolean isDfsToBlobFallbackEnabled() {
+    return isDfsToBlobFallbackEnabled;
+  }
+
+  /**
+   * Checks if the service type configured is valid for account type used.
+   * HNS Enabled accounts cannot have service type as BLOB.
+   * @param isHNSEnabled Flag to indicate if HNS is enabled for the account.
+   * @throws InvalidConfigurationValueException if the service type is invalid.
+   */
+  public void validateConfiguredServiceType(boolean isHNSEnabled)
+      throws InvalidConfigurationValueException {
+    // Todo: [FnsOverBlob] - Remove this check, Failing FS Init with Blob Endpoint Until FNS over Blob is ready.
+    if (getFsConfiguredServiceType() == AbfsServiceType.BLOB) {
+      throw new InvalidConfigurationValueException(FS_DEFAULT_NAME_KEY,
+          "Blob Endpoint Support not yet available");
+    }
+    if (isHNSEnabled && getConfiguredServiceTypeForFNSAccounts() == AbfsServiceType.BLOB) {
+      throw new InvalidConfigurationValueException(
+          FS_AZURE_FNS_ACCOUNT_SERVICE_TYPE, "Cannot be BLOB for HNS Account");
+    } else if (isHNSEnabled && fsConfiguredServiceType == AbfsServiceType.BLOB) {
+      throw new InvalidConfigurationValueException(FS_DEFAULT_NAME_KEY,

Review Comment:
   Please add test case for this condition
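
   A minimal JUnit sketch of such a test. The constructor, configuration key and
   validation method come from the hunk above; the class name, account string and
   assertThrows usage are invented for illustration:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
import org.apache.hadoop.fs.azurebfs.contracts.exceptions.InvalidConfigurationValueException;
import org.junit.Test;

import static org.junit.Assert.assertThrows;

public class TestServiceTypeValidationSketch {

  @Test
  public void testHnsAccountRejectsBlobServiceType() throws Exception {
    Configuration rawConfig = new Configuration();
    // Override the service type for this account to BLOB.
    rawConfig.set("fs.azure.fns.account.service.type", "BLOB");
    AbfsConfiguration abfsConfig =
        new AbfsConfiguration(rawConfig, "testaccount");

    // With the temporary Blob guard in place this throws immediately; once
    // that guard is removed, the HNS + BLOB branch should still reject it.
    assertThrows(InvalidConfigurationValueException.class,
        () -> abfsConfig.validateConfiguredServiceType(true));
  }
}
{code}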



[jira] [Commented] (HADOOP-19237) upgrade dnsjava to 3.6.0 due to CVEs

2024-07-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868861#comment-17868861
 ] 

ASF GitHub Bot commented on HADOOP-19237:
-

hadoop-yetus commented on PR #6961:
URL: https://github.com/apache/hadoop/pull/6961#issuecomment-2252079116

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 15s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m  0s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  16m 34s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  23m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   8m 52s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   8m  6s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +0 :ok: |  spotbugs  |   0m 18s |  |  branch/hadoop-project no spotbugs output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |  33m 14s | [/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/1/artifact/out/branch-spotbugs-root-warnings.html) |  root in trunk has 2 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  69m  1s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 34s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |  19m 52s | [/patch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/1/artifact/out/patch-mvninstall-root.txt) |  root in the patch failed.  |
   | +1 :green_heart: |  compile  |  18m 10s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  18m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 19s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  17m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   4m  7s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  15m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   8m 38s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   8m  3s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +0 :ok: |  spotbugs  |   0m 20s |  |  hadoop-project has no data from spotbugs  |
   | -1 :x: |  shadedclient  |  57m 13s |  |  patch has errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 760m 35s | [/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/1/artifact/out/patch-unit-root.txt) |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 18s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 1121m 12s |  |  |


   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.registry.server.dns.TestSecureRegistryDNS |
   |   | hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 |


   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6961/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6961 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint

[jira] [Commented] (HADOOP-19219) Resolve Certificate error in Hadoop-auth tests.

2024-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868848#comment-17868848
 ] 

ASF GitHub Bot commented on HADOOP-19219:
-

pan3793 commented on code in PR #6939:
URL: https://github.com/apache/hadoop/pull/6939#discussion_r1692450733


##
hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh:
##
@@ -618,8 +618,7 @@ function hadoop_bootstrap
 
   export HADOOP_OS_TYPE=${HADOOP_OS_TYPE:-$(uname -s)}
 
-  # defaults
-  export HADOOP_OPTS=${HADOOP_OPTS:-"-Djava.net.preferIPv4Stack=true"}
+ export HADOOP_OPTS="${HADOOP_OPTS:-"-Djava.net.preferIPv4Stack=true"} -XX:+IgnoreUnrecognizedVMOptions --add-exports=java.base/sun.security.x509=ALL-UNNAMED --add-exports=java.base/sun.security.util=ALL-UNNAMED"

Review Comment:
   I would suggest adding the JPMS opts in `function hadoop_finalize_hadoop_opts`, 
or adding a dedicated function `hadoop_finalize_jpms_opts`.
   
   Also, we should add comments on both sides (here and pom.xml) to remind 
future contributors to keep the JPMS opts in sync.





> Resolve Certificate error in Hadoop-auth tests.
> ---
>
> Key: HADOOP-19219
>     URL: https://issues.apache.org/jira/browse/HADOOP-19219
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Muskan Mishra
>Priority: Major
>  Labels: pull-request-available
>
> While compiling Hadoop-Trunk with JDK17, we faced the following errors in 
> the TestMultiSchemeAuthenticationHandler and 
> TestLdapAuthenticationHandler classes.
> {code:java}
> [INFO] Running 
> org.apache.hadoop.security.authentication.server.TestMultiSchemeAuthenticationHandler
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.256 
> s <<< FAILURE! - in 
> org.apache.hadoop.security.authentication.server.TestMultiSchemeAuthenticationHandler
> [ERROR] 
> org.apache.hadoop.security.authentication.server.TestMultiSchemeAuthenticationHandler
>   Time elapsed: 1.255 s  <<< ERROR!
> java.lang.IllegalAccessError: class 
> org.apache.directory.server.core.security.CertificateUtil (in unnamed module 
> @0x32e614e9) cannot access class sun.security.x509.X500Name (in module 
> java.base) because module java.base does not export sun.security.x509 to 
> unnamed module @0x32e614e9
> at 
> org.apache.directory.server.core.security.CertificateUtil.createTempKeyStore(CertificateUtil.java:334)
> at 
> org.apache.directory.server.factory.ServerAnnotationProcessor.instantiateLdapServer(ServerAnnotationProcessor.java:158)
> at 
> org.apache.directory.server.factory.ServerAnnotationProcessor.createLdapServer(ServerAnnotationProcessor.java:318)
> at 
> org.apache.directory.server.factory.ServerAnnotationProcessor.createLdapServer(ServerAnnotationProcessor.java:351)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-19226) ABFS: [FnsOverBlob]Implementing Azure Rest APIs on Blob Endpoint for AbfsBlobClient

2024-07-25 Thread Rakesh Radhakrishnan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rakesh Radhakrishnan reassigned HADOOP-19226:
-

Assignee: Anuj Modi

> ABFS: [FnsOverBlob]Implementing Azure Rest APIs on Blob Endpoint for 
> AbfsBlobClient
> ---
>
> Key: HADOOP-19226
> URL: https://issues.apache.org/jira/browse/HADOOP-19226
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> This is the second task in a series of tasks for implementing Blob Endpoint 
> support for FNS accounts.
> This patch will have changes to implement all the APIs over the Blob Endpoint 
> as a part of implementing AbfsBlobClient.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19238) Fix create-release script for arm64 based MacOS

2024-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868845#comment-17868845
 ] 

ASF GitHub Bot commented on HADOOP-19238:
-

pan3793 commented on code in PR #6962:
URL: https://github.com/apache/hadoop/pull/6962#discussion_r1692420326


##
dev-support/bin/create-release:
##
@@ -523,7 +523,6 @@ function dockermode
 echo "USER ${user_name}"
 printf "\n\n"
   ) | docker build -t "${imgname}" -f - "${BASEDIR}"/dev-support/docker/
-

Review Comment:
   nit: unnecessary change





> Fix create-release script for arm64 based MacOS
> ---
>
> Key: HADOOP-19238
> URL: https://issues.apache.org/jira/browse/HADOOP-19238
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19211) AliyunOSS: Support vectored read API

2024-07-25 Thread wujinhu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868841#comment-17868841
 ] 

wujinhu commented on HADOOP-19211:
--

Hi [~ste...@apache.org] I'm on vacation this week, and will keep working on it 
next week. 

> AliyunOSS: Support vectored read API
> 
>
> Key: HADOOP-19211
> URL: https://issues.apache.org/jira/browse/HADOOP-19211
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/oss
>Affects Versions: 3.2.4, 3.3.6
>Reporter: wujinhu
>Assignee: wujinhu
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19238) Fix create-release script for arm64 based MacOS

2024-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868827#comment-17868827
 ] 

ASF GitHub Bot commented on HADOOP-19238:
-

hadoop-yetus commented on PR #6962:
URL: https://github.com/apache/hadoop/pull/6962#issuecomment-2251593008

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  1s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  35m  5s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 34s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  shadedclient  |  32m 29s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   |  86m 29s |  |  |


   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.46 ServerAPI=1.46 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6962/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6962 |
   | Optional Tests | dupname asflicense codespell detsecrets shellcheck shelldocs |
   | uname | Linux 911aeda96bcb 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d1a64db91f2f1d3e4cee50be80a96c7ce0790aa1 |
   | Max. process+thread count | 730 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6962/1/console |
   | versions | git=2.25.1 maven=3.6.3 shellcheck=0.7.0 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Fix create-release script for arm64 based MacOS
> ---
>
> Key: HADOOP-19238
> URL: https://issues.apache.org/jira/browse/HADOOP-19238
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19221) S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"

2024-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868828#comment-17868828
 ] 

ASF GitHub Bot commented on HADOOP-19221:
-

mukund-thakur commented on code in PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#discussion_r1692281548


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/UploadContentProviders.java:
##
@@ -0,0 +1,396 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.impl;
+
+import java.io.BufferedInputStream;
+import java.io.ByteArrayInputStream;
+import java.io.Closeable;
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.InputStream;
+import java.io.UncheckedIOException;
+import java.nio.ByteBuffer;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+import software.amazon.awssdk.http.ContentStreamProvider;
+
+import org.apache.hadoop.classification.VisibleForTesting;
+import org.apache.hadoop.fs.store.ByteBufferInputStream;
+
+import static java.util.Objects.requireNonNull;
+import static org.apache.hadoop.io.IOUtils.cleanupWithLogger;
+import static org.apache.hadoop.util.Preconditions.checkArgument;
+import static org.apache.hadoop.util.functional.FunctionalIO.uncheckIOExceptions;
+
+/**
+ * Implementations of {@code software.amazon.awssdk.http.ContentStreamProvider}.
+ * 
+ * These are required to ensure that retry of multipart uploads are reliable,
+ * while also avoiding memory copy/consumption overhead.
+ * 
+ * For these reasons the providers built in to the AWS SDK are not used.
+ * 
+ * See HADOOP-19221 for details.
+ */
+public final class UploadContentProviders {
+
+  public static final Logger LOG = LoggerFactory.getLogger(UploadContentProviders.class);
+
+  private UploadContentProviders() {
+  }
+
+  /**
+   * Create a content provider from a file.
+   * @param file file to read.
+   * @param offset offset in file.
+   * @param size of data.
+   * @return the provider
+   * @throws IllegalArgumentException if the offset is negative.
+   */
+  public static BaseContentProvider fileContentProvider(
+  File file,
+  long offset,
+  final int size) {
+
+return new FileWithOffsetContentProvider(file, offset, size);
+  }
+
+  /**
+   * Create a content provider from a byte buffer.
+   * The buffer is not copied and MUST NOT be modified while
+   * the upload is taking place.
+   * @param byteBuffer buffer to read.
+   * @param size size of the data.
+   * @return the provider
+   * @throws IllegalArgumentException if the arguments are invalid.
+   * @throws NullPointerException if the buffer is null
+   */
+  public static BaseContentProvider byteBufferContentProvider(
+  final ByteBuffer byteBuffer,
+  final int size) {
+
+return new ByteBufferContentProvider(byteBuffer, size);
+  }
+
+  /**
+   * Create a content provider for all or part of a byte array.
+   * The buffer is not copied and MUST NOT be modified while
+   * the upload is taking place.
+   * @param bytes buffer to read.
+   * @param offset offset in buffer.
+   * @param size size of the data.
+   * @return the provider
+   * @throws IllegalArgumentException if the arguments are invalid.
+   * @throws NullPointerException if the buffer is null.
+   */
+  public static BaseContentProvider byteArrayContentProvider(
+  final byte[] bytes, final int offset, final int size) {
+return new ByteArrayContentProvider(bytes, offset, size);
+  }
+
+  /**
+   * Create a content provider for all of a byte array.
+   * @param bytes buffer to read.
+   * @return the provider
+   * @throws IllegalArgumentException if the arguments are invalid.
+   * @throws NullPointerException if the buffer is null.
+   */
+  public static BaseContentProvider byteArrayContentProvider(
+  final byte[] bytes) {
+return byteArrayContentProvider(bytes, 0, bytes.length);
+  }
+
+  /**
+   * Base class for content providers; tracks the number of times a stream
+   * has been opened.
+   * @param  type of stream created.
+   */
+  @VisibleForTesting
+  public static abstract class BaseContentProvid
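
A hedged usage sketch of the factory methods quoted above, assuming the
signatures shown in the hunk (the generic type parameters on BaseContentProvider
are elided there):

{code:java}
import java.io.File;
import java.nio.ByteBuffer;

import org.apache.hadoop.fs.s3a.impl.UploadContentProviders;

public class ContentProviderUsageSketch {
  public static void main(String[] args) {
    // Provider over part of a file: a retried multipart upload can reopen
    // the stream at the same offset instead of failing the whole upload.
    UploadContentProviders.BaseContentProvider<?> fromFile =
        UploadContentProviders.fileContentProvider(
            new File("block-0001.bin"), 0, 8 * 1024 * 1024);

    // Provider over a byte buffer; per the javadoc above, the buffer must
    // not be modified while the upload is taking place.
    ByteBuffer buffer = ByteBuffer.allocate(1024);
    UploadContentProviders.BaseContentProvider<?> fromBuffer =
        UploadContentProviders.byteBufferContentProvider(buffer, buffer.remaining());

    System.out.println(fromFile + " / " + fromBuffer);
  }
}
{code}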

[jira] [Commented] (HADOOP-19231) add JacksonUtil to centralise some code

2024-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868822#comment-17868822
 ] 

ASF GitHub Bot commented on HADOOP-19231:
-

hadoop-yetus commented on PR #6953:
URL: https://github.com/apache/hadoop/pull/6953#issuecomment-2251556530

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  8s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 47s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  18m  3s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  18m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  16m 14s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |  15m 42s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | -1 :x: |  spotbugs  |   1m  0s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/7/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html) |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs warnings.  |
   | -1 :x: |  spotbugs  |   1m 10s | [/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/7/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html) |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  39m 47s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  11m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 56s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  18m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 52s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  17m 52s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 45s |  |  root: The patch generated 0 new + 635 unchanged - 6 fixed = 635 total (was 641)  |
   | +1 :green_heart: |  mvnsite  |  18m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |  16m  3s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |  15m 41s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |  33m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 59s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 37s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   3m 48s |  |  hadoop-kms in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 42s |  |  hadoop-hdfs-client in the patch passed.  |
   | -1 :x: |  unit  | 167m 47s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch passed.  |
   | -1 :x: |  unit  |  11m 34s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/7/artifact/out/patch-unit-hadoop-hdfs

[jira] [Updated] (HADOOP-19238) Fix create-release script for arm64 based MacOS

2024-07-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19238:

Labels: pull-request-available  (was: )

> Fix create-release script for arm64 based MacOS
> ---
>
> Key: HADOOP-19238
> URL: https://issues.apache.org/jira/browse/HADOOP-19238
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19238) Fix create-release script for arm64 based MacOS

2024-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868814#comment-17868814
 ] 

ASF GitHub Bot commented on HADOOP-19238:
-

mukund-thakur opened a new pull request, #6962:
URL: https://github.com/apache/hadoop/pull/6962

   
   
   ### Description of PR
   Add proper checks for arm64-based machines.
   
   ### How was this patch tested?
   Tested on MacOS
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Fix create-release script for arm64 based MacOS
> ---
>
> Key: HADOOP-19238
> URL: https://issues.apache.org/jira/browse/HADOOP-19238
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19238) Fix create-release script for arm64 based MacOS

2024-07-25 Thread Mukund Thakur (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukund Thakur updated HADOOP-19238:
---
Affects Version/s: 3.4.0

> Fix create-release script for arm64 based MacOS
> ---
>
> Key: HADOOP-19238
> URL: https://issues.apache.org/jira/browse/HADOOP-19238
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19238) Fix create-release script for arm64 based MacOS

2024-07-25 Thread Mukund Thakur (Jira)
Mukund Thakur created HADOOP-19238:
--

 Summary: Fix create-release script for arm64 based MacOS
 Key: HADOOP-19238
 URL: https://issues.apache.org/jira/browse/HADOOP-19238
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mukund Thakur






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19231) add JacksonUtil to centralise some code

2024-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868689#comment-17868689
 ] 

ASF GitHub Bot commented on HADOOP-19231:
-

hadoop-yetus commented on PR #6953:
URL: https://github.com/apache/hadoop/pull/6953#issuecomment-2250340480

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 44s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  39m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 54s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  19m 59s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   5m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  18m 58s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  15m  9s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |  14m 47s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | -1 :x: |  spotbugs  |   1m  3s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/6/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html) |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs warnings.  |
   | -1 :x: |  spotbugs  |   1m 13s | [/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/6/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html) |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  42m 11s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 51s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  12m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 57s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  20m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 44s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  19m 44s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   5m 20s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/6/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 3 new + 635 unchanged - 6 fixed = 638 total (was 641)  |
   | +1 :green_heart: |  mvnsite  |  18m 21s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m  9s | [/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/6/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt) |  hadoop-common-project_hadoop-common-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  javadoc  |  14m 56s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |  35m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  44m  2s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  22m 11s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   4m  6s |  |  hadoop-kms in the patch passed

[jira] [Commented] (HADOOP-19236) Integration of Volcano Engine TOS in Hadoop.

2024-07-25 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868688#comment-17868688
 ] 

Jinglun commented on HADOOP-19236:
--

Thanks very much [~ste...@apache.org] for your patient and detailed comments, 
especially for pointing out the right direction. Let me answer the questions.

 

> How far have you come along with this?

We have implemented a version of the hadoop-tos integration and some users are 
using it. We need some extra effort to make it satisfy the Hadoop 
requirements, including design, code style, dependency issues etc.

 

> One issue for incorporating into hadoop is long-term maintenance and 
> testing. We do have people working on cos/oss maintenance and there is an 
> expectation that this would continue for tos.

Totally agree that long-term maintenance is very important. I'm a hadoop 
committer (mostly working on hdfs) and glad to maintain the hadoop-tos module.

 

> I've had a quick look at your new dependency and while it is licensed 
> appropriately, you're going to have to cut out that okio class. There are 
> also going to be problems with transitive dependencies, especially jackson.

Thanks for your guidance. I will fix the dependency problem.

 

> Now the bad news: I cannot personally commit to doing any reviewing of this 
> work, or testing. I'm sorry but I am behind with reviewing PRs related to S3A 
> and ABFS, and any commitment I make to you will be unrealistic. It would be 
> good if you could get support from anyone working on one of the other cloud 
> connector modules to see if they would assist.

That indeed is bad news. You are the most experienced expert in object 
storage filesystems; the work will go much more smoothly with your help. Again, 
thanks a lot for pointing out the right direction. I'll try to push forward the 
follow-up work.

> Integration of Volcano Engine TOS in Hadoop.
> 
>
> Key: HADOOP-19236
> URL: https://issues.apache.org/jira/browse/HADOOP-19236
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, tools
>Affects Versions: 3.4.0
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: Integration of Volcano Engine TOS in Hadoop.pdf
>
>
> Volcano Engine is a fast growing cloud vendor launched by ByteDance, and TOS 
> is the object storage service of Volcano Engine. A common way is to store 
> data into TOS and run Hadoop/Spark/Flink applications to access TOS. But 
> there is no original support for TOS in hadoop, thus it is not easy for users 
> to build their Big Data System based on TOS.
>  
> This work aims to integrate TOS with Hadoop to help users run their 
> applications on TOS. Users only need to do some simple configuration, then 
> their applications can read/write TOS without any code change. This work is 
> similar to AWS S3, AzureBlob, AliyunOSS, Tencent COS and HuaweiCloud Object 
> Storage in Hadoop.
>  
>  Please see the attached document "Integration of Volcano Engine TOS in 
> Hadoop" for more details.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19231) add JacksonUtil to centralise some code

2024-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868663#comment-17868663
 ] 

ASF GitHub Bot commented on HADOOP-19231:
-

hadoop-yetus commented on PR #6953:
URL: https://github.com/apache/hadoop/pull/6953#issuecomment-2250183729

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 30s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m  4s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  18m 10s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  17m  0s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  15m  7s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |  15m 11s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | -1 :x: |  spotbugs  |   1m  1s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/5/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html) |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs warnings.  |
   | -1 :x: |  spotbugs  |   1m  8s | [/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/5/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html) |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  42m 28s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  2s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  11m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 34s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  21m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  6s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  19m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   4m 53s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/5/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 3 new + 637 unchanged - 4 fixed = 640 total (was 641)  |
   | +1 :green_heart: |  mvnsite  |  17m 12s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 14s | [/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/5/artifact/out/results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt) |  hadoop-common-project_hadoop-common-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  javadoc  |  15m  5s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |  34m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  42m  1s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 26s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   3m 51s |  |  hadoop-kms in the patch passed

[jira] [Updated] (HADOOP-19237) upgrade dnsjava to 3.6.0 due to CVEs

2024-07-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19237:

Labels: pull-request-available  (was: )

> upgrade dnsjava to 3.6.0 due to CVEs
> 
>
> Key: HADOOP-19237
> URL: https://issues.apache.org/jira/browse/HADOOP-19237
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> See https://github.com/apache/hadoop/pull/6955 - but this is missing the 
> necessary change to LICENSE-binary (which already has an out of date version 
> for dnsjava).
> * CVE-2024-25638 https://github.com/advisories/GHSA-cfxw-4h78-h7fw
> * https://github.com/advisories/GHSA-mmwx-rj87-vfgr
> * https://github.com/advisories/GHSA-crjg-w57m-rqqf



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19237) upgrade dnsjava to 3.6.0 due to CVEs

2024-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868656#comment-17868656
 ] 

ASF GitHub Bot commented on HADOOP-19237:
-

pjfanning opened a new pull request, #6961:
URL: https://github.com/apache/hadoop/pull/6961

   
   
   ### Description of PR
   
   HADOOP-19237 - replaces #6955 
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> upgrade dnsjava to 3.6.0 due to CVEs
> 
>
> Key: HADOOP-19237
> URL: https://issues.apache.org/jira/browse/HADOOP-19237
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: PJ Fanning
>Priority: Major
>
> See https://github.com/apache/hadoop/pull/6955 - but this is missing the 
> necessary change to LICENSE-binary (which already has an out of date version 
> for dnsjava).
> * CVE-2024-25638 https://github.com/advisories/GHSA-cfxw-4h78-h7fw
> * https://github.com/advisories/GHSA-mmwx-rj87-vfgr
> * https://github.com/advisories/GHSA-crjg-w57m-rqqf



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19237) upgrade dnsjava to 3.6.0 due to CVEs

2024-07-25 Thread PJ Fanning (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PJ Fanning updated HADOOP-19237:

Description: 
See https://github.com/apache/hadoop/pull/6955 - but this is missing the 
necessary change to LICENSE-binary (which already has an out of date version 
for dnsjava).

* CVE-2024-25638 https://github.com/advisories/GHSA-cfxw-4h78-h7fw
* https://github.com/advisories/GHSA-mmwx-rj87-vfgr
* https://github.com/advisories/GHSA-crjg-w57m-rqqf



  was:
See https://github.com/apache/hadoop/pull/6955 - but this is missing the 
necessary change to LICENSE-binary (which already has an out of date version 
for dnsjava).

* CVE-2023-32695
* CVE-2024-25638
* https://github.com/advisories/GHSA-crjg-w57m-rqqf




> upgrade dnsjava to 3.6.0 due to CVEs
> 
>
> Key: HADOOP-19237
> URL: https://issues.apache.org/jira/browse/HADOOP-19237
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: PJ Fanning
>Priority: Major
>
> See https://github.com/apache/hadoop/pull/6955 - but this is missing the 
> necessary change to LICENSE-binary (which already has an out of date version 
> for dnsjava).
> * CVE-2024-25638 https://github.com/advisories/GHSA-cfxw-4h78-h7fw
> * https://github.com/advisories/GHSA-mmwx-rj87-vfgr
> * https://github.com/advisories/GHSA-crjg-w57m-rqqf



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19237) upgrade dnsjava to 3.6.0 due to CVEs

2024-07-25 Thread PJ Fanning (Jira)
PJ Fanning created HADOOP-19237:
---

 Summary: upgrade dnsjava to 3.6.0 due to CVEs
 Key: HADOOP-19237
 URL: https://issues.apache.org/jira/browse/HADOOP-19237
 Project: Hadoop Common
  Issue Type: Task
Reporter: PJ Fanning


See https://github.com/apache/hadoop/pull/6955 - but this is missing the 
necessary change to LICENSE-binary (which already has an out of date version 
for dnsjava).

* CVE-2023-32695
* CVE-2024-25638
* https://github.com/advisories/GHSA-crjg-w57m-rqqf





--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19230) upgrade to jackson 2.14.3

2024-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868647#comment-17868647
 ] 

ASF GitHub Bot commented on HADOOP-19230:
-

pjfanning commented on PR #6761:
URL: https://github.com/apache/hadoop/pull/6761#issuecomment-2250096203

   This is the change to Jackson JAX-RS that is forcing Hadoop to stick with 
Jackson 2.12.
   
https://github.com/FasterXML/jackson-jaxrs-providers/issues/134#issuecomment-1180637522
   
   The new jar has that one class (NoContentException) that Jackson 2.13+ needs.
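
   For reference, a sketch of the class in question: javax.ws.rs.core.NoContentException 
is part of the JAX-RS 2.x API but absent from the old jsr311 (JAX-RS 1.x) jar, so 
code referencing it cannot load unless something like the jsr311-compat jar 
supplies the class:

{code:java}
import javax.ws.rs.core.NoContentException;

public class NoContentProbe {
  public static void main(String[] args) {
    // Loading this class throws NoClassDefFoundError at runtime when only
    // the jsr311 (JAX-RS 1.x) API jar is on the classpath.
    System.out.println(NoContentException.class.getName());
  }
}
{code}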
   




> upgrade to jackson 2.14.3
> -
>
> Key: HADOOP-19230
> URL: https://issues.apache.org/jira/browse/HADOOP-19230
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> Follow up to HADOOP-18332
> I have what I believe is a fix for the Jackson JAX-RS incompatibility.
> https://github.com/pjfanning/jsr311-compat/
> The reason that I want to start by just going to Jackson 2.14 is that Jackson 
> has new StreamReadConstraints in Jackson 2.15 to protect against malicious 
> JSON inputs. The constraints are generous but can cause issues with very 
> large or deeply nested inputs.
> Jackson has had a lot of security hardening fixes recently and it seems 
> problematic to be stuck on an unsupported version of Jackson (2.12).
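
For context, a sketch of the 2.15 constraints mechanism mentioned above; the
builder API is Jackson 2.15+, and the limits shown are illustrative choices,
not Jackson's defaults:

{code:java}
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.StreamReadConstraints;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.json.JsonMapper;

public class StreamConstraintsSketch {
  public static void main(String[] args) {
    JsonFactory factory = JsonFactory.builder()
        .streamReadConstraints(StreamReadConstraints.builder()
            .maxNestingDepth(500)          // guard against deeply nested JSON
            .maxStringLength(10_000_000)   // guard against huge string values
            .build())
        .build();
    ObjectMapper mapper = JsonMapper.builder(factory).build();
    System.out.println(mapper.version());
  }
}
{code}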



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19120) [ABFS]: ApacheHttpClient adaptation as network library

2024-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868644#comment-17868644
 ] 

ASF GitHub Bot commented on HADOOP-19120:
-

steveloughran commented on PR #6959:
URL: https://github.com/apache/hadoop/pull/6959#issuecomment-2250077439

   thanks. just testing the trunk code locally, to see if there are any 
problems first




> [ABFS]: ApacheHttpClient adaptation as network library
> --
>
> Key: HADOOP-19120
> URL: https://issues.apache.org/jira/browse/HADOOP-19120
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.5.0
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Apache HttpClient is more feature-rich and flexible and gives applications 
> more granular control over networking parameters.
> ABFS currently relies on the JDK-net library. This library is managed by 
> OpenJDK and has no performance problem. However, it limits the application's 
> control over networking, and there are very few APIs and hooks exposed that 
> the application can use to get metrics or choose which connection should be 
> reused and when. ApacheHttpClient will give important hooks to fetch 
> important metrics and control networking parameters.
> A custom implementation of connection-pool is used. The implementation is 
> adapted from the JDK8 connection pooling. Reasons for doing it:
> 1. PoolingHttpClientConnectionManager's heuristic caches all the reusable 
> connections it has created. JDK's implementation only caches a limited number 
> of connections. The limit is given by the JVM system property 
> "http.maxConnections". If there is no system property, it defaults to 5 (see 
> the sketch below). Connection-establishment latency increased when all the 
> connections were cached. Hence, adapting the pooling heuristic of JDK netlib.
> 2. In PoolingHttpClientConnectionManager, it expects the application to 
> provide `setMaxPerRoute` and `setMaxTotal`, which the implementation uses as 
> the total number of connections it can create. For applications using ABFS, 
> it is not feasible to provide a value in the initialisation of the 
> connectionManager. JDK's implementation has no cap on the number of 
> connections it can have open at a moment. Hence, adapting the pooling 
> heuristic of JDK netlib.
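
A one-method sketch of the JDK limit described in point 1 above; the property
name and the default of 5 are taken from the description:

{code:java}
public class JdkPoolLimitSketch {
  public static void main(String[] args) {
    // The JDK keep-alive cache holds at most "http.maxConnections" idle
    // connections per destination, defaulting to 5 when the property is unset.
    int cacheLimit = Integer.getInteger("http.maxConnections", 5);
    System.out.println("Idle connections the JDK pool would cache: " + cacheLimit);
  }
}
{code}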



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-19236) Integration of Volcano Engine TOS in Hadoop.

2024-07-25 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-19236:
---

Assignee: Jinglun

> Integration of Volcano Engine TOS in Hadoop.
> 
>
> Key: HADOOP-19236
> URL: https://issues.apache.org/jira/browse/HADOOP-19236
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, tools
>Affects Versions: 3.4.0
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Attachments: Integration of Volcano Engine TOS in Hadoop.pdf
>
>
> Volcano Engine is a fast growing cloud vendor launched by ByteDance, and TOS 
> is the object storage service of Volcano Engine. A common way is to store 
> data into TOS and run Hadoop/Spark/Flink applications to access TOS. But 
> there is no original support for TOS in hadoop, thus it is not easy for users 
> to build their Big Data System based on TOS.
>  
> This work aims to integrate TOS with Hadoop to help users run their 
> applications on TOS. Users only need to do some simple configuration, then 
> their applications can read/write TOS without any code change. This work is 
> similar to AWS S3, AzureBlob, AliyunOSS, Tencent COS and HuaweiCloud Object 
> Storage in Hadoop.
>  
>  Please see the attached document "Integration of Volcano Engine TOS in 
> Hadoop" for more details.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19236) Integration of Volcano Engine TOS in Hadoop.

2024-07-25 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19236:

Affects Version/s: 3.4.0

> Integration of Volcano Engine TOS in Hadoop.
> 
>
> Key: HADOOP-19236
> URL: https://issues.apache.org/jira/browse/HADOOP-19236
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, tools
>Affects Versions: 3.4.0
>Reporter: Jinglun
>Priority: Major
> Attachments: Integration of Volcano Engine TOS in Hadoop.pdf
>
>
> Volcano Engine is a fast-growing cloud vendor launched by ByteDance, and TOS 
> is the object storage service of Volcano Engine. A common pattern is to store 
> data in TOS and run Hadoop/Spark/Flink applications that access it. But there 
> is no native support for TOS in hadoop, so it is not easy for users to build 
> their Big Data systems on TOS.
>  
> This work aims to integrate TOS with Hadoop to help users run their 
> applications on TOS. Users only need to do some simple configuration, and 
> their applications can read/write TOS without any code change. This work is 
> similar to the AWS S3, AzureBlob, AliyunOSS, Tencent COS and HuaweiCloud 
> Object Storage support in Hadoop.
>  
>  Please see the attached document "Integration of Volcano Engine TOS in 
> Hadoop" for more details.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19236) Integration of Volcano Engine TOS in Hadoop.

2024-07-25 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868640#comment-17868640
 ] 

Steve Loughran commented on HADOOP-19236:
-

First, know that you don't have to actually be in our codebase to get picked 
up. As an example, Google GCS is broadly used yet it lives in its own project 
with Google doing most of the development (I do test it though). That means 
getting into hadoop and waiting for our next release is not a blocker to your 
work. That is particularly important as the code would not ship in anything 
before a 3.4.x release.

This also means that you can implement the code along with the unit and 
integration tests without waiting for any PR to be merged into Hadoop. How far 
along are you with this?

One issue for incorporating this into hadoop is long-term maintenance and 
testing. We do have people working on cos/oss maintenance, and there is a 
reasonable expectation that this would continue for tos.

Having had a quick look at the code, I like the separation of the file system 
API and the actual implementation. We are slowly trying to retrofit that into 
the S3A code; ABFS is rather better designed here.

I've had a quick look at your new dependency, and while it is licensed 
appropriately, you're going to have to cut out that okio class. There are also 
going to be problems with transitive dependencies, especially jackson.

Now the bad news: I cannot personally commit to doing any reviewing of this 
work, or testing. I'm sorry, but I am behind with reviewing PRs related to S3A 
and ABFS, and any commitment I made would be unrealistic. It would be good if 
you could get support from anyone working on one of the other cloud connector 
modules, to see if they would assist.

In the meantime
* start with that external repository with the implementation and test suites.
* get on the hadoop developer list and get involved in discussions there, 
especially testing forthcoming releases.
* reviewing changes to hadoop-common relevant to you is also important. I will 
highlight the new bulk delete API, designed for Iceberg compaction on cloud 
storage; vector IO can deliver significant speedups for Parquet and ORC (a 
sketch follows below). You can get familiar with this by reviewing other 
people's PRs: https://issues.apache.org/jira/browse/HADOOP-19211 . Reviewing 
other people's work is an essential part of the collaboration process, and a 
great way for everyone to become familiar with you and your work. 
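
For illustration, a vectored read through the public FileSystem API looks 
roughly like the following sketch; the path, offsets and lengths are 
illustrative only:

{code}
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Sketch: issue one vectored read for several non-contiguous ranges. */
public class VectorIOSketch {
  public static void main(String[] args) throws Exception {
    Path path = new Path(args[0]);  // e.g. a Parquet file
    FileSystem fs = path.getFileSystem(new Configuration());
    // two illustrative ranges; a real reader would use footer metadata
    List<FileRange> ranges = Arrays.asList(
        FileRange.createFileRange(0, 4096),
        FileRange.createFileRange(1_048_576, 8192));
    try (FSDataInputStream in = fs.open(path)) {
      // stores may coalesce adjacent ranges and fetch them in parallel
      in.readVectored(ranges, ByteBuffer::allocate);
      for (FileRange range : ranges) {
        ByteBuffer data = range.getData().get();  // future completes per range
        System.out.println("read " + data.remaining()
            + " bytes @ " + range.getOffset());
      }
    }
  }
}
{code}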

> Integration of Volcano Engine TOS in Hadoop.
> 
>
> Key: HADOOP-19236
> URL: https://issues.apache.org/jira/browse/HADOOP-19236
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, tools
>Reporter: Jinglun
>Priority: Major
> Attachments: Integration of Volcano Engine TOS in Hadoop.pdf
>
>
> Volcano Engine is a fast-growing cloud vendor launched by ByteDance, and TOS 
> is the object storage service of Volcano Engine. A common pattern is to store 
> data in TOS and run Hadoop/Spark/Flink applications that access it. But there 
> is no native support for TOS in hadoop, so it is not easy for users to build 
> their Big Data systems on TOS.
>  
> This work aims to integrate TOS with Hadoop to help users run their 
> applications on TOS. Users only need to do some simple configuration, and 
> their applications can read/write TOS without any code change. This work is 
> similar to the AWS S3, AzureBlob, AliyunOSS, Tencent COS and HuaweiCloud 
> Object Storage support in Hadoop.
>  
>  Please see the attached document "Integration of Volcano Engine TOS in 
> Hadoop" for more details.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19120) [ABFS]: ApacheHttpClient adaptation as network library

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868541#comment-17868541
 ] 

ASF GitHub Bot commented on HADOOP-19120:
-

saxenapranav commented on PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#issuecomment-2249408482

   Thank you very much @steveloughran, and thanks for your time and energy on 
this. I have raised backport PR 
https://github.com/apache/hadoop/pull/6959 on branch-3.4. Thank you very much!




> [ABFS]: ApacheHttpClient adaptation as network library
> --
>
> Key: HADOOP-19120
> URL: https://issues.apache.org/jira/browse/HADOOP-19120
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.5.0
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Apache HttpClient is more feature-rich and flexible and gives the 
> application more granular control over networking parameters.
> ABFS currently relies on the JDK-net library. This library is managed by 
> OpenJDK and has no performance problem. However, it limits the application's 
> control over networking, and there are very few APIs and hooks exposed that 
> the application can use to collect metrics or to control which connection is 
> reused and when. ApacheHttpClient exposes hooks to fetch important metrics 
> and to control networking parameters.
> A custom connection-pool implementation is used, adapted from the JDK8 
> connection pooling. Reasons for doing it:
> 1. PoolingHttpClientConnectionManager's heuristic caches all the reusable 
> connections it has created, while the JDK's implementation caches only a 
> limited number of connections. The limit is given by the JVM system property 
> "http.maxConnections", which defaults to 5 if unset. Connection-establishment 
> latency increased when all connections were cached; hence the JDK netlib 
> pooling heuristic was adapted.
> 2. PoolingHttpClientConnectionManager expects the application to provide 
> `setMaxPerRoute` and `setMaxTotal`, which the implementation uses as the 
> total number of connections it can create. For an application using ABFS, it 
> is not feasible to provide a value when initialising the connectionManager. 
> The JDK's implementation has no cap on the number of connections it can have 
> open at any moment; hence the JDK netlib pooling heuristic was adapted.
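
To make point 2 concrete, these are the knobs on Apache HttpClient 4.x's pool 
manager (a sketch; the limits shown are arbitrary placeholders, which is 
exactly the value ABFS cannot pick up-front):

{code}
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

/** Sketch: the pool limits PoolingHttpClientConnectionManager expects. */
public class PoolingManagerSketch {
  public static void main(String[] args) throws Exception {
    PoolingHttpClientConnectionManager cm =
        new PoolingHttpClientConnectionManager();
    cm.setMaxTotal(200);            // total connections across all routes
    cm.setDefaultMaxPerRoute(20);   // per-route cap, fixed at initialisation
    try (CloseableHttpClient client = HttpClients.custom()
        .setConnectionManager(cm)
        .build()) {
      // requests issued through this client share the capped pool
    }
  }
}
{code}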



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19120) [ABFS]: ApacheHttpClient adaptation as network library

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868540#comment-17868540
 ] 

ASF GitHub Bot commented on HADOOP-19120:
-

saxenapranav commented on PR #6959:
URL: https://github.com/apache/hadoop/pull/6959#issuecomment-2249402554

   Hi @steveloughran, this is a backport of trunk PR 
https://github.com/apache/hadoop/pull/6633. Requesting your kind review please. 
Thank you very much!




> [ABFS]: ApacheHttpClient adaptation as network library
> --
>
> Key: HADOOP-19120
> URL: https://issues.apache.org/jira/browse/HADOOP-19120
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.5.0
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Apache HttpClient is more feature-rich and flexible and gives the 
> application more granular control over networking parameters.
> ABFS currently relies on the JDK-net library. This library is managed by 
> OpenJDK and has no performance problem. However, it limits the application's 
> control over networking, and there are very few APIs and hooks exposed that 
> the application can use to collect metrics or to control which connection is 
> reused and when. ApacheHttpClient exposes hooks to fetch important metrics 
> and to control networking parameters.
> A custom connection-pool implementation is used, adapted from the JDK8 
> connection pooling. Reasons for doing it:
> 1. PoolingHttpClientConnectionManager's heuristic caches all the reusable 
> connections it has created, while the JDK's implementation caches only a 
> limited number of connections. The limit is given by the JVM system property 
> "http.maxConnections", which defaults to 5 if unset. Connection-establishment 
> latency increased when all connections were cached; hence the JDK netlib 
> pooling heuristic was adapted.
> 2. PoolingHttpClientConnectionManager expects the application to provide 
> `setMaxPerRoute` and `setMaxTotal`, which the implementation uses as the 
> total number of connections it can create. For an application using ABFS, it 
> is not feasible to provide a value when initialising the connectionManager. 
> The JDK's implementation has no cap on the number of connections it can have 
> open at any moment; hence the JDK netlib pooling heuristic was adapted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19221) S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868539#comment-17868539
 ] 

ASF GitHub Bot commented on HADOOP-19221:
-

shameersss1 commented on code in PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#discussion_r1690800271


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ARetryPolicy.java:
##
@@ -228,15 +228,15 @@ protected Map<Class<? extends Exception>, RetryPolicy> createExceptionMap() {
     // throttled requests can be retried, always
     policyMap.put(AWSServiceThrottledException.class, throttlePolicy);
 
-    // Status 5xx error code is an immediate failure
+    // Status 5xx error code has historically been treated as an immediate failure
     // this is sign of a server-side problem, and while
     // rare in AWS S3, it does happen on third party stores.
     // (out of disk space, etc).
     // by the time we get here, the aws sdk will have
-    // already retried.
+    // already retried, if it is configured to retry exceptions.
     // there is specific handling for some 5XX codes (501, 503);
     // this is for everything else
-    policyMap.put(AWSStatus500Exception.class, fail);
+    policyMap.put(AWSStatus500Exception.class, retryAwsClientExceptions);

Review Comment:
   Do we need to selectively retry 500 exceptions? Say, only when the cause is 
"Your socket connection...".





> S3A: Unable to recover from failure of multipart block upload attempt "Status 
> Code: 400; Error Code: RequestTimeout"
> 
>
> Key: HADOOP-19221
>     URL: https://issues.apache.org/jira/browse/HADOOP-19221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> If a multipart PUT request fails for some reason (e.g. network error) then 
> all subsequent retry attempts fail with a 400 Response and ErrorCode 
> RequestTimeout.
> {code}
> Your socket connection to the server was not read from or written to within 
> the timeout period. Idle connections will be closed. (Service: Amazon S3; 
> Status Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended 
> Request ID:
> {code}
> The list of suppressed exceptions contains the root cause (the initial 
> failure was a 500); all retries failed to upload properly from the source 
> input stream {{RequestBody.fromInputStream(fileStream, size)}}.
> Hypothesis: the mark/reset stuff doesn't work for input streams. On the v1 
> sdk we would build a multipart block upload request passing in (file, offset, 
> length); the way we are now doing this doesn't recover.
> probably fixable by providing our own {{ContentStreamProvider}} 
> implementations for
> # file + offset + length
> # bytebuffer
> # byte array
> The sdk does have explicit support for the memory ones, but they copy the 
> data blocks first. We don't want that as it would double the memory 
> requirements of active blocks.
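
To illustrate the first option, a sketch of a re-creatable stream provider, 
assuming the v2 SDK's software.amazon.awssdk.http.ContentStreamProvider; the 
class name is illustrative, not the patch's code:

{code}
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.util.LimitInputStream;

import software.amazon.awssdk.http.ContentStreamProvider;

/**
 * Sketch: a provider that hands the SDK a brand-new stream over
 * (file, offset, length) on every retry, so no mark/reset is needed.
 */
public final class FileRangeContentStreamProvider implements ContentStreamProvider {

  private final Path file;
  private final long offset;
  private final long length;

  public FileRangeContentStreamProvider(Path file, long offset, long length) {
    this.file = file;
    this.offset = offset;
    this.length = length;
  }

  @Override
  public InputStream newStream() {
    try {
      // open a fresh stream and seek to the block's offset for each attempt
      InputStream in = Files.newInputStream(file);
      IOUtils.skipFully(in, offset);
      // cap the stream at the block length
      return new LimitInputStream(in, length);
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
{code}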



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19221) S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868538#comment-17868538
 ] 

ASF GitHub Bot commented on HADOOP-19221:
-

shameersss1 commented on code in PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#discussion_r1690798138


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/AWSStatus500Exception.java:
##
@@ -22,21 +22,20 @@
 
 /**
  * A 5xx response came back from a service.
- * The 500 error considered retriable by the AWS SDK, which will have already
+ * <p>
+ * The 500 error is considered retryable by the AWS SDK, which will have already
  * tried it {@code fs.s3a.attempts.maximum} times before reaching s3a
  * code.
- * How it handles other 5xx errors is unknown: S3A FS code will treat them
- * as unrecoverable on the basis that they indicate some third-party store
- * or gateway problem.
+ * <p>
+ * These are rare, but can occur; they are considered retryable.
+ * Note that HADOOP-19221 shows a failure condition where the
+ * SDK itself did not recover on retry from the error.
+ * Mitigation for the specific failure sequence is now in place.
  */
 public class AWSStatus500Exception extends AWSServiceIOException {
   public AWSStatus500Exception(String operation,
       AwsServiceException cause) {
     super(operation, cause);
   }
 
-  @Override
-  public boolean retryable() {

Review Comment:
   Will this make all 500s retriable? I mean, if S3 throws an exception like a 
500 S3 Server Internal Error, do we need to retry from the S3A client as well?





> S3A: Unable to recover from failure of multipart block upload attempt "Status 
> Code: 400; Error Code: RequestTimeout"
> 
>
> Key: HADOOP-19221
> URL: https://issues.apache.org/jira/browse/HADOOP-19221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> If a multipart PUT request fails for some reason (e.g. network error) then 
> all subsequent retry attempts fail with a 400 Response and ErrorCode 
> RequestTimeout.
> {code}
> Your socket connection to the server was not read from or written to within 
> the timeout period. Idle connections will be closed. (Service: Amazon S3; 
> Status Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended 
> Request ID:
> {code}
> The list of suppressed exceptions contains the root cause (the initial 
> failure was a 500); all retries failed to upload properly from the source 
> input stream {{RequestBody.fromInputStream(fileStream, size)}}.
> Hypothesis: the mark/reset stuff doesn't work for input streams. On the v1 
> sdk we would build a multipart block upload request passing in (file, offset, 
> length); the way we are now doing this doesn't recover.
> probably fixable by providing our own {{ContentStreamProvider}} 
> implementations for
> # file + offset + length
> # bytebuffer
> # byte array
> The sdk does have explicit support for the memory ones, but they copy the 
> data blocks first. We don't want that as it would double the memory 
> requirements of active blocks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19221) S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868535#comment-17868535
 ] 

ASF GitHub Bot commented on HADOOP-19221:
-

shameersss1 commented on code in PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#discussion_r1690793840


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java:
##
@@ -927,28 +931,32 @@ private void uploadBlockAsync(final S3ADataBlocks.DataBlock block,
           throw e;
         } finally {
           // close the stream and block
-          cleanupWithLogger(LOG, uploadData, block);
+          LOG.debug("closing block");
+          cleanupWithLogger(LOG, uploadData);
+          cleanupWithLogger(LOG, block);
         }
       });
     partETagsFutures.add(partETagFuture);
   }
 
   /**
    * Block awaiting all outstanding uploads to complete.
-   * @return list of results
+   * @return list of results or null if interrupted.
    * @throws IOException IO Problems
    */
   private List waitForAllPartUploads() throws IOException {
     LOG.debug("Waiting for {} uploads to complete", partETagsFutures.size());
     try {
       return Futures.allAsList(partETagsFutures).get();
     } catch (InterruptedException ie) {
-      LOG.warn("Interrupted partUpload", ie);
-      Thread.currentThread().interrupt();
-      return null;
+      // interruptions are raised if a task is aborted by spark.
+      LOG.warn("Interrupted while waiting for uploads to {} to complete",
+          key, ie);
+      // abort the upload
+      abort();
+      // then regenerate a new InterruptedIOException
+      throw (IOException) new InterruptedIOException(ie.toString()).initCause(ie);
     } catch (ExecutionException ee) {
       //there is no way of recovering so abort
-      //cancel all partUploads

Review Comment:
   Aren't we cancelling all the uploads here ?
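
For reference, explicit cancellation of the outstanding part-upload futures 
(the thing the comment asks about) would look roughly like this sketch; the 
class and method names are illustrative, and in the diff above the 
InterruptedException path now routes cleanup through abort():

{code}
import java.util.List;
import java.util.concurrent.Future;

/** Sketch: best-effort cancellation of sibling part uploads after a failure. */
public final class PartUploadCanceller {

  private PartUploadCanceller() {
  }

  /** Cancel every outstanding upload, interrupting those still running. */
  public static void cancelAll(List<? extends Future<?>> partUploads) {
    for (Future<?> upload : partUploads) {
      upload.cancel(true);   // no-op for uploads that already completed
    }
  }
}
{code}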





> S3A: Unable to recover from failure of multipart block upload attempt "Status 
> Code: 400; Error Code: RequestTimeout"
> 
>
> Key: HADOOP-19221
> URL: https://issues.apache.org/jira/browse/HADOOP-19221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> If a multipart PUT request fails for some reason (e.g. network error) then 
> all subsequent retry attempts fail with a 400 Response and ErrorCode 
> RequestTimeout.
> {code}
> Your socket connection to the server was not read from or written to within 
> the timeout period. Idle connections will be closed. (Service: Amazon S3; 
> Status Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended 
> Request ID:
> {code}
> The list of suppressed exceptions contains the root cause (the initial 
> failure was a 500); all retries failed to upload properly from the source 
> input stream {{RequestBody.fromInputStream(fileStream, size)}}.
> Hypothesis: the mark/reset stuff doesn't work for input streams. On the v1 
> sdk we would build a multipart block upload request passing in (file, offset, 
> length); the way we are now doing this doesn't recover.
> probably fixable by providing our own {{ContentStreamProvider}} 
> implementations for
> # file + offset + length
> # bytebuffer
> # byte array
> The sdk does have explicit support for the memory ones, but they copy the 
> data blocks first. We don't want that as it would double the memory 
> requirements of active blocks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19221) S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868534#comment-17868534
 ] 

ASF GitHub Bot commented on HADOOP-19221:
-

shameersss1 commented on code in PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#discussion_r1690765946


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/store/ByteBufferInputStream.java:
##
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.store;
+
+import java.io.EOFException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.nio.ByteBuffer;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FSExceptionMessages;
+import org.apache.hadoop.util.Preconditions;
+
+/**
+ * Provide an input stream from a byte buffer; supporting
+ * {@link #mark(int)}.
+ */
+public final class ByteBufferInputStream extends InputStream {
+  private static final Logger LOG =
+  LoggerFactory.getLogger(DataBlocks.class);

Review Comment:
   Shouldn't this be `ByteBufferInputStream.class`?
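
For context on the class's mark(int) support: the rewind semantics can be 
delegated to the buffer itself, since ByteBuffer carries its own mark. A 
minimal, self-contained demonstration (not the patch's code):

{code}
import java.nio.ByteBuffer;

/** Sketch: ByteBuffer's own mark/reset makes a rewindable stream cheap. */
public class ByteBufferMarkDemo {
  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.wrap("hello world".getBytes());
    buf.get();
    buf.get();                 // consume two bytes; position is now 2
    buf.mark();                // remember position 2
    buf.get();
    buf.get();                 // position is now 4
    buf.reset();               // back to position 2 -- no data copied
    System.out.println((char) buf.get());  // prints 'l'
  }
}
{code}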





> S3A: Unable to recover from failure of multipart block upload attempt "Status 
> Code: 400; Error Code: RequestTimeout"
> 
>
> Key: HADOOP-19221
> URL: https://issues.apache.org/jira/browse/HADOOP-19221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> If a multipart PUT request fails for some reason (e.g. network error) then 
> all subsequent retry attempts fail with a 400 Response and ErrorCode 
> RequestTimeout.
> {code}
> Your socket connection to the server was not read from or written to within 
> the timeout period. Idle connections will be closed. (Service: Amazon S3; 
> Status Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended 
> Request ID:
> {code}
> The list of suppressed exceptions contains the root cause (the initial 
> failure was a 500); all retries failed to upload properly from the source 
> input stream {{RequestBody.fromInputStream(fileStream, size)}}.
> Hypothesis: the mark/reset stuff doesn't work for input streams. On the v1 
> sdk we would build a multipart block upload request passing in (file, offset, 
> length); the way we are now doing this doesn't recover.
> probably fixable by providing our own {{ContentStreamProvider}} 
> implementations for
> # file + offset + length
> # bytebuffer
> # byte array
> The sdk does have explicit support for the memory ones, but they copy the 
> data blocks first. We don't want that as it would double the memory 
> requirements of active blocks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19221) S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"

2024-07-24 Thread Syed Shameerur Rahman (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868533#comment-17868533
 ] 

Syed Shameerur Rahman commented on HADOOP-19221:


[~ste...@apache.org] - It was a great analysis and a good catch. Sure, I will 
review the PR.

> S3A: Unable to recover from failure of multipart block upload attempt "Status 
> Code: 400; Error Code: RequestTimeout"
> 
>
> Key: HADOOP-19221
> URL: https://issues.apache.org/jira/browse/HADOOP-19221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> If a multipart PUT request fails for some reason (e.g. network error) then 
> all subsequent retry attempts fail with a 400 Response and ErrorCode 
> RequestTimeout.
> {code}
> Your socket connection to the server was not read from or written to within 
> the timeout period. Idle connections will be closed. (Service: Amazon S3; 
> Status Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended 
> Request ID:
> {code}
> The list of suppressed exceptions contains the root cause (the initial 
> failure was a 500); all retries failed to upload properly from the source 
> input stream {{RequestBody.fromInputStream(fileStream, size)}}.
> Hypothesis: the mark/reset stuff doesn't work for input streams. On the v1 
> sdk we would build a multipart block upload request passing in (file, offset, 
> length); the way we are now doing this doesn't recover.
> probably fixable by providing our own {{ContentStreamProvider}} 
> implementations for
> # file + offset + length
> # bytebuffer
> # byte array
> The sdk does have explicit support for the memory ones, but they copy the 
> data blocks first. We don't want that as it would double the memory 
> requirements of active blocks.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19236) Integration of Volcano Engine TOS in Hadoop.

2024-07-24 Thread Jinglun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868525#comment-17868525
 ] 

Jinglun commented on HADOOP-19236:
--

Hi [~hexiaoqiao], [~ste...@apache.org], [~leosun08], you are all experts in 
both hadoop and cloud. Could you give some advice on this integration work? 
Thanks very much.

> Integration of Volcano Engine TOS in Hadoop.
> 
>
> Key: HADOOP-19236
> URL: https://issues.apache.org/jira/browse/HADOOP-19236
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, tools
>Reporter: Jinglun
>Priority: Major
> Attachments: Integration of Volcano Engine TOS in Hadoop.pdf
>
>
> Volcano Engine is a fast-growing cloud vendor launched by ByteDance, and TOS 
> is the object storage service of Volcano Engine. A common pattern is to store 
> data in TOS and run Hadoop/Spark/Flink applications that access it. But there 
> is no native support for TOS in hadoop, so it is not easy for users to build 
> their Big Data systems on TOS.
>  
> This work aims to integrate TOS with Hadoop to help users run their 
> applications on TOS. Users only need to do some simple configuration, and 
> their applications can read/write TOS without any code change. This work is 
> similar to the AWS S3, AzureBlob, AliyunOSS, Tencent COS and HuaweiCloud 
> Object Storage support in Hadoop.
>  
>  Please see the attached document "Integration of Volcano Engine TOS in 
> Hadoop" for more details.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19236) Integration of Volcano Engine TOS in Hadoop.

2024-07-24 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HADOOP-19236:
-
Description: 
Volcano Engine is a fast-growing cloud vendor launched by ByteDance, and TOS is 
the object storage service of Volcano Engine. A common pattern is to store data 
in TOS and run Hadoop/Spark/Flink applications that access it. But there is no 
native support for TOS in hadoop, so it is not easy for users to build their 
Big Data systems on TOS.
 
This work aims to integrate TOS with Hadoop to help users run their 
applications on TOS. Users only need to do some simple configuration, and 
their applications can read/write TOS without any code change. This work is 
similar to the AWS S3, AzureBlob, AliyunOSS, Tencent COS and HuaweiCloud 
Object Storage support in Hadoop.

 

 Please see the attached document "Integration of Volcano Engine TOS in Hadoop" 
for more details.

  was:
Volcano Engine is a fast-growing cloud vendor launched by ByteDance, and TOS is 
the object storage service of Volcano Engine. A common pattern is to store data 
in TOS and run Hadoop/Spark/Flink applications that access it. But there is no 
native support for TOS in hadoop, so it is not easy for users to build their 
Big Data systems on TOS.
 
This work aims to integrate TOS with Hadoop to help users run their 
applications on TOS. Users only need to do some simple configuration, and 
their applications can read/write TOS without any code change. This work is 
similar to the AWS S3, AzureBlob, AliyunOSS, Tencent COS and HuaweiCloud 
Object Storage support in Hadoop.


> Integration of Volcano Engine TOS in Hadoop.
> 
>
> Key: HADOOP-19236
> URL: https://issues.apache.org/jira/browse/HADOOP-19236
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, tools
>Reporter: Jinglun
>Priority: Major
> Attachments: Integration of Volcano Engine TOS in Hadoop.pdf
>
>
> Volcano Engine is a fast-growing cloud vendor launched by ByteDance, and TOS 
> is the object storage service of Volcano Engine. A common pattern is to store 
> data in TOS and run Hadoop/Spark/Flink applications that access it. But there 
> is no native support for TOS in hadoop, so it is not easy for users to build 
> their Big Data systems on TOS.
>  
> This work aims to integrate TOS with Hadoop to help users run their 
> applications on TOS. Users only need to do some simple configuration, and 
> their applications can read/write TOS without any code change. This work is 
> similar to the AWS S3, AzureBlob, AliyunOSS, Tencent COS and HuaweiCloud 
> Object Storage support in Hadoop.
>  
>  Please see the attached document "Integration of Volcano Engine TOS in 
> Hadoop" for more details.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19236) Integration of Volcano Engine TOS in Hadoop.

2024-07-24 Thread Jinglun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jinglun updated HADOOP-19236:
-
Attachment: Integration of Volcano Engine TOS in Hadoop.pdf

> Integration of Volcano Engine TOS in Hadoop.
> 
>
> Key: HADOOP-19236
> URL: https://issues.apache.org/jira/browse/HADOOP-19236
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, tools
>Reporter: Jinglun
>Priority: Major
> Attachments: Integration of Volcano Engine TOS in Hadoop.pdf
>
>
> Volcano Engine is a fast-growing cloud vendor launched by ByteDance, and TOS 
> is the object storage service of Volcano Engine. A common pattern is to store 
> data in TOS and run Hadoop/Spark/Flink applications that access it. But there 
> is no native support for TOS in hadoop, so it is not easy for users to build 
> their Big Data systems on TOS.
>  
> This work aims to integrate TOS with Hadoop to help users run their 
> applications on TOS. Users only need to do some simple configuration, and 
> their applications can read/write TOS without any code change. This work is 
> similar to the AWS S3, AzureBlob, AliyunOSS, Tencent COS and HuaweiCloud 
> Object Storage support in Hadoop.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19236) Integration of Volcano Engine TOS in Hadoop.

2024-07-24 Thread Jinglun (Jira)
Jinglun created HADOOP-19236:


 Summary: Integration of Volcano Engine TOS in Hadoop.
 Key: HADOOP-19236
 URL: https://issues.apache.org/jira/browse/HADOOP-19236
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs, tools
Reporter: Jinglun


Volcano Engine is a fast-growing cloud vendor launched by ByteDance, and TOS is 
the object storage service of Volcano Engine. A common pattern is to store data 
in TOS and run Hadoop/Spark/Flink applications that access it. But there is no 
native support for TOS in hadoop, so it is not easy for users to build their 
Big Data systems on TOS.
 
This work aims to integrate TOS with Hadoop to help users run their 
applications on TOS. Users only need to do some simple configuration, and 
their applications can read/write TOS without any code change. This work is 
similar to the AWS S3, AzureBlob, AliyunOSS, Tencent COS and HuaweiCloud 
Object Storage support in Hadoop.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19221) S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868442#comment-17868442
 ] 

ASF GitHub Bot commented on HADOOP-19221:
-

hadoop-yetus commented on PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#issuecomment-2248543856

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 9 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   7m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   2m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   2m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   8m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   7m 57s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 50s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m  7s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 147m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6938/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6938 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 2e841e87f25a 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1fb04e9bef5b1d334472d40cd2e6b8e45d9b56d1 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6938/12/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U

[jira] [Commented] (HADOOP-19161) S3A: option "fs.s3a.performance.flags" to take list of performance flags

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868438#comment-17868438
 ] 

ASF GitHub Bot commented on HADOOP-19161:
-

hadoop-yetus commented on PR #6789:
URL: https://github.com/apache/hadoop/pull/6789#issuecomment-2248536907

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   8m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   2m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 20s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   8m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   8m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m  4s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 46s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 37s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 10s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 150m 45s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.46 ServerAPI=1.46 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6789/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6789 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 0378336d9b15 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 
09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ce8dac0c544508a002790d4a8c5e09251b6500ff |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6789/19/testReport/ |
   | Max. process+thread count | 1282 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U

[jira] [Commented] (HADOOP-19235) IPC client uses CompletableFuture to support asynchronous operations.

2024-07-24 Thread Jian Zhang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868437#comment-17868437
 ] 

Jian Zhang commented on HADOOP-19235:
-

[~szetszwo]  thank you for your review!

> IPC client uses CompletableFuture to support asynchronous operations.
> -
>
> Key: HADOOP-19235
> URL: https://issues.apache.org/jira/browse/HADOOP-19235
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Jian Zhang
>Assignee: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
> Attachments: HADOOP-19235.patch
>
>
> h3. Description
> The implementation of the asynchronous ipc.Client mainly builds on prior 
> work such as HADOOP-13226, HDFS-10224, etc.
> However, the existing implementation does not support `CompletableFuture`; 
> instead, it relies on setting up callbacks, which can lead to the "callback 
> hell" problem. Using `CompletableFuture` can better organize asynchronous 
> callbacks. Therefore, on the basis of the existing implementation, by using 
> `CompletableFuture`, once the `client.call` is completed, the asynchronous 
> thread handles the response of this call without blocking the main thread.
>  
> *Test*
> new UT  TestAsyncIPC#testAsyncCallWithCompletableFuture()
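
As a shape for the pattern being described, a generic sketch follows; it is 
not the actual ipc.Client code, and the Callback interface and callAsync 
method are illustrative stand-ins for the callback-style call:

{code}
import java.util.concurrent.CompletableFuture;

/** Sketch: adapt a callback-style async call to CompletableFuture. */
public class AsyncCallSketch {

  interface Callback<T> {
    void done(T value);
    void failed(Throwable t);
  }

  /** Stand-in for a callback-based client call. */
  static void callAsync(String request, Callback<String> cb) {
    new Thread(() -> cb.done("response to " + request)).start();
  }

  /** Wrap the callback API in a CompletableFuture. */
  static CompletableFuture<String> call(String request) {
    CompletableFuture<String> f = new CompletableFuture<>();
    callAsync(request, new Callback<String>() {
      @Override public void done(String v) { f.complete(v); }
      @Override public void failed(Throwable t) { f.completeExceptionally(t); }
    });
    return f;
  }

  public static void main(String[] args) throws Exception {
    // chaining replaces nested callbacks ("callback hell")
    String out = call("getBlockLocations")
        .thenApply(String::toUpperCase)
        .get();
    System.out.println(out);
  }
}
{code}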



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19235) IPC client uses CompletableFuture to support asynchronous operations.

2024-07-24 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze updated HADOOP-19235:

Fix Version/s: 3.5.0
 Hadoop Flags: Reviewed
 Assignee: Jian Zhang
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

The pull request is now merged.  Thanks, [~Keepromise]!

> IPC client uses CompletableFuture to support asynchronous operations.
> -
>
> Key: HADOOP-19235
> URL: https://issues.apache.org/jira/browse/HADOOP-19235
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Jian Zhang
>Assignee: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
> Attachments: HADOOP-19235.patch
>
>
> h3. Description
> The implementation of the asynchronous ipc.Client mainly builds on prior 
> work such as HADOOP-13226, HDFS-10224, etc.
> However, the existing implementation does not support `CompletableFuture`; 
> instead, it relies on setting up callbacks, which can lead to the "callback 
> hell" problem. Using `CompletableFuture` can better organize asynchronous 
> callbacks. Therefore, on the basis of the existing implementation, by using 
> `CompletableFuture`, once the `client.call` is completed, the asynchronous 
> thread handles the response of this call without blocking the main thread.
>  
> *Test*
> new UT  TestAsyncIPC#testAsyncCallWithCompletableFuture()



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19235) IPC client uses CompletableFuture to support asynchronous operations.

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868432#comment-17868432
 ] 

ASF GitHub Bot commented on HADOOP-19235:
-

szetszwo merged PR #6888:
URL: https://github.com/apache/hadoop/pull/6888




> IPC client uses CompletableFuture to support asynchronous operations.
> -
>
> Key: HADOOP-19235
> URL: https://issues.apache.org/jira/browse/HADOOP-19235
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-19235.patch
>
>
> h3. Description
> The implementation of the asynchronous ipc.Client mainly builds on prior 
> work such as HADOOP-13226, HDFS-10224, etc.
> However, the existing implementation does not support `CompletableFuture`; 
> instead, it relies on setting up callbacks, which can lead to the "callback 
> hell" problem. Using `CompletableFuture` can better organize asynchronous 
> callbacks. Therefore, on the basis of the existing implementation, by using 
> `CompletableFuture`, once the `client.call` is completed, the asynchronous 
> thread handles the response of this call without blocking the main thread.
>  
> *Test*
> new UT  TestAsyncIPC#testAsyncCallWithCompletableFuture()



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19120) [ABFS]: ApacheHttpClient adaptation as network library

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868429#comment-17868429
 ] 

ASF GitHub Bot commented on HADOOP-19120:
-

hadoop-yetus commented on PR #6959:
URL: https://github.com/apache/hadoop/pull/6959#issuecomment-2248484596

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  4s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 21 new or modified test files.  |
    _ branch-3.4 Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 12s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  34m 25s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |  27m 11s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  21m 35s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   5m  8s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  mvnsite  |   2m 27s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   2m  0s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 59s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  shadedclient  |  38m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  24m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  22m 57s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 58s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6959/2/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 18 unchanged - 0 fixed = 20 total (was 
18)  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 51s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 33s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 284m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6959/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6959 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux a5589c6342ac 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / 48ba0ad1fa40d984dd1b21892fb3698514852564 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08

[jira] [Commented] (HADOOP-19120) [ABFS]: ApacheHttpClient adaptation as network library

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868428#comment-17868428
 ] 

ASF GitHub Bot commented on HADOOP-19120:
-

hadoop-yetus commented on PR #6959:
URL: https://github.com/apache/hadoop/pull/6959#issuecomment-2248483299

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 21 new or modified test files.  |
    _ branch-3.4 Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  34m 48s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |  26m 39s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  23m 40s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 54s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  mvnsite  |   2m 35s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 59s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  shadedclient  |  37m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  24m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  22m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 49s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6959/3/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 18 unchanged - 0 fixed = 20 total (was 
18)  |
   | +1 :green_heart: |  mvnsite  |   2m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 44s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 35s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 282m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6959/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6959 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux ac10b97b1a59 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / 48ba0ad1fa40d984dd1b21892fb3698514852564 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08

[jira] [Commented] (HADOOP-19221) S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868413#comment-17868413
 ] 

ASF GitHub Bot commented on HADOOP-19221:
-

hadoop-yetus commented on PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#issuecomment-2248371220

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 9 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  16m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  16m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  16m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 56s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 55s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 244m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.46 ServerAPI=1.46 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6938/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6938 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 167f5872e75e 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 
09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b18794223f3602fc565ae3783a7b6d99bf15b70e |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6938/11/testReport/ |
   | Max. process+thread count | 1263 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U

[jira] [Commented] (HADOOP-19161) S3A: option "fs.s3a.performance.flags" to take list of performance flags

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868392#comment-17868392
 ] 

ASF GitHub Bot commented on HADOOP-19161:
-

steveloughran commented on code in PR #6789:
URL: https://github.com/apache/hadoop/pull/6789#discussion_r1689944706


##
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md:
##
@@ -276,7 +283,45 @@ Fix: Use one of the dedicated [S3A 
Committers](committers.md).
 
 ##  Options to Tune
 
-###  Thread and connection pool settings.
+###  Performance Flags: `fs.s3a.performance.flag`
+
+This option takes a comma-separated list of performance flags.
+View it as the equivalent of the `-O` optimization flag list C/C++ 
compilers offer.
+That is, a complicated list of options which deliver speed if the person 
setting them
+understands the risks.

Review Comment:
   no, it's an individual who takes the blame



##
hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/performance.md:
##
@@ -276,7 +283,45 @@ Fix: Use one of the dedicated [S3A 
Committers](committers.md).
 
 ##  Options to Tune
 
-###  Thread and connection pool settings.
+###  Performance Flags: `fs.s3a.performance.flag`
+
+This option takes a comma-separated list of performance flags.
+View it as the equivalent of the `-O` optimization flag list C/C++ 
compilers offer.
+That is, a complicated list of options which deliver speed if the person 
setting them
+understands the risks.
+
+* The list of flags MAY change across releases.
+* The semantics of specific flags SHOULD NOT change across releases.
+* If an option is to be tuned which may relax semantics, a new option MUST be 
defined.
+* Unknown flags are ignored; this is to avoid compatibility problems.
+* The option `*` means "turn everything on". This is implicitly unstable 
across releases.
+
+| *Option* | *Meaning*  | Since |
+|--||:--|
+| `create` | Create Performance | 3.4.1 |

Review Comment:
   no





> S3A: option "fs.s3a.performance.flags" to take list of performance flags
> 
>
> Key: HADOOP-19161
> URL: https://issues.apache.org/jira/browse/HADOOP-19161
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> HADOOP-19072 shows we want to add more optimisations than that of 
> HADOOP-18930.
> * Extending the new optimisations to the existing option is brittle
> * Adding explicit options for each feature gets complex fast.
> Proposed
> * A new class S3APerformanceFlags keeps all the flags
> * it builds this from a string[] of values, which can be extracted from 
> getConf(),
> * and it can also support a "*" option to mean "everything"
> * this class can also be handed off to hasPathCapability() and do the right 
> thing.
> Proposed optimisations
> * create file (we will hook up HADOOP-18930)
> * mkdir (HADOOP-19072)
> * delete (probe for parent path)
> * rename (probe for source path)
> We could think of more, with different names, later.
> The goal is to make it possible to strip out every HTTP request we do for 
> safety/posix compliance, so applications have the option of turning off what 
> they don't need.
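
To ground the proposal, here is a minimal sketch of how an application might set and probe the flags. It assumes the option name from the issue title (`fs.s3a.performance.flags`) and a purely hypothetical capability string for the `hasPathCapability()` wiring; nothing here is the committed API.

{code}
// A minimal sketch, not the committed API: the option name follows the issue
// title, and "fs.s3a.capability.performance.create" is a hypothetical
// capability string illustrating the hasPathCapability() proposal above.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PerformanceFlagsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Enable only the optimisations this workload can tolerate;
    // "*" would mean "turn everything on", implicitly unstable across releases.
    conf.set("fs.s3a.performance.flags", "create,mkdir");
    try (FileSystem fs = FileSystem.get(new URI("s3a://example-bucket/"), conf)) {
      // Probe whether the create optimisation is active on this store.
      boolean createEnabled = fs.hasPathCapability(
          new Path("/"), "fs.s3a.capability.performance.create");
      System.out.println("create performance flag active: " + createEnabled);
    }
  }
}
{code}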



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19231) add JacksonUtil to centralise some code

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868371#comment-17868371
 ] 

ASF GitHub Bot commented on HADOOP-19231:
-

hadoop-yetus commented on PR #6953:
URL: https://github.com/apache/hadoop/pull/6953#issuecomment-2247956464

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  18m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   5m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  17m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  15m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |  15m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | -1 :x: |  spotbugs  |   0m 59s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   1m 13s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/4/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  39m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  40m  2s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 35s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  10m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  18m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  18m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 47s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/4/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 637 unchanged - 4 fixed = 639 total (was 
641)  |
   | +1 :green_heart: |  mvnsite  |  17m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |  15m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |  15m  3s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |  31m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 28s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 49s |  |  hadoop-kms in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 41s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 261m 55s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR

[jira] [Commented] (HADOOP-19187) ABFS: [FnsOverBlob]Making AbfsClient Abstract for supporting both DFS and Blob Endpoint

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868361#comment-17868361
 ] 

ASF GitHub Bot commented on HADOOP-19187:
-

hadoop-yetus commented on PR #6879:
URL: https://github.com/apache/hadoop/pull/6879#issuecomment-2247821581

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m  6s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  39m 27s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6879/11/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 2 new + 15 unchanged - 3 
fixed = 17 total (was 18)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  
hadoop-tools_hadoop-azure-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 with 
JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 generated 0 new + 11 unchanged 
- 4 fixed = 11 total (was 15)  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  
hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_412-8u412-ga-1~20.04.1-b08 with 
JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 generated 0 new + 11 
unchanged - 4 fixed = 11 total (was 15)  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 34s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 160m 16s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6879/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6879 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux dd3de87570ed 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b73f066910772b206083a454d627ab31dd46bd71 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08

[jira] [Commented] (HADOOP-19120) [ABFS]: ApacheHttpClient adaptation as network library

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868348#comment-17868348
 ] 

ASF GitHub Bot commented on HADOOP-19120:
-

hadoop-yetus commented on PR #6959:
URL: https://github.com/apache/hadoop/pull/6959#issuecomment-2247729029

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 21 new or modified test files.  |
    _ branch-3.4 Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 33s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 42s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |  19m 39s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  18m 12s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 43s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   2m  0s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 44s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  shadedclient  |  39m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  18m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  18m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 40s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6959/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 4 new + 18 unchanged - 0 fixed = 22 total (was 
18)  |
   | +1 :green_heart: |  mvnsite  |   2m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m  1s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 33s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 280m 56s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6959/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6959 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux d1f7dd27beb9 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / 99c6095b404d3a5e5a89ad529449eeefe1509980 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08

[jira] [Commented] (HADOOP-19221) S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868342#comment-17868342
 ] 

ASF GitHub Bot commented on HADOOP-19221:
-

steveloughran commented on PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#issuecomment-2247701393

   s3 london with: -Dparallel-tests -DtestsThreadCount=8 -Dscale
   
   This is ready to be reviewed. @mukund-thakur, @HarshitGupta11 and 
@shameersss1 could you all look at this?
   




> S3A: Unable to recover from failure of multipart block upload attempt "Status 
> Code: 400; Error Code: RequestTimeout"
> 
>
> Key: HADOOP-19221
> URL: https://issues.apache.org/jira/browse/HADOOP-19221
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> If a multipart PUT request fails for some reason (e.g. network error) then 
> all subsequent retry attempts fail with a 400 Response and ErrorCode 
> RequestTimeout .
> {code}
> Your socket connection to the server was not read from or written to within 
> the timeout period. Idle connections will be closed. (Service: Amazon S3; 
> Status Code: 400; Error Code: RequestTimeout; Request ID:; S3 Extended 
> Request ID:
> {code}
> The list of suppressed exceptions contains the root cause (the initial 
> failure was a 500); all retries failed to upload properly from the source 
> input stream {{RequestBody.fromInputStream(fileStream, size)}}.
> Hypothesis: the mark/reset stuff doesn't work for input streams. On the v1 
> sdk we would build a multipart block upload request passing in (file, offset, 
> length); the way we are now doing this doesn't recover.
> probably fixable by providing our own {{ContentStreamProvider}} 
> implementations for
> # file + offset + length
> # bytebuffer
> # byte array
> The sdk does have explicit support for the memory ones, but they copy the 
> data blocks first. We don't want that, as it would double the memory 
> requirements of active blocks.
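
As a rough sketch of the proposed fix, the following implements the first variant (file + offset + length) against the AWS SDK v2 `ContentStreamProvider` interface. The class name and the use of commons-io's `BoundedInputStream` are illustrative assumptions, not the actual Hadoop patch.

{code}
// Hypothetical sketch of a replayable file-slice provider; not the Hadoop patch.
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import org.apache.commons.io.input.BoundedInputStream;
import software.amazon.awssdk.http.ContentStreamProvider;

class FileSliceContentStreamProvider implements ContentStreamProvider {
  private final Path file;
  private final long offset;
  private final long length;

  FileSliceContentStreamProvider(Path file, long offset, long length) {
    this.file = file;
    this.offset = offset;
    this.length = length;
  }

  @Override
  public InputStream newStream() {
    // Every retry opens a fresh stream over the same (file, offset, length)
    // slice, so a failed part upload can be replayed without relying on
    // mark/reset and without copying the block into memory.
    try {
      FileChannel channel = FileChannel.open(file); // READ by default
      channel.position(offset);
      return new BoundedInputStream(Channels.newInputStream(channel), length);
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
{code}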



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18542) Azure Token provider requires tenant and client IDs despite being optional

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868312#comment-17868312
 ] 

ASF GitHub Bot commented on HADOOP-18542:
-

hadoop-yetus commented on PR #4262:
URL: https://github.com/apache/hadoop/pull/4262#issuecomment-2247374679

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 12s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 57s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  81m 27s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.46 ServerAPI=1.46 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4262/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4262 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux dd6531721ab0 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 
09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b32b2218e70a8ec22762ad61e61912dd6a988823 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4262/15/testReport/ |
   | Max. process+thread count | 551 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4262/15/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Azure Token provider requires tenant and client IDs despite be

[jira] [Commented] (HADOOP-18542) Azure Token provider requires tenant and client IDs despite being optional

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868299#comment-17868299
 ] 

ASF GitHub Bot commented on HADOOP-18542:
-

CLevasseur commented on code in PR #4262:
URL: https://github.com/apache/hadoop/pull/4262#discussion_r1689341481


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java:
##
@@ -444,6 +439,30 @@ private static void testMissingConfigKey(final 
AbfsConfiguration abfsConf,
 () -> abfsConf.getTokenProvider().getClass().getTypeName()));
   }
 
+  @Test
+  public void testClientAndTenantIdOptionalWhenUsingMsiTokenProvider() throws 
Throwable {
+  final String accountName = "account";
+  final Configuration conf = new Configuration();
+  final AbfsConfiguration abfsConf = new AbfsConfiguration(conf, 
accountName);
+
+  final String accountNameSuffix = "." + abfsConf.getAccountName();
+  String authKey = FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME + 
accountNameSuffix;
+  String providerClassKey = "";
+  String providerClassValue = "";
+
+  providerClassKey = FS_AZURE_ACCOUNT_TOKEN_PROVIDER_TYPE_PROPERTY_NAME + 
accountNameSuffix;
+  providerClassValue = TEST_OAUTH_MSI_TOKEN_PROVIDER_CLASS_CONFIG;
+
+  abfsConf.set(authKey, AuthType.OAuth.toString());
+  abfsConf.set(providerClassKey, providerClassValue);
+
+  AccessTokenProvider tokenProviderTypeName = abfsConf.getTokenProvider();
+  // Test that we managed to instantiate an MsiTokenProvider without 
having to define the tenant and client ID.
+  // Those 2 fields are optional as they can automatically be determined 
by the Azure Metadata service when
+  // running on an Azure VM.
+  
Assertions.assertThat(tokenProviderTypeName).isInstanceOf(MsiTokenProvider.class);

Review Comment:
   done





> Azure Token provider requires tenant and client IDs despite being optional
> --
>
> Key: HADOOP-18542
> URL: https://issues.apache.org/jira/browse/HADOOP-18542
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.2, 3.3.3, 3.3.4
>Reporter: Carl
>Priority: Major
>  Labels: pull-request-available
>
> The `AbfsConfiguration` class requires that we provide a tenant and client ID 
> when using the `MsiTokenProvider` class to fetch an authentication token. The 
> bug is that those fields are not required by the Azure API, which can infer 
> those fields when the call is made from an Azure instance.
> The fix is to make tenant and client ID optional when getting an Azure token 
> from the Azure Metadata Service.
> A fix has been submitted here: [https://github.com/apache/hadoop/pull/4262]
> The bug was introduced with HADOOP-17725  
> ([https://github.com/apache/hadoop/pull/3041/files])
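
For illustration, a minimal sketch of the configuration this fix enables, using the standard ABFS OAuth keys with a placeholder account name; with `MsiTokenProvider` the tenant and client ID keys are simply left unset, and the Azure Instance Metadata Service supplies them on an Azure VM.

{code}
// A sketch of the MSI configuration; "myaccount" is a placeholder.
import org.apache.hadoop.conf.Configuration;

public class MsiAuthSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    conf.set("fs.azure.account.auth.type.myaccount.dfs.core.windows.net",
        "OAuth");
    conf.set("fs.azure.account.oauth.provider.type.myaccount.dfs.core.windows.net",
        "org.apache.hadoop.fs.azurebfs.oauth2.MsiTokenProvider");
    // Note: fs.azure.account.oauth2.msi.tenant and
    // fs.azure.account.oauth2.client.id are deliberately not set; on an
    // Azure VM the metadata service infers them, which is what this fix allows.
  }
}
{code}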



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19120) [ABFS]: ApacheHttpClient adaptation as network library

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868274#comment-17868274
 ] 

ASF GitHub Bot commented on HADOOP-19120:
-

saxenapranav commented on PR #6959:
URL: https://github.com/apache/hadoop/pull/6959#issuecomment-2247090911

   --
    AGGREGATED TEST RESULT 
   
   
   HNS-OAuth
   
   
   [WARNING] Tests run: 153, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 644, Failures: 0, Errors: 0, Skipped: 82
   [WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 57
   
   
   HNS-SharedKey
   
   
   [WARNING] Tests run: 153, Failures: 0, Errors: 0, Skipped: 3
   [WARNING] Tests run: 644, Failures: 0, Errors: 0, Skipped: 34
   [WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 44
   
   
   NonHNS-SharedKey
   
   
   [WARNING] Tests run: 153, Failures: 0, Errors: 0, Skipped: 9
   [WARNING] Tests run: 628, Failures: 0, Errors: 0, Skipped: 274
   [WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 47
   
   
   AppendBlob-HNS-OAuth
   
   
   [WARNING] Tests run: 153, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 644, Failures: 0, Errors: 0, Skipped: 84
   [WARNING] Tests run: 424, Failures: 0, Errors: 0, Skipped: 81
   
   Time taken: 28 mins 30 secs.
   azureuser@pranav-ind-vm:~/hadoop/hadoop-tools/hadoop-azure$ git log
   commit 99c6095b404d3a5e5a89ad529449eeefe1509980 (HEAD -> 
saxenapranav/abfs-apachehttpclient-3.4, 
origin/saxenapranav/abfs-apachehttpclient-3.4)
   Author: Pranav Saxena <>
   Date:   Tue Jul 23 21:42:37 2024 -0700
   
   cherrypick of b60497ff41e1dc149d1610f4cc6ea4e0609f9946 : 
https://github.com/apache/hadoop/commit/b60497ff41e1dc149d1610f4cc6ea4e0609f9946
 :  ApacheHttpClient adaptation in ABFS.
   https://github.com/apache/hadoop/pull/6633




> [ABFS]: ApacheHttpClient adaptation as network library
> --
>
> Key: HADOOP-19120
> URL: https://issues.apache.org/jira/browse/HADOOP-19120
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.5.0
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Apache HttpClient is more feature-rich and flexible, and gives the 
> application more granular control over networking parameters.
> ABFS currently relies on the JDK-net library. This library is managed by 
> OpenJDK and has no performance problem. However, it limits the application's 
> control over networking, and there are very few APIs and hooks exposed that 
> the application can use to get metrics or to choose which connections 
> should be reused, and when. ApacheHttpClient will give important hooks to fetch 
> important metrics and control networking parameters.
> A custom implementation of connection-pool is used. The implementation is 
> adapted from the JDK8 connection pooling. Reasons for doing it:
> 1. The PoolingHttpClientConnectionManager heuristic caches all the reusable 
> connections it has created. JDK's implementation only caches a limited number 
> of connections. The limit is given by the JVM system property 
> "http.maxConnections". If there is no system property, it defaults to 5. 
> Connection-establishment latency increased when all the connections were 
> cached. Hence the pooling heuristic of the JDK netlib was adapted.
> 2. PoolingHttpClientConnectionManager expects the application to 
> provide `setMaxPerRoute` and `setMaxTotal`, which the implementation uses as 
> the total number of connections it can create. For applications using ABFS, it 
> is not feasible to provide a value in the initialisation of the 
> connectionManager. JDK's implementation has no cap on the number of 
> connections it can have open at a given moment. Hence the pooling 
> heuristic of the JDK netlib was adapted.
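
For contrast with the custom pool described above, here is a minimal sketch of the up-front sizing that the stock `PoolingHttpClientConnectionManager` (HttpClient 4.5 API) expects; the numbers are arbitrary examples, which is exactly the value ABFS cannot choose at initialisation.

{code}
// Sketch of the stock HttpClient 4.5 pool sizing contrasted above; the
// caps are arbitrary examples, not values from the Hadoop change.
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class PoolSizingSketch {
  public static void main(String[] args) throws Exception {
    PoolingHttpClientConnectionManager cm =
        new PoolingHttpClientConnectionManager();
    cm.setMaxTotal(100);           // cap across all routes
    cm.setDefaultMaxPerRoute(20);  // cap per route (e.g. per storage endpoint)
    try (CloseableHttpClient client = HttpClients.custom()
        .setConnectionManager(cm)
        .build()) {
      // Requests beyond the caps queue until a pooled connection frees up.
    }
  }
}
{code}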



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19120) [ABFS]: ApacheHttpClient adaptation as network library

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868272#comment-17868272
 ] 

ASF GitHub Bot commented on HADOOP-19120:
-

saxenapranav opened a new pull request, #6959:
URL: https://github.com/apache/hadoop/pull/6959

   JIRA: https://issues.apache.org/jira/browse/HADOOP-19120
   
   trunk pr: https://github.com/apache/hadoop/pull/6633
   
   Apache HttpClient 4.5.x is the new default implementation of HTTP 
connections;
   this supports a large configurable pool of connections along with
   the ability to limit their lifespan.
   
   The networking library can be chosen using the configuration
   option fs.azure.networking.library
   
   The supported values are
   - APACHE_HTTP_CLIENT : Use Apache HttpClient [Default]
   - JDK_HTTP_URL_CONNECTION : Use JDK networking library
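
A minimal sketch of selecting the library programmatically, assuming the option name and values quoted above:

{code}
// Sketch only; the option name and values are taken from the PR description.
import org.apache.hadoop.conf.Configuration;

public class NetworkingLibrarySketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Default in this change: Apache HttpClient.
    conf.set("fs.azure.networking.library", "APACHE_HTTP_CLIENT");
    // To fall back to the JDK networking library:
    // conf.set("fs.azure.networking.library", "JDK_HTTP_URL_CONNECTION");
  }
}
{code}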
   




> [ABFS]: ApacheHttpClient adaptation as network library
> --
>
> Key: HADOOP-19120
> URL: https://issues.apache.org/jira/browse/HADOOP-19120
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.5.0
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Apache HttpClient is more feature-rich and flexible, and gives the 
> application more granular control over networking parameters.
> ABFS currently relies on the JDK-net library. This library is managed by 
> OpenJDK and has no performance problem. However, it limits the application's 
> control over networking, and there are very few APIs and hooks exposed that 
> the application can use to get metrics or to choose which connections 
> should be reused, and when. ApacheHttpClient will give important hooks to fetch 
> important metrics and control networking parameters.
> A custom implementation of connection-pool is used. The implementation is 
> adapted from the JDK8 connection pooling. Reasons for doing it:
> 1. The PoolingHttpClientConnectionManager heuristic caches all the reusable 
> connections it has created. JDK's implementation only caches a limited number 
> of connections. The limit is given by the JVM system property 
> "http.maxConnections". If there is no system property, it defaults to 5. 
> Connection-establishment latency increased when all the connections were 
> cached. Hence the pooling heuristic of the JDK netlib was adapted.
> 2. PoolingHttpClientConnectionManager expects the application to 
> provide `setMaxPerRoute` and `setMaxTotal`, which the implementation uses as 
> the total number of connections it can create. For applications using ABFS, it 
> is not feasible to provide a value in the initialisation of the 
> connectionManager. JDK's implementation has no cap on the number of 
> connections it can have open at a given moment. Hence the pooling 
> heuristic of the JDK netlib was adapted.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18542) Azure Token provider requires tenant and client IDs despite being optional

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868259#comment-17868259
 ] 

ASF GitHub Bot commented on HADOOP-18542:
-

anujmodi2021 commented on code in PR #4262:
URL: https://github.com/apache/hadoop/pull/4262#discussion_r1689199461


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAccountConfiguration.java:
##
@@ -444,6 +439,30 @@ private static void testMissingConfigKey(final 
AbfsConfiguration abfsConf,
 () -> abfsConf.getTokenProvider().getClass().getTypeName()));
   }
 
+  @Test
+  public void testClientAndTenantIdOptionalWhenUsingMsiTokenProvider() throws 
Throwable {
+  final String accountName = "account";
+  final Configuration conf = new Configuration();
+  final AbfsConfiguration abfsConf = new AbfsConfiguration(conf, 
accountName);
+
+  final String accountNameSuffix = "." + abfsConf.getAccountName();
+  String authKey = FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME + 
accountNameSuffix;
+  String providerClassKey = "";
+  String providerClassValue = "";
+
+  providerClassKey = FS_AZURE_ACCOUNT_TOKEN_PROVIDER_TYPE_PROPERTY_NAME + 
accountNameSuffix;
+  providerClassValue = TEST_OAUTH_MSI_TOKEN_PROVIDER_CLASS_CONFIG;
+
+  abfsConf.set(authKey, AuthType.OAuth.toString());
+  abfsConf.set(providerClassKey, providerClassValue);
+
+  AccessTokenProvider tokenProviderTypeName = abfsConf.getTokenProvider();
+  // Test that we managed to instantiate an MsiTokenProvider without 
having to define the tenant and client ID.
+  // Those 2 fields are optional as they can automatically be determined 
by the Azure Metadata service when
+  // running on an Azure VM.
+  
Assertions.assertThat(tokenProviderTypeName).isInstanceOf(MsiTokenProvider.class);

Review Comment:
   Nit: Add a description to the assertion so that if it fails, users know the 
expectation. Something like:
   `Assertions.assertThat(tokenProviderTypeName).describedAs("Token Provider 
Should be MsiTokenProvider").isInstanceOf(MsiTokenProvider.class);`





> Azure Token provider requires tenant and client IDs despite being optional
> --
>
> Key: HADOOP-18542
> URL: https://issues.apache.org/jira/browse/HADOOP-18542
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.2, 3.3.3, 3.3.4
>Reporter: Carl
>Priority: Major
>  Labels: pull-request-available
>
> The `AbfsConfiguration` class requires that we provide a tenant and client ID 
> when using the `MsiTokenProvider` class to fetch an authentication token. The 
> bug is that those fields are not required by the Azure API, which can infer 
> those fields when the call is made from an Azure instance.
> The fix is to make tenant and client ID optional when getting an Azure token 
> from the Azure Metadata Service.
> A fix has been submitted here: [https://github.com/apache/hadoop/pull/4262]
> The bug was introduced with HADOOP-17725  
> ([https://github.com/apache/hadoop/pull/3041/files])



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18542) Azure Token provider requires tenant and client IDs despite being optional

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868258#comment-17868258
 ] 

ASF GitHub Bot commented on HADOOP-18542:
-

anujmodi2021 commented on PR #4262:
URL: https://github.com/apache/hadoop/pull/4262#issuecomment-2246988401

   > I've rebased the branch onto the latest trunk, you should be good to run 
the integration tests
   
   Thanks @CLevasseur 
   I have run the tests and added the results above.
   
   The failures reported are known and we have a PR already out that fixes 
those tests.
   This is good to merge from my side. 
   
   I see there are some PR checks failing. They might be intermittent; you can 
make a small commit to re-trigger them.




> Azure Token provider requires tenant and client IDs despite being optional
> --
>
> Key: HADOOP-18542
> URL: https://issues.apache.org/jira/browse/HADOOP-18542
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.2, 3.3.3, 3.3.4
>Reporter: Carl
>Priority: Major
>  Labels: pull-request-available
>
> The `AbfsConfiguration` class requires that we provide a tenant and client ID 
> when using the `MsiTokenProvider` class to fetch an authentication token. The 
> bug is that those fields are not required by the Azure API, which can infer 
> those fields when the call is made from an Azure instance.
> The fix is to make tenant and client ID optional when getting an Azure token 
> from the Azure Metadata Service.
> A fix has been submitted here: [https://github.com/apache/hadoop/pull/4262]
> The bug was introduced with HADOOP-17725  
> ([https://github.com/apache/hadoop/pull/3041/files])



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18542) Azure Token provider requires tenant and client IDs despite being optional

2024-07-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868257#comment-17868257
 ] 

ASF GitHub Bot commented on HADOOP-18542:
-

anujmodi2021 commented on PR #4262:
URL: https://github.com/apache/hadoop/pull/4262#issuecomment-2246985287

   --
    AGGREGATED TEST RESULT 
   
   
   HNS-OAuth
   
   
   [ERROR] 
testBackoffRetryMetrics(org.apache.hadoop.fs.azurebfs.services.TestAbfsRestOperation)
  Time elapsed: 3.604 s  <<< ERROR!
   [ERROR] 
testReadFooterMetrics(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics) 
 Time elapsed: 1.503 s  <<< ERROR!
   [ERROR] 
testMetricWithIdlePeriod(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics)
  Time elapsed: 1.371 s  <<< ERROR!
   [ERROR] 
testReadFooterMetricsWithParquetAndNonParquet(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics)
  Time elapsed: 1.372 s  <<< ERROR!
   
   [ERROR] Tests run: 143, Failures: 0, Errors: 1, Skipped: 2
   [ERROR] Tests run: 626, Failures: 0, Errors: 3, Skipped: 76
   [WARNING] Tests run: 414, Failures: 0, Errors: 0, Skipped: 57
   
   
   HNS-SharedKey
   
   
   [ERROR] 
testBackoffRetryMetrics(org.apache.hadoop.fs.azurebfs.services.TestAbfsRestOperation)
  Time elapsed: 3.124 s  <<< ERROR!
   [ERROR] 
testReadFooterMetrics(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics) 
 Time elapsed: 1.026 s  <<< ERROR!
   [ERROR] 
testMetricWithIdlePeriod(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics)
  Time elapsed: 0.905 s  <<< ERROR!
   [ERROR] 
testReadFooterMetricsWithParquetAndNonParquet(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics)
  Time elapsed: 1.012 s  <<< ERROR!
   
   [ERROR] Tests run: 143, Failures: 0, Errors: 1, Skipped: 3
   [ERROR] Tests run: 626, Failures: 0, Errors: 3, Skipped: 28
   [WARNING] Tests run: 414, Failures: 0, Errors: 0, Skipped: 44
   
   
   NonHNS-SharedKey
   
   
   [ERROR] 
testBackoffRetryMetrics(org.apache.hadoop.fs.azurebfs.services.TestAbfsRestOperation)
  Time elapsed: 3.484 s  <<< ERROR!
   [ERROR] 
testReadFooterMetrics(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics) 
 Time elapsed: 1.076 s  <<< ERROR!
   [ERROR] 
testMetricWithIdlePeriod(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics)
  Time elapsed: 1.079 s  <<< ERROR!
   [ERROR] 
testReadFooterMetricsWithParquetAndNonParquet(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics)
  Time elapsed: 1.028 s  <<< ERROR!
   
   [ERROR] Tests run: 143, Failures: 0, Errors: 1, Skipped: 9
   [ERROR] Tests run: 610, Failures: 0, Errors: 3, Skipped: 268
   [WARNING] Tests run: 414, Failures: 0, Errors: 0, Skipped: 47
   
   
   AppendBlob-HNS-OAuth
   
   
   [ERROR] 
testBackoffRetryMetrics(org.apache.hadoop.fs.azurebfs.services.TestAbfsRestOperation)
  Time elapsed: 3.495 s  <<< ERROR!
   [ERROR] 
testReadFooterMetrics(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics) 
 Time elapsed: 1.195 s  <<< ERROR!
   [ERROR] 
testMetricWithIdlePeriod(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics)
  Time elapsed: 1.239 s  <<< ERROR!
   [ERROR] 
testReadFooterMetricsWithParquetAndNonParquet(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics)
  Time elapsed: 1.205 s  <<< ERROR!
   
   [ERROR] Tests run: 143, Failures: 0, Errors: 1, Skipped: 2
   [ERROR] Tests run: 626, Failures: 0, Errors: 3, Skipped: 78
   [WARNING] Tests run: 414, Failures: 0, Errors: 0, Skipped: 81
   
   Time taken: 55 mins 19 secs.
   




> Azure Token provider requires tenant and client IDs despite being optional
> --
>
> Key: HADOOP-18542
> URL: https://issues.apache.org/jira/browse/HADOOP-18542
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure, hadoop-thirdparty
>Affects Versions: 3.3.2, 3.3.3, 3.3.4
>Reporter: Carl
>Priority: Major
>  Labels: pull-request-available
>
> The `AbfsConfiguration` class requires that we provide a tenant and client ID 
> when using the `MsiTokenProvider` class to fetch an authentication token. The 
> bug is that those fields are not required by the Azure API, which ca

[jira] [Commented] (HADOOP-19235) IPC client uses CompletableFuture to support asynchronous operations.

2024-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868241#comment-17868241
 ] 

ASF GitHub Bot commented on HADOOP-19235:
-

hadoop-yetus commented on PR #6888:
URL: https://github.com/apache/hadoop/pull/6888#issuecomment-2246801681

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
    _ HDFS-17531 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 50s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  compile  |  10m  4s |  |  HDFS-17531 passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   9m 18s |  |  HDFS-17531 passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 54s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  HDFS-17531 passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  HDFS-17531 passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 24s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  shadedclient  |  22m 56s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  10m 18s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  10m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 34s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   9m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  hadoop-common-project/hadoop-common: The patch generated 0 new + 114 unchanged - 25 fixed = 114 total (was 139)  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 55s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 13s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 146m 25s |  |  |


   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.46 ServerAPI=1.46 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6888/15/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6888 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 77bcda98d1ef 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17531 / 09c2feb70737956a0e74b937dd3567804b5fb98e |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6888/15/testReport/ |
   | Max. process+thread count | 3149 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6888/15/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus

[jira] [Commented] (HADOOP-19235) IPC client uses CompletableFuture to support asynchronous operations.

2024-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868231#comment-17868231
 ] 

ASF GitHub Bot commented on HADOOP-19235:
-

hadoop-yetus commented on PR #6888:
URL: https://github.com/apache/hadoop/pull/6888#issuecomment-2246764742

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
    _ HDFS-17531 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 50s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  compile  |   9m  6s |  |  HDFS-17531 passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   8m 34s |  |  HDFS-17531 passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  mvnsite  |   1m  0s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  HDFS-17531 passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  HDFS-17531 passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 32s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  shadedclient  |  22m 16s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 34s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   9m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  5s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   9m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  hadoop-common-project/hadoop-common: The patch generated 0 new + 114 unchanged - 25 fixed = 114 total (was 139)  |
   | +1 :green_heart: |  mvnsite  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 13s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 20s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 143m 51s |  |  |


   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.46 ServerAPI=1.46 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6888/14/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6888 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux dcfc60373752 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17531 / f0f245452255f501f220146f64ec63dff82dc1a6 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6888/14/testReport/ |
   | Max. process+thread count | 1281 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6888/14/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus

[jira] [Commented] (HADOOP-19221) S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"

2024-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868174#comment-17868174
 ] 

ASF GitHub Bot commented on HADOOP-19221:
-

hadoop-yetus commented on PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#issuecomment-2246199183

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 9 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 54s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 37s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  16m 10s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 55s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 41s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 49s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  16m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m  9s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  16m  9s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6938/9/artifact/out/blanks-eol.txt) |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   4m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 54s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 41s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 55s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  6s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 245m 39s |  |  |


   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.46 ServerAPI=1.46 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6938/9/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6938 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint |
   | uname | Linux 4a82cf4a33aa 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2790d56d98c099420d81150c77360679ed5a7940 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/

[jira] [Commented] (HADOOP-19221) S3A: Unable to recover from failure of multipart block upload attempt "Status Code: 400; Error Code: RequestTimeout"

2024-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868161#comment-17868161
 ] 

ASF GitHub Bot commented on HADOOP-19221:
-

hadoop-yetus commented on PR #6938:
URL: https://github.com/apache/hadoop/pull/6938#issuecomment-2245970791

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 9 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 38s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  19m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 41s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   7m 51s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   2m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 13s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 24s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   8m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 54s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   7m 54s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | [/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6938/10/artifact/out/blanks-eol.txt) |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   1m 58s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 13s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 38s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m  2s |  |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 146m 10s |  |  |


   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6938/10/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6938 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint |
   | uname | Linux dd01dc452024 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b95ee7c72d7f010ff7ece5e1224515a58cff15a6 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/

[jira] [Commented] (HADOOP-19235) IPC client uses CompletableFuture to support asynchronous operations.

2024-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868143#comment-17868143
 ] 

ASF GitHub Bot commented on HADOOP-19235:
-

hadoop-yetus commented on PR #6888:
URL: https://github.com/apache/hadoop/pull/6888#issuecomment-2245868417

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
    _ HDFS-17531 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 30s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  compile  |   9m  4s |  |  HDFS-17531 passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   8m 53s |  |  HDFS-17531 passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  HDFS-17531 passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  HDFS-17531 passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 39s |  |  HDFS-17531 passed  |
   | -1 :x: |  shadedclient  |  23m 24s |  |  branch has errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | -1 :x: |  compile  |   0m 20s | [/patch-compile-root-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6888/13/artifact/out/patch-compile-root-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt) |  root in the patch failed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.  |
   | -1 :x: |  javac  |   0m 20s | [/patch-compile-root-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6888/13/artifact/out/patch-compile-root-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt) |  root in the patch failed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.  |
   | -1 :x: |  compile  |   0m 20s | [/patch-compile-root-jdkPrivateBuild-1.8.0_412-8u412-ga-1~20.04.1-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6888/13/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_412-8u412-ga-1~20.04.1-b08.txt) |  root in the patch failed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08.  |
   | -1 :x: |  javac  |   0m 20s | [/patch-compile-root-jdkPrivateBuild-1.8.0_412-8u412-ga-1~20.04.1-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6888/13/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_412-8u412-ga-1~20.04.1-b08.txt) |  root in the patch failed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 45s |  |  hadoop-common-project/hadoop-common: The patch generated 0 new + 114 unchanged - 25 fixed = 114 total (was 139)  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 23s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |  36m 50s |  |  patch has errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   0m 22s | [/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6888/13/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt) |  hadoop-common in the patch failed.  |
   | +0 :ok: |  asflicense  |   0m 24s |  |  ASF License check generated no output?  |
   |  |   | 122m 39s |  |  |


   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.46 ServerAPI=1.46 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6888/13/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache

[jira] [Updated] (HADOOP-19235) IPC client uses CompletableFuture to support asynchronous operations.

2024-07-23 Thread Jian Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian Zhang updated HADOOP-19235:

Attachment: HADOOP-19235.patch
Status: Patch Available  (was: Open)

> IPC client uses CompletableFuture to support asynchronous operations.
> -
>
> Key: HADOOP-19235
> URL: https://issues.apache.org/jira/browse/HADOOP-19235
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-19235.patch
>
>
> h3. Description
> Prior work on an asynchronous IPC client includes HADOOP-13226, HDFS-10224,
> etc. However, the existing implementation does not support
> `CompletableFuture`; instead, it relies on registering callbacks, which can
> lead to the "callback hell" problem. `CompletableFuture` organizes
> asynchronous callbacks more cleanly. This change therefore builds on the
> existing implementation: once `client.call` completes, an asynchronous
> thread handles the call's response through a `CompletableFuture` without
> blocking the main thread.
>  
> *Test*
> new UT  TestAsyncIPC#testAsyncCallWithCompletableFuture()



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19197) S3A: Support AWS KMS Encryption Context

2024-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868113#comment-17868113
 ] 

ASF GitHub Bot commented on HADOOP-19197:
-

steveloughran merged PR #6874:
URL: https://github.com/apache/hadoop/pull/6874




> S3A: Support AWS KMS Encryption Context
> ---
>
> Key: HADOOP-19197
> URL: https://issues.apache.org/jira/browse/HADOOP-19197
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Raphael Azzolini
>Priority: Major
>  Labels: pull-request-available
>
> S3A properties allow users to choose the AWS KMS key 
> ({_}fs.s3a.encryption.key{_}) and S3 encryption algorithm to be used 
> (f{_}s.s3a.encryption.algorithm{_}). In addition to the AWS KMS Key, an 
> encryption context can be used as non-secret data that adds additional 
> integrity and authenticity to check the encrypted data. However, there is no 
> option to specify the [AWS KMS Encryption 
> Context|https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context]
>  in S3A.
> In AWS SDK v2 the encryption context in S3 requests is set by the parameter 
> [ssekmsEncryptionContext.|https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/CreateMultipartUploadRequest.Builder.html#ssekmsEncryptionContext(java.lang.String)]
>  It receives a base64-encoded UTF-8 string holding JSON with the encryption 
> context key-value pairs. The value of this parameter could be set by the user 
> in a new property {_}*fs.s3a.encryption.context*{_}, and be stored in the 
> [EncryptionSecrets|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/EncryptionSecrets.java]
>  to later be used when setting the encryption parameters in 
> [RequestFactoryImpl|https://github.com/apache/hadoop/blob/f92a8ab8ae54f11946412904973eb60404dee7ff/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RequestFactoryImpl.java].



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19235) IPC client uses CompletableFuture to support asynchronous operations.

2024-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868105#comment-17868105
 ] 

ASF GitHub Bot commented on HADOOP-19235:
-

KeeProMise opened a new pull request, #6888:
URL: https://github.com/apache/hadoop/pull/6888

   
   
   ### Description of PR
   please see: https://issues.apache.org/jira/browse/HDFS-17552
   NOTE: This is a sub-pull request (PR) related to HDFS-17531 (Asynchronous 
router RPC). For more details or context, please refer to the main issue 
[HDFS-17531](https://issues.apache.org/jira/browse/HDFS-17531).
   More detailed documentation: HDFS-17531 **Router asynchronous rpc 
implementation.pdf** and **Aynchronous router.pdf**
   
   You can also view HDFS-17544 to understand the code of this PR.
   
   ### How was this patch tested?
   new UT  TestAsyncIPC#testAsyncCallWithCompletableFuture()
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> IPC client uses CompletableFuture to support asynchronous operations.
> -
>
> Key: HADOOP-19235
> URL: https://issues.apache.org/jira/browse/HADOOP-19235
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
>
> h3. Description
> Prior work on an asynchronous IPC client includes HADOOP-13226, HDFS-10224,
> etc. However, the existing implementation does not support
> `CompletableFuture`; instead, it relies on registering callbacks, which can
> lead to the "callback hell" problem. `CompletableFuture` organizes
> asynchronous callbacks more cleanly. This change therefore builds on the
> existing implementation: once `client.call` completes, an asynchronous
> thread handles the call's response through a `CompletableFuture` without
> blocking the main thread.
>  
> *Test*
> new UT  TestAsyncIPC#testAsyncCallWithCompletableFuture()



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19235) IPC client uses CompletableFuture to support asynchronous operations.

2024-07-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19235:

Labels: pull-request-available  (was: )

> IPC client uses CompletableFuture to support asynchronous operations.
> -
>
> Key: HADOOP-19235
> URL: https://issues.apache.org/jira/browse/HADOOP-19235
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Jian Zhang
>Priority: Major
>  Labels: pull-request-available
>
> h3. Description
> Prior work on an asynchronous IPC client includes HADOOP-13226, HDFS-10224,
> etc. However, the existing implementation does not support
> `CompletableFuture`; instead, it relies on registering callbacks, which can
> lead to the "callback hell" problem. `CompletableFuture` organizes
> asynchronous callbacks more cleanly. This change therefore builds on the
> existing implementation: once `client.call` completes, an asynchronous
> thread handles the call's response through a `CompletableFuture` without
> blocking the main thread.
>  
> *Test*
> new UT  TestAsyncIPC#testAsyncCallWithCompletableFuture()



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19235) IPC client uses CompletableFuture to support asynchronous operations.

2024-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868104#comment-17868104
 ] 

ASF GitHub Bot commented on HADOOP-19235:
-

KeeProMise closed pull request #6888: HADOOP-19235. IPC client uses 
CompletableFuture to support asynchronous operations.
URL: https://github.com/apache/hadoop/pull/6888




> IPC client uses CompletableFuture to support asynchronous operations.
> -
>
> Key: HADOOP-19235
> URL: https://issues.apache.org/jira/browse/HADOOP-19235
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Reporter: Jian Zhang
>Priority: Major
>
> h3. Description
> Prior work on an asynchronous IPC client includes HADOOP-13226, HDFS-10224,
> etc. However, the existing implementation does not support
> `CompletableFuture`; instead, it relies on registering callbacks, which can
> lead to the "callback hell" problem. `CompletableFuture` organizes
> asynchronous callbacks more cleanly. This change therefore builds on the
> existing implementation: once `client.call` completes, an asynchronous
> thread handles the call's response through a `CompletableFuture` without
> blocking the main thread.
>  
> *Test*
> new UT  TestAsyncIPC#testAsyncCallWithCompletableFuture()



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19235) IPC client uses CompletableFuture to support asynchronous operations.

2024-07-23 Thread Jian Zhang (Jira)
Jian Zhang created HADOOP-19235:
---

 Summary: IPC client uses CompletableFuture to support asynchronous 
operations.
 Key: HADOOP-19235
 URL: https://issues.apache.org/jira/browse/HADOOP-19235
 Project: Hadoop Common
  Issue Type: New Feature
  Components: common
Reporter: Jian Zhang


h3. Description

Prior work on an asynchronous IPC client includes HADOOP-13226, HDFS-10224, 
etc.

However, the existing implementation does not support `CompletableFuture`; 
instead, it relies on registering callbacks, which can lead to the "callback 
hell" problem. `CompletableFuture` organizes asynchronous callbacks more 
cleanly. This change therefore builds on the existing implementation: once 
`client.call` completes, an asynchronous thread handles the call's response 
through a `CompletableFuture` without blocking the main thread.

 

*Test*

new UT  TestAsyncIPC#testAsyncCallWithCompletableFuture()
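
For readers unfamiliar with the pattern, here is a minimal, self-contained
sketch of the `CompletableFuture` style described above. The class and method
names are illustrative only, not the actual API introduced by this change:

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of the CompletableFuture-based pattern described above;
// class and method names are illustrative, not the patch's actual API.
public class AsyncIpcSketch {

  // Instead of registering a callback, the client returns a future that a
  // connection thread completes when the response arrives.
  static CompletableFuture<String> asyncCall(String request) {
    CompletableFuture<String> future = new CompletableFuture<>();
    // In a real client, the RPC response handler would invoke
    // future.complete(response) or future.completeExceptionally(error).
    new Thread(() -> future.complete("response to " + request)).start();
    return future;
  }

  public static void main(String[] args) {
    asyncCall("getBlockLocations")
        .thenApply(String::toUpperCase)           // chain follow-up work
        .thenAccept(System.out::println)          // handled off the main thread
        .exceptionally(e -> { e.printStackTrace(); return null; });
    System.out.println("main thread is not blocked");
  }
}
```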



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19231) add JacksonUtil to centralise some code

2024-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868086#comment-17868086
 ] 

ASF GitHub Bot commented on HADOOP-19231:
-

hadoop-yetus commented on PR #6953:
URL: https://github.com/apache/hadoop/pull/6953#issuecomment-2245374122

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 44s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 38s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  17m 52s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   5m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  18m  9s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  15m 25s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |  15m  3s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | -1 :x: |  spotbugs  |   0m 58s | [/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/3/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html) |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs warnings.  |
   | -1 :x: |  spotbugs  |   1m 11s | [/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/3/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html) |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  40m  0s |  |  branch has no errors when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  40m 29s |  |  Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 34s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 25s | [/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/3/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt) |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |  18m 16s | [/patch-compile-root-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt) |  root in the patch failed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.  |
   | -1 :x: |  javac  |  18m 16s | [/patch-compile-root-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.txt) |  root in the patch failed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2.  |
   | -1 :x: |  compile  |  17m 36s | [/patch-compile-root-jdkPrivateBuild-1.8.0_412-8u412-ga-1~20.04.1-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/3/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_412-8u412-ga-1~20.04.1-b08.txt) |  root in the patch failed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08.  |
   | -1 :x: |  javac  |  17m 36s | [/patch-compile-root-jdkPrivateBuild-1.8.0_412-8u412-ga-1~20.04.1-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6953/3/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_412-8u412-ga-1~20.04.1-b08.txt) |  root in the patch failed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle

[jira] [Commented] (HADOOP-19218) Avoid DNS lookup while creating IPC Connection object

2024-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868053#comment-17868053
 ] 

ASF GitHub Bot commented on HADOOP-19218:
-

Hexiaoqiao commented on PR #6951:
URL: https://github.com/apache/hadoop/pull/6951#issuecomment-2245166427

   Committed to trunk and branch-3.4. Thanks @virajjasani .




> Avoid DNS lookup while creating IPC Connection object
> -
>
> Key: HADOOP-19218
> URL: https://issues.apache.org/jira/browse/HADOOP-19218
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9, 3.5.0, 3.4.1
>
>
> We have been running HADOOP-18628 in production for quite some time; 
> everything works fine as long as the DNS servers in HA are available. 
> Upgrading a single NS server at a time is also a common case and is not 
> problematic. A DNS lookup generally takes about 1ms.
> However, we recently encountered a case where 2 out of 4 NS servers went 
> down (temporarily; it is a rare case). With a short-lived DNS cache and a 2s 
> NS fallback timeout configured in resolv.conf, any client performing a DNS 
> lookup could encounter a 4s+ delay. This caused a namenode outage: the 
> listener thread is single threaded and could not keep up with the large 
> number of unique clients (in direct proportion to the number of DNS 
> resolutions every few seconds) initiating connections on the listener port.
> While having 2 out of 4 DNS servers offline is a rare case, and the NS 
> fallback settings could also be improved, it is important to note that we 
> do not need to perform a DNS resolution for every new connection if the 
> intention is to improve the insight into VersionMismatch errors thrown by 
> the server.
> The proposal is to delay the DNS resolution until the server throws the 
> error for an incompatible header or a version mismatch. This also avoids 
> the ~1ms spent even on a healthy DNS lookup.
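
As an illustration of the deferred-resolution idea, a minimal sketch using
only JDK classes; the class below is hypothetical and is not Hadoop's actual
`ipc.Client` code:

```java
import java.net.InetSocketAddress;

// Illustrative sketch of the proposal above: keep the remote address
// unresolved when the connection object is created, and resolve it only on
// the error path (e.g. to log a richer version-mismatch message). Class and
// method names are hypothetical.
public class LazyDnsSketch {

  private final InetSocketAddress unresolved;

  LazyDnsSketch(String host, int port) {
    // No DNS lookup happens here.
    this.unresolved = InetSocketAddress.createUnresolved(host, port);
  }

  String describePeerOnError() {
    // DNS is consulted only now, when the resolved form is actually needed.
    InetSocketAddress resolved =
        new InetSocketAddress(unresolved.getHostName(), unresolved.getPort());
    return "server " + resolved + " sent an incompatible version header";
  }
}
```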



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19218) Avoid DNS lookup while creating IPC Connection object

2024-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868051#comment-17868051
 ] 

ASF GitHub Bot commented on HADOOP-19218:
-

Hexiaoqiao merged PR #6951:
URL: https://github.com/apache/hadoop/pull/6951




> Avoid DNS lookup while creating IPC Connection object
> -
>
> Key: HADOOP-19218
> URL: https://issues.apache.org/jira/browse/HADOOP-19218
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.9, 3.5.0, 3.4.1
>
>
> We have been running HADOOP-18628 in production for quite some time; 
> everything works fine as long as the DNS servers in HA are available. 
> Upgrading a single NS server at a time is also a common case and is not 
> problematic. A DNS lookup generally takes about 1ms.
> However, we recently encountered a case where 2 out of 4 NS servers went 
> down (temporarily; it is a rare case). With a short-lived DNS cache and a 2s 
> NS fallback timeout configured in resolv.conf, any client performing a DNS 
> lookup could encounter a 4s+ delay. This caused a namenode outage: the 
> listener thread is single threaded and could not keep up with the large 
> number of unique clients (in direct proportion to the number of DNS 
> resolutions every few seconds) initiating connections on the listener port.
> While having 2 out of 4 DNS servers offline is a rare case, and the NS 
> fallback settings could also be improved, it is important to note that we 
> do not need to perform a DNS resolution for every new connection if the 
> intention is to improve the insight into VersionMismatch errors thrown by 
> the server.
> The proposal is to delay the DNS resolution until the server throws the 
> error for an incompatible header or a version mismatch. This also avoids 
> the ~1ms spent even on a healthy DNS lookup.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19234) ABFS: [FnsOverBlob] Adding Integration Tests for Special Scenarios in Blob Endpoint

2024-07-23 Thread Anuj Modi (Jira)
Anuj Modi created HADOOP-19234:
--

 Summary: ABFS: [FnsOverBlob] Adding Integration Tests for Special 
Scenarios in Blob Endpoint
 Key: HADOOP-19234
 URL: https://issues.apache.org/jira/browse/HADOOP-19234
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.4.0
Reporter: Anuj Modi


FNS accounts do not understand directories, so to create that abstraction the 
client has to handle the cases where HDFS operations interact with directory 
paths. This needs some additional testing for each HDFS operation where a 
path can exist as a directory.

More details to follow

Prerequisites: 
 # HADOOP-19187 ABFS: [FnsOverBlob]Making AbfsClient Abstract for supporting 
both DFS and Blob Endpoint
 # HADOOP-19207 ABFS: [FnsOverBlob]Response Handling of Blob Endpoint APIs and 
Metadata APIs
 # HADOOP-19226 ABFS: [FnsOverBlob]Implementing Azure Rest APIs on Blob 
Endpoint for AbfsBlobClient
 # HADOOP-19232 ABFS: [FnsOverBlob] Implementing Ingress Support with various 
Fallback Handling
 # HADOOP-19233 ABFS: [FnsOverBlob] Implementing Rename and Delete APIs over 
Blob Endpoint
 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-19234) ABFS: [FnsOverBlob] Adding Integration Tests for Special Scenarios in Blob Endpoint

2024-07-23 Thread Anuj Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuj Modi reassigned HADOOP-19234:
--

Assignee: Anuj Modi

> ABFS: [FnsOverBlob] Adding Integration Tests for Special Scenarios in Blob 
> Endpoint
> ---
>
> Key: HADOOP-19234
> URL: https://issues.apache.org/jira/browse/HADOOP-19234
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>
> FNS accounts do not understand directories, so to create that abstraction 
> the client has to handle the cases where HDFS operations interact with 
> directory paths. This needs some additional testing for each HDFS operation 
> where a path can exist as a directory.
> More details to follow
> Prerequisites: 
>  # HADOOP-19187 ABFS: [FnsOverBlob]Making AbfsClient Abstract for supporting 
> both DFS and Blob Endpoint
>  # HADOOP-19207 ABFS: [FnsOverBlob]Response Handling of Blob Endpoint APIs 
> and Metadata APIs
>  # HADOOP-19226 ABFS: [FnsOverBlob]Implementing Azure Rest APIs on Blob 
> Endpoint for AbfsBlobClient
>  # HADOOP-19232 ABFS: [FnsOverBlob] Implementing Ingress Support with various 
> Fallback Handling
>  # HADOOP-19233 ABFS: [FnsOverBlob] Implementing Rename and Delete APIs over 
> Blob Endpoint
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18542) Azure Token provider requires tenant and client IDs despite being optional

2024-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868031#comment-17868031
 ] 

ASF GitHub Bot commented on HADOOP-18542:
-

hadoop-yetus commented on PR #4262:
URL: https://github.com/apache/hadoop/pull/4262#issuecomment-2245016042

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 42s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 51s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 38s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 135m  3s |  |  |


   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4262/14/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4262 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 18a119403eaf 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a602b81fc2638a75a52df220773e9de9a3c65550 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4262/14/testReport/ |
   | Max. process+thread count | 703 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4262/14/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Azure Token provider requires tenant and client IDs despite being optio

[jira] [Commented] (HADOOP-18542) Azure Token provider requires tenant and client IDs despite being optional

2024-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17868028#comment-17868028
 ] 

ASF GitHub Bot commented on HADOOP-18542:
-

hadoop-yetus commented on PR #4262:
URL: https://github.com/apache/hadoop/pull/4262#issuecomment-2244999777

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 18s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 59s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 127m  1s |  |  |


   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.46 ServerAPI=1.46 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4262/13/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4262 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 497df83b6f68 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a602b81fc2638a75a52df220773e9de9a3c65550 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4262/13/testReport/ |
   | Max. process+thread count | 565 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4262/13/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Azure Token provider requires tenant and client IDs despite be

[jira] [Created] (HADOOP-19233) ABFS: [FnsOverBlob] Implementing Rename and Delete APIs over Blob Endpoint

2024-07-23 Thread Anuj Modi (Jira)
Anuj Modi created HADOOP-19233:
--

 Summary: ABFS: [FnsOverBlob] Implementing Rename and Delete APIs 
over Blob Endpoint
 Key: HADOOP-19233
 URL: https://issues.apache.org/jira/browse/HADOOP-19233
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.4.0
Reporter: Anuj Modi
Assignee: Anuj Modi


Enable rename and delete over the Blob endpoint. The endpoint supports 
neither a rename API nor directory delete. Therefore, all the orchestration and 
handling has to be added on the client side.

More details will follow. A rough sketch of the client-side orchestration is 
given below.
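
Since the Blob endpoint only offers copy and delete primitives, a rename is 
typically a copy-then-delete per blob, applied over a flat listing for 
directories. A minimal sketch of that idea follows; all type and method names 
here are hypothetical illustrations, not the actual patch code.

    // Hypothetical sketch only: rename as copy+delete, since the Blob
    // endpoint has no server-side rename. Not the actual patch code.
    import java.util.List;

    interface BlobStore {
      List<String> listBlobs(String prefix);   // flat listing under a prefix
      void copyBlob(String source, String destination);
      void deleteBlob(String path);
    }

    class ClientSideRenamer {
      private final BlobStore store;

      ClientSideRenamer(BlobStore store) {
        this.store = store;
      }

      // Renames every blob under src to the matching path under dst.
      // Note: this is not atomic; a failure mid-loop leaves a partial
      // rename, which is exactly the handling the patch must add.
      void rename(String src, String dst) {
        for (String blob : store.listBlobs(src)) {
          String target = dst + blob.substring(src.length());
          store.copyBlob(blob, target);   // copy first ...
          store.deleteBlob(blob);         // ... then delete the source
        }
      }
    }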



Prerequisites for this patch:
1. HADOOP-19187 ABFS: [FnsOverBlob]Making AbfsClient Abstract for supporting 
both DFS and Blob Endpoint - ASF JIRA (apache.org)

2. HADOOP-19226 ABFS: [FnsOverBlob]Implementing Azure Rest APIs on Blob 
Endpoint for AbfsBlobClient - ASF JIRA (apache.org)

3. HADOOP-19207 ABFS: [FnsOverBlob]Response Handling of Blob Endpoint APIs and 
Metadata APIs - ASF JIRA (apache.org)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19232) ABFS: [FnsOverBlob] Implementing Ingress Support with various Fallback Handling

2024-07-23 Thread Anuj Modi (Jira)
Anuj Modi created HADOOP-19232:
--

 Summary: ABFS: [FnsOverBlob] Implementing Ingress Support with 
various Fallback Handling
 Key: HADOOP-19232
 URL: https://issues.apache.org/jira/browse/HADOOP-19232
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.4.0
Reporter: Anuj Modi
Assignee: Anmol Asrani


The scope of this task is to refactor the AbfsOutputStream class to handle 
ingress for the DFS and Blob endpoints effectively.

More details will be added soon.
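
As a rough illustration of the shape such a refactor can take, the output 
stream delegates writes to an endpoint-specific ingress handler and falls back 
when the primary path fails. The class names and signatures below are 
hypothetical, not the actual patch:

    // Hypothetical sketch of endpoint-specific ingress with fallback;
    // the real AbfsOutputStream refactor will differ in detail.
    import java.io.IOException;

    abstract class IngressHandler {
      abstract void appendBlock(byte[] data, long offset) throws IOException;
      abstract void flush(long offset) throws IOException;
    }

    class DfsIngressHandler extends IngressHandler {
      void appendBlock(byte[] data, long offset) { /* append on DFS endpoint */ }
      void flush(long offset) { /* flush on DFS endpoint */ }
    }

    class BlobIngressHandler extends IngressHandler {
      void appendBlock(byte[] data, long offset) { /* PutBlock on Blob endpoint */ }
      void flush(long offset) { /* PutBlockList on Blob endpoint */ }
    }

    class FallbackingStream {
      private IngressHandler handler = new BlobIngressHandler();

      void write(byte[] data, long offset) throws IOException {
        try {
          handler.appendBlock(data, offset);
        } catch (IOException e) {
          handler = new DfsIngressHandler();  // fall back once to DFS ingress
          handler.appendBlock(data, offset);
        }
      }
    }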

Prerequisites for this patch:
1. [HADOOP-19187] ABFS: [FnsOverBlob]Making AbfsClient Abstract for supporting 
both DFS and Blob Endpoint - ASF JIRA (apache.org)

2. [HADOOP-19226] ABFS: [FnsOverBlob]Implementing Azure Rest APIs on Blob 
Endpoint for AbfsBlobClient - ASF JIRA (apache.org)

3. [HADOOP-19207] ABFS: [FnsOverBlob]Response Handling of Blob Endpoint APIs 
and Metadata APIs - ASF JIRA (apache.org)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19207) ABFS: [FnsOverBlob]Response Handling of Blob Endpoint APIs and Metadata APIs

2024-07-23 Thread Anuj Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuj Modi updated HADOOP-19207:
---
Target Version/s: 3.5.0, 3.4.1  (was: 3.5.0)

> ABFS: [FnsOverBlob]Response Handling of Blob Endpoint APIs and Metadata APIs
> 
>
> Key: HADOOP-19207
> URL: https://issues.apache.org/jira/browse/HADOOP-19207
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>
> Blob Endpoint APIs have a different response format than DFS Endpoint APIs.
> There are some behavioral differences as well that need to be handled on the 
> client side.
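
For example, DFS endpoint error responses are JSON while Blob endpoint error 
responses are XML, so error extraction has to be endpoint-aware. A minimal 
sketch of that idea, with hypothetical names and deliberately naive string 
parsing, not the actual patch:

    // Hypothetical sketch: per-endpoint error-code extraction.
    interface ErrorParser {
      String errorCode(String responseBody);
    }

    class DfsErrorParser implements ErrorParser {
      // DFS replies look like {"error":{"code":"PathNotFound",...}}
      public String errorCode(String body) {
        int i = body.indexOf("\"code\":\"") + 8;
        return body.substring(i, body.indexOf('"', i));
      }
    }

    class BlobErrorParser implements ErrorParser {
      // Blob replies look like <Error><Code>BlobNotFound</Code>...</Error>
      public String errorCode(String body) {
        int i = body.indexOf("<Code>") + 6;
        return body.substring(i, body.indexOf("</Code>", i));
      }
    }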



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19207) ABFS: [FnsOverBlob]Response Handling of Blob Endpoint APIs and Metadata APIs

2024-07-23 Thread Anuj Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuj Modi updated HADOOP-19207:
---
Fix Version/s: (was: 3.5.0)
   (was: 3.4.1)

> ABFS: [FnsOverBlob]Response Handling of Blob Endpoint APIs and Metadata APIs
> 
>
> Key: HADOOP-19207
> URL: https://issues.apache.org/jira/browse/HADOOP-19207
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>
> Blob Endpoint APIs have a different response format than DFS Endpoint APIs.
> There are some behavioral differences as well that need to be handled on the 
> client side.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19226) ABFS: [FnsOverBlob]Implementing Azure Rest APIs on Blob Endpoint for AbfsBlobClient

2024-07-23 Thread Anuj Modi (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anuj Modi updated HADOOP-19226:
---
Summary: ABFS: [FnsOverBlob]Implementing Azure Rest APIs on Blob Endpoint 
for AbfsBlobClient  (was: ABFS: Implementing Azure Rest APIs on Blob Endpoint 
for AbfsBlobClient)

> ABFS: [FnsOverBlob]Implementing Azure Rest APIs on Blob Endpoint for 
> AbfsBlobClient
> ---
>
> Key: HADOOP-19226
> URL: https://issues.apache.org/jira/browse/HADOOP-19226
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> This is the second task in a series of tasks for implementing Blob Endpoint 
> support for FNS accounts.
> This patch will have changes to implement all the APIs over the Blob Endpoint 
> as part of implementing AbfsBlobClient.
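
Building on HADOOP-19187's abstract AbfsClient, the overall shape is one 
concrete client per endpoint. The method set below is a simplified, 
hypothetical illustration; the real AbfsClient API surface is much larger and 
its signatures differ:

    // Simplified, hypothetical shape only; not the actual patch code.
    abstract class AbfsClient {
      abstract void createPath(String path, boolean isFile);
      abstract void deletePath(String path);
      abstract void getPathStatus(String path);
    }

    class AbfsDfsClient extends AbfsClient {
      void createPath(String path, boolean isFile) { /* PUT, DFS endpoint */ }
      void deletePath(String path) { /* DELETE, DFS endpoint */ }
      void getPathStatus(String path) { /* HEAD, DFS endpoint */ }
    }

    class AbfsBlobClient extends AbfsClient {
      void createPath(String path, boolean isFile) { /* PutBlob */ }
      void deletePath(String path) { /* DeleteBlob */ }
      void getPathStatus(String path) { /* GetBlobProperties */ }
    }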



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


