[jira] [Commented] (HADOOP-19290) Operating on / in ChecksumFileSystem throws NPE

2024-09-28 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885584#comment-17885584
 ] 

ASF GitHub Bot commented on HADOOP-19290:
-

ayushtkn merged PR #7074:
URL: https://github.com/apache/hadoop/pull/7074




> Operating on / in ChecksumFileSystem throws NPE
> ---
>
> Key: HADOOP-19290
> URL: https://issues.apache.org/jira/browse/HADOOP-19290
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Operating on / on ChecksumFileSystem throws NPE
> {noformat}
> java.lang.NullPointerException
>   at org.apache.hadoop.fs.Path.<init>(Path.java:151)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:130)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.getChecksumFile(ChecksumFileSystem.java:121)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:774)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.setReplication(ChecksumFileSystem.java:884)
> {noformat}
> Internally I observed it for SetPermission, but on my Mac LocalFs doesn't let 
> me setPermission on "/", so I reproduced it via SetReplication, which goes 
> through the same code path.
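
A quick reproduction sketch of the report above, assuming the local file system 
(LocalFileSystem extends ChecksumFileSystem); before the fix, the setReplication 
call on "/" trips the NPE shown in the stack trace:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocalFileSystem;
import org.apache.hadoop.fs.Path;

public class RootNpeRepro {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // LocalFileSystem is a ChecksumFileSystem, so it goes through getChecksumFile().
    LocalFileSystem fs = FileSystem.getLocal(conf);
    // Before the fix this threw a NullPointerException from Path's constructor,
    // because the parent of "/" is null.
    fs.setReplication(new Path("/"), (short) 1);
    System.out.println("setReplication on / completed without NPE");
  }
}
```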






[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885535#comment-17885535
 ] 

ASF GitHub Bot commented on HADOOP-15984:
-

slfan1989 commented on PR #7019:
URL: https://github.com/apache/hadoop/pull/7019#issuecomment-2380427389

   I need your help reviewing this PR. I know it contains many changes. What 
else can I do to help you review it more effectively? Could you give me some 
suggestions?




> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.






[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885506#comment-17885506
 ] 

ASF GitHub Bot commented on HADOOP-15984:
-

hadoop-yetus commented on PR #7019:
URL: https://github.com/apache/hadoop/pull/7019#issuecomment-2380355237

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  6s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  6s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  6s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  6s |  |  Shelldocs was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  jsonlint  |   0m  0s |  |  jsonlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  pathlen  |   0m  0s | 
[/results-pathlen.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/22/artifact/out/results-pathlen.txt)
 |  The patch appears to contain 1 files with names longer than 240  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 115 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 31s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  41m 18s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |  18m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   5m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  35m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  29m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | -1 :x: |  javadoc  |   0m 14s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server_hadoop-yarn-server-timelineservice-hbase-server-2-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/22/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server_hadoop-yarn-server-timelineservice-hbase-server-2-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-yarn-server-timelineservice-hbase-server-2 in trunk failed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05.  |
   | +0 :ok: |  spotbugs  |   0m 26s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   0m 49s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/22/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |  12m  2s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/22/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html)
 |  hadoop-yarn-project/hadoop-yarn in trunk has 1 extant spotbugs warnings.  |
   | -1 :x: |  spotbugs  |   1m  1s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/22/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 in trunk has 1 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   0m 25s |  |  
branch/hadoop-client-modules/hadoop-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 26s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 24s |  |  
branch/hadoop-client-modules/hadoop-client-check-invariants no spotbugs output 
file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 27s |  |  
branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 25s |  |  
branch/hadoop-client-modules/hadoop-client-check-tes

[jira] [Commented] (HADOOP-19287) Fix KMSTokenRenewer#handleKind dependency on BouncyCastleProvider class

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885419#comment-17885419
 ] 

ASF GitHub Bot commented on HADOOP-19287:
-

szetszwo commented on PR #7068:
URL: https://github.com/apache/hadoop/pull/7068#issuecomment-2379651501

   ```java
   Exception in thread "main" java.lang.NoClassDefFoundError: 
org/bouncycastle/jce/provider/BouncyCastleProvider
   ```
   @cxzl25, as you mentioned, HADOOP-19152 should have fixed it when the new 
conf `hadoop.security.crypto.jce.provider.auto-add` is set to false. Could you 
try it?
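   
   For anyone hitting the same NoClassDefFoundError, here is a minimal sketch of the 
workaround mentioned above (it only assumes the `hadoop.security.crypto.jce.provider.auto-add` 
property quoted in this thread; the class name is made up):
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   
   public class DisableJceAutoAdd {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       // Per the discussion above, this stops Hadoop from auto-registering the
       // BouncyCastle JCE provider, so the bouncycastle jar is not needed just
       // to renew a KMS token.
       conf.setBoolean("hadoop.security.crypto.jce.provider.auto-add", false);
       System.out.println(
           conf.getBoolean("hadoop.security.crypto.jce.provider.auto-add", true));
     }
   }
   ```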




> Fix KMSTokenRenewer#handleKind dependency on BouncyCastleProvider class
> ---
>
> Key: HADOOP-19287
> URL: https://issues.apache.org/jira/browse/HADOOP-19287
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: dzcxzl
>Priority: Major
>  Labels: pull-request-available
>
>  
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/bouncycastle/jce/provider/BouncyCastleProvider
>     at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer.handleKind(KMSClientProvider.java:180)
>     at org.apache.hadoop.security.token.Token.getRenewer(Token.java:467)
>     at org.apache.hadoop.security.token.Token.renew(Token.java:500)
>     at 
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$3(HadoopFSDelegationTokenProvider.scala:147)
>     at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.scala:17)
>     at scala.util.Try$.apply(Try.scala:217)
>     at 
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$2(HadoopFSDelegationTokenProvider.scala:146)
>  {code}






[jira] [Commented] (HADOOP-19261) Support force close a DomainSocket for server service

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885420#comment-17885420
 ] 

ASF GitHub Bot commented on HADOOP-19261:
-

jojochuang commented on PR #7057:
URL: https://github.com/apache/hadoop/pull/7057#issuecomment-2379652675

   +1 
   
   This looks like a safe change to make. Thanks, Sammi.
   I'm just curious: looking at the Hadoop qbt test history, TestDomainSocket 
doesn't seem to have failed in a long time.
   Could it be that it fails on Mac but not on Linux?




> Support force close a DomainSocket for server service
> -
>
> Key: HADOOP-19261
> URL: https://issues.apache.org/jira/browse/HADOOP-19261
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Sammi Chen
>Assignee: Sammi Chen
>Priority: Major
>  Labels: pull-request-available
>
> Currently, DomainSocket#close checks that the reference count is 0 before it 
> goes on to close the socket. In the server-service case, the server's call to 
> DomainSocket#listen adds 1 to the reference count. When trying to close a 
> server socket that is blocked in accept, the close call loops endlessly on 
> the count > 0 check, which prevents the server socket from being closed.
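
A minimal sketch of the pattern described above (simplified, with made-up field and 
method names; this is not the real DomainSocket code): a listener holding a reference 
keeps the regular close() spinning, so a force close has to shut the socket down 
regardless of the count.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified illustration of a reference-counted close; NOT the real DomainSocket.
public class RefCountedCloseSketch {
  private final AtomicInteger refCount = new AtomicInteger(0);
  private volatile boolean open = true;

  void retain()  { refCount.incrementAndGet(); }   // e.g. taken while blocked in accept()
  void release() { refCount.decrementAndGet(); }

  // Regular close: waits until every reference is released. If a server thread
  // is parked in accept() and never releases, this spins forever.
  void close() throws InterruptedException {
    while (refCount.get() > 0) {
      Thread.sleep(10);
    }
    open = false;
  }

  // Force close: shuts the socket down regardless of outstanding references,
  // which in the real code would also unblock the thread waiting in accept().
  void forceClose() {
    open = false;
  }

  public static void main(String[] args) {
    RefCountedCloseSketch socket = new RefCountedCloseSketch();
    socket.retain();       // a listener still holds a reference
    socket.forceClose();   // a plain close() would never return here
    System.out.println("open = " + socket.open);
  }
}
```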






[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885411#comment-17885411
 ] 

ASF GitHub Bot commented on HADOOP-15984:
-

hadoop-yetus commented on PR #7019:
URL: https://github.com/apache/hadoop/pull/7019#issuecomment-2379596519

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  6s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  6s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  6s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  6s |  |  Shelldocs was not available.  |
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  jsonlint  |   0m  1s |  |  jsonlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  pathlen  |   0m  0s | 
[/results-pathlen.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/21/artifact/out/results-pathlen.txt)
 |  The patch appears to contain 1 files with names longer than 240  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 115 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 47s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |  16m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   5m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  36m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  30m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | -1 :x: |  javadoc  |   0m 17s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server_hadoop-yarn-server-timelineservice-hbase-server-2-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/21/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server_hadoop-yarn-server-timelineservice-hbase-server-2-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-yarn-server-timelineservice-hbase-server-2 in trunk failed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05.  |
   | +0 :ok: |  spotbugs  |   0m 28s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   0m 47s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/21/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |  11m 19s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/21/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html)
 |  hadoop-yarn-project/hadoop-yarn in trunk has 1 extant spotbugs warnings.  |
   | -1 :x: |  spotbugs  |   1m  1s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/21/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 in trunk has 1 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   0m 27s |  |  
branch/hadoop-client-modules/hadoop-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 29s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 28s |  |  
branch/hadoop-client-modules/hadoop-client-check-invariants no spotbugs output 
file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 29s |  |  
branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 28s |  |  
branch/hadoop-client-modules/hadoop-client-check-tes

[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885395#comment-17885395
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

surendralilhore merged PR #7076:
URL: https://github.com/apache/hadoop/pull/7076




> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config only for some accounts or set it 
> differently for different accounts.
> The ABFS driver today does not allow this to be set as an account-specific config.
> This Jira adds that support.






[jira] [Commented] (HADOOP-19289) upgrade to protobuf-java 3.25.5 due to CVE-2024-7254

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885382#comment-17885382
 ] 

ASF GitHub Bot commented on HADOOP-19289:
-

steveloughran commented on PR #7072:
URL: https://github.com/apache/hadoop/pull/7072#issuecomment-2379346522

   ok




> upgrade to protobuf-java 3.25.5 due to CVE-2024-7254
> 
>
> Key: HADOOP-19289
> URL: https://issues.apache.org/jira/browse/HADOOP-19289
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> https://github.com/advisories/GHSA-735f-pc8j-v9w8
> Presumably protobuf-encoded messages in Hadoop come from trusted sources, but 
> it is still useful to upgrade the jar.






[jira] [Commented] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885364#comment-17885364
 ] 

ASF GitHub Bot commented on HADOOP-19281:
-

steveloughran merged PR #7071:
URL: https://github.com/apache/hadoop/pull/7071




> MetricsSystemImpl should not print INFO message in CLI
> --
>
> Key: HADOOP-19281
> URL: https://issues.apache.org/jira/browse/HADOOP-19281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Tsz-wo Sze
>Assignee: Sarveksha Yeshavantha Raju
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: 7071_review.patch
>
>
> Below is an example:
> {code}
> # hadoop fs  -Dfs.s3a.bucket.probe=0 
> -Dfs.s3a.change.detection.version.required=false 
> -Dfs.s3a.change.detection.mode=none -Dfs.s3a.endpoint=http://some.site:9878 
> -Dfs.s3a.access.keysome=systest -Dfs.s3a.secret.key=8...1 
> -Dfs.s3a.endpoint=http://some.site:9878  -Dfs.s3a.path.style.access=true 
> -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem   -ls  -R s3a://bucket1/
> 24/09/17 10:47:48 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> started
> 24/09/17 10:47:48 WARN impl.ConfigurationHelper: Option 
> fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 
> ms instead
> 24/09/17 10:47:50 WARN s3.S3TransferManager: The provided S3AsyncClient is an 
> instance of MultipartS3AsyncClient, and thus multipart download feature is 
> not enabled. To benefit from all features, consider using 
> S3AsyncClient.crtBuilder().build() instead.
> drwxrwxrwx   - root root  0 2024-09-17 10:47 s3a://bucket1/dir1
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
> metrics system...
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> stopped.
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> shutdown complete. 
> {code}






[jira] [Commented] (HADOOP-19290) Operating on / in ChecksumFileSystem throws NPE

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885328#comment-17885328
 ] 

ASF GitHub Bot commented on HADOOP-19290:
-

hadoop-yetus commented on PR #7074:
URL: https://github.com/apache/hadoop/pull/7074#issuecomment-2379056344

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 18s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   8m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  10m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |  10m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   9m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 38s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 53s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 140m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7074/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7074 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9d44f90a7e48 5.15.0-116-generic #126-Ubuntu SMP Mon Jul 1 
10:14:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0eae37840fc9be608a090f9c21a6b2b889dd57d9 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7074/1/testReport/ |
   | Max. process+thread count | 1273 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7074/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Operating on / in ChecksumFileSystem throws NPE
> ---

[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885323#comment-17885323
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

hadoop-yetus commented on PR #7076:
URL: https://github.com/apache/hadoop/pull/7076#issuecomment-2379034438

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   7m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ branch-3.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  1s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  shadedclient  |  20m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 12s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 57s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  88m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7076/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7076 |
   | JIRA Issue | HADOOP-19284 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f34601ebf2ac 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / 44da465725a7937b1be924d16eec69c1e63ed4e9 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7076/1/testReport/ |
   | Max. process+thread count | 753 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7076/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   

[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885294#comment-17885294
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

anujmodi2021 opened a new pull request, #7076:
URL: https://github.com/apache/hadoop/pull/7076

   ### Description of PR
   Jira: https://issues.apache.org/jira/browse/HADOOP-19284 
   Making the config `fs.azure.account.hns.enabled` account-specific.
   
   There are a few reported requirements where users working with multiple file 
systems need to specify this config only for some accounts or set it 
differently for different accounts.
   The ABFS driver today does not allow this to be set as an account-specific config.
   
   This also fixes tests that were failing when the "fs.azure.account.hns.enabled" 
config was not present.
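   
   As an illustration of what the account-specific override could look like, here is a 
minimal sketch assuming the usual ABFS convention of suffixing a key with the account 
host name (the account name below is hypothetical, not from this PR):
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   
   public class HnsConfigSketch {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
       // Global default applied to every account.
       conf.setBoolean("fs.azure.account.hns.enabled", false);
       // Account-specific override, keyed by a (hypothetical) account host name.
       conf.setBoolean("fs.azure.account.hns.enabled.myaccount.dfs.core.windows.net", true);
   
       System.out.println(conf.getBoolean("fs.azure.account.hns.enabled", false));
       System.out.println(conf.getBoolean(
           "fs.azure.account.hns.enabled.myaccount.dfs.core.windows.net", false));
     }
   }
   ```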
   
   ### How was this patch tested?
   Existing tests modified and new tests added.




> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config only for some accounts or set it 
> differently for different accounts.
> The ABFS driver today does not allow this to be set as an account-specific config.
> This Jira adds that support.






[jira] [Commented] (HADOOP-19290) Operating on / in ChecksumFileSystem throws NPE

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885282#comment-17885282
 ] 

ASF GitHub Bot commented on HADOOP-19290:
-

ayushtkn opened a new pull request, #7074:
URL: https://github.com/apache/hadoop/pull/7074

   ### Description of PR
   
   Avoid fetching the checksum file for /. / is a directory, and we don't have 
checksum files for directories anyway; the parent of root is ``null``, which 
leads to an ``NPE`` that fails the entire call.
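   
   A minimal sketch of the guard this describes (illustrative only; the hypothetical 
helper below is not the actual patch, but it shows why "/" has to be skipped before 
building the checksum sibling path):
   
   ```java
   import org.apache.hadoop.fs.Path;
   
   public final class ChecksumPathSketch {
     static Path checksumFileOrNull(Path file) {
       Path parent = file.getParent();
       if (parent == null) {
         // "/" (or any root): it is a directory, directories have no checksum
         // file, and building a Path with a null parent is what triggered the NPE.
         return null;
       }
       return new Path(parent, "." + file.getName() + ".crc");
     }
   
     public static void main(String[] args) {
       System.out.println(checksumFileOrNull(new Path("/tmp/data.txt"))); // /tmp/.data.txt.crc
       System.out.println(checksumFileOrNull(new Path("/")));             // null, no NPE
     }
   }
   ```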
   
   ### How was this patch tested?
   
   UT
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Operating on / in ChecksumFileSystem throws NPE
> ---
>
> Key: HADOOP-19290
> URL: https://issues.apache.org/jira/browse/HADOOP-19290
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>
> Operating on / on ChecksumFileSystem throws NPE
> {noformat}
> java.lang.NullPointerException
>   at org.apache.hadoop.fs.Path.<init>(Path.java:151)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:130)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.getChecksumFile(ChecksumFileSystem.java:121)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:774)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.setReplication(ChecksumFileSystem.java:884)
> {noformat}
> Internally I observed it for SetPermission, but on my Mac LocalFs doesn't let 
> me setPermission on "/", so I reproduced it via SetReplication, which goes 
> through the same code path.






[jira] [Updated] (HADOOP-19290) Operating on / in ChecksumFileSystem throws NPE

2024-09-27 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19290:

Labels: pull-request-available  (was: )

> Operating on / in ChecksumFileSystem throws NPE
> ---
>
> Key: HADOOP-19290
> URL: https://issues.apache.org/jira/browse/HADOOP-19290
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Operating on / on ChecksumFileSystem throws NPE
> {noformat}
> java.lang.NullPointerException
>   at org.apache.hadoop.fs.Path.<init>(Path.java:151)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:130)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.getChecksumFile(ChecksumFileSystem.java:121)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem$FsOperation.run(ChecksumFileSystem.java:774)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.setReplication(ChecksumFileSystem.java:884)
> {noformat}
> Internally I observed it for SetPermission, but on my Mac LocalFs doesn't let 
> me setPermission on "/", so I reproduced it via SetReplication, which goes 
> through the same code path.






[jira] [Commented] (HADOOP-19287) Fix KMSTokenRenewer#handleKind dependency on BouncyCastleProvider class

2024-09-27 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885252#comment-17885252
 ] 

ASF GitHub Bot commented on HADOOP-19287:
-

Hexiaoqiao commented on PR #7068:
URL: https://github.com/apache/hadoop/pull/7068#issuecomment-2378592175

   Sorry, I didn't fully understand the issue after reviewing the description 
and patch.
   Quoting HADOOP-19152 above; cc @szetszwo, would you mind giving this a 
check? Thanks.




> Fix KMSTokenRenewer#handleKind dependency on BouncyCastleProvider class
> ---
>
> Key: HADOOP-19287
> URL: https://issues.apache.org/jira/browse/HADOOP-19287
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: dzcxzl
>Priority: Major
>  Labels: pull-request-available
>
>  
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/bouncycastle/jce/provider/BouncyCastleProvider
>     at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer.handleKind(KMSClientProvider.java:180)
>     at org.apache.hadoop.security.token.Token.getRenewer(Token.java:467)
>     at org.apache.hadoop.security.token.Token.renew(Token.java:500)
>     at 
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$3(HadoopFSDelegationTokenProvider.scala:147)
>     at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.scala:17)
>     at scala.util.Try$.apply(Try.scala:217)
>     at 
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$2(HadoopFSDelegationTokenProvider.scala:146)
>  {code}






[jira] [Commented] (HADOOP-19256) S3A: Support S3 Conditional Writes

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885213#comment-17885213
 ] 

ASF GitHub Bot commented on HADOOP-19256:
-

diljotgrewal commented on PR #7011:
URL: https://github.com/apache/hadoop/pull/7011#issuecomment-2378411969

   Sorry about the force push. I did one last force push to undo the previous 
one, splitting the changes back into the original commit, and added a separate 
commit addressing the feedback. I'll stick to merge commits from here on out. 
Hopefully splitting them back will help with the review.
   
   Thanks!




> S3A: Support S3 Conditional Writes
> --
>
> Key: HADOOP-19256
> URL: https://issues.apache.org/jira/browse/HADOOP-19256
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>
> S3 Conditional Write (Put-if-absent) capability is now generally available - 
> [https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3-conditional-writes/]
>  
> S3A should allow passing in this put-if-absent header to prevent overwriting 
> of files.
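
As a rough sketch of what a put-if-absent request can look like on the SDK side 
(assuming a recent AWS SDK for Java v2 that exposes ifNoneMatch on PutObjectRequest; 
the bucket and key below are made up, and this is not the S3A implementation):

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;

public class PutIfAbsentSketch {
  public static void main(String[] args) {
    try (S3Client s3 = S3Client.create()) {
      PutObjectRequest request = PutObjectRequest.builder()
          .bucket("example-bucket")   // hypothetical bucket
          .key("data/part-0000")      // hypothetical key
          .ifNoneMatch("*")           // only succeed if the key does not already exist
          .build();
      s3.putObject(request, RequestBody.fromString("hello"));
    } catch (S3Exception e) {
      // HTTP 412 (Precondition Failed) means the object already existed.
      System.err.println("Conditional put rejected: " + e.statusCode());
    }
  }
}
```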






[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885209#comment-17885209
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

surendralilhore merged PR #7062:
URL: https://github.com/apache/hadoop/pull/7062




> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config only for some accounts or set it 
> differently for different accounts.
> The ABFS driver today does not allow this to be set as an account-specific config.
> This Jira adds that support.






[jira] [Commented] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885064#comment-17885064
 ] 

ASF GitHub Bot commented on HADOOP-19281:
-

szetszwo commented on PR #7071:
URL: https://github.com/apache/hadoop/pull/7071#issuecomment-2377308684

   @steveloughran , what do you think?




> MetricsSystemImpl should not print INFO message in CLI
> --
>
> Key: HADOOP-19281
> URL: https://issues.apache.org/jira/browse/HADOOP-19281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Tsz-wo Sze
>Assignee: Sarveksha Yeshavantha Raju
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: 7071_review.patch
>
>
> Below is an example:
> {code}
> # hadoop fs  -Dfs.s3a.bucket.probe=0 
> -Dfs.s3a.change.detection.version.required=false 
> -Dfs.s3a.change.detection.mode=none -Dfs.s3a.endpoint=http://some.site:9878 
> -Dfs.s3a.access.keysome=systest -Dfs.s3a.secret.key=8...1 
> -Dfs.s3a.endpoint=http://some.site:9878  -Dfs.s3a.path.style.access=true 
> -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem   -ls  -R s3a://bucket1/
> 24/09/17 10:47:48 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> started
> 24/09/17 10:47:48 WARN impl.ConfigurationHelper: Option 
> fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 
> ms instead
> 24/09/17 10:47:50 WARN s3.S3TransferManager: The provided S3AsyncClient is an 
> instance of MultipartS3AsyncClient, and thus multipart download feature is 
> not enabled. To benefit from all features, consider using 
> S3AsyncClient.crtBuilder().build() instead.
> drwxrwxrwx   - root root  0 2024-09-17 10:47 s3a://bucket1/dir1
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
> metrics system...
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> stopped.
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> shutdown complete. 
> {code}






[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885028#comment-17885028
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

hadoop-yetus commented on PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#issuecomment-2377040311

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 11s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  82m 15s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7062 |
   | JIRA Issue | HADOOP-19284 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 11e8374e2147 5.15.0-116-generic #126-Ubuntu SMP Mon Jul 1 
10:14:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f3d371f7cf916e362e8b370a3754b77000af5f22 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/5/testReport/ |
   | Max. process+thread count | 551 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> ABFS: Allow "fs.azure.account.hns.enabled" 

[jira] [Commented] (HADOOP-19289) upgrade to protobuf-java 3.25.5 due to CVE-2024-7254

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885021#comment-17885021
 ] 

ASF GitHub Bot commented on HADOOP-19289:
-

pjfanning commented on PR #7072:
URL: https://github.com/apache/hadoop/pull/7072#issuecomment-2376953707

   > Let's do a new release of the third party lib before this
   
   With hadoop-thirdparty, can we call the next release 1.3.1? The upgrade from 
1.3.0 isn't big. The current version is 1.4.0-SNAPSHOT.




> upgrade to protobuf-java 3.25.5 due to CVE-2024-7254
> 
>
> Key: HADOOP-19289
> URL: https://issues.apache.org/jira/browse/HADOOP-19289
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> https://github.com/advisories/GHSA-735f-pc8j-v9w8
> Presumably protobuf-encoded messages in Hadoop come from trusted sources, but 
> it is still useful to upgrade the jar.






[jira] [Commented] (HADOOP-19289) upgrade to protobuf-java 3.25.5 due to CVE-2024-7254

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885018#comment-17885018
 ] 

ASF GitHub Bot commented on HADOOP-19289:
-

steveloughran commented on PR #7072:
URL: https://github.com/apache/hadoop/pull/7072#issuecomment-2376917859

   Let's do a new release of the third party lib before this




> upgrade to protobuf-java 3.25.5 due to CVE-2024-7254
> 
>
> Key: HADOOP-19289
> URL: https://issues.apache.org/jira/browse/HADOOP-19289
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> https://github.com/advisories/GHSA-735f-pc8j-v9w8
> Presumably protobuf-encoded messages in Hadoop come from trusted sources, but 
> it is still useful to upgrade the jar.






[jira] [Commented] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885019#comment-17885019
 ] 

ASF GitHub Bot commented on HADOOP-19281:
-

sarvekshayr commented on PR #7071:
URL: https://github.com/apache/hadoop/pull/7071#issuecomment-2376920101

   > This checkstyle warning seems a false positive. The current indentation 
level 8 is better than the suggested level 6. No?
   
   Yes, indentation level 8 is correct, and this change was not introduced in 
this PR.
   Retaining it as is.




> MetricsSystemImpl should not print INFO message in CLI
> --
>
> Key: HADOOP-19281
> URL: https://issues.apache.org/jira/browse/HADOOP-19281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Tsz-wo Sze
>Assignee: Sarveksha Yeshavantha Raju
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: 7071_review.patch
>
>
> Below is an example:
> {code}
> # hadoop fs  -Dfs.s3a.bucket.probe=0 
> -Dfs.s3a.change.detection.version.required=false 
> -Dfs.s3a.change.detection.mode=none -Dfs.s3a.endpoint=http://some.site:9878 
> -Dfs.s3a.access.keysome=systest -Dfs.s3a.secret.key=8...1 
> -Dfs.s3a.endpoint=http://some.site:9878  -Dfs.s3a.path.style.access=true 
> -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem   -ls  -R s3a://bucket1/
> 24/09/17 10:47:48 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> started
> 24/09/17 10:47:48 WARN impl.ConfigurationHelper: Option 
> fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 
> ms instead
> 24/09/17 10:47:50 WARN s3.S3TransferManager: The provided S3AsyncClient is an 
> instance of MultipartS3AsyncClient, and thus multipart download feature is 
> not enabled. To benefit from all features, consider using 
> S3AsyncClient.crtBuilder().build() instead.
> drwxrwxrwx   - root root  0 2024-09-17 10:47 s3a://bucket1/dir1
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
> metrics system...
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> stopped.
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> shutdown complete. 
> {code}






[jira] [Commented] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885015#comment-17885015
 ] 

ASF GitHub Bot commented on HADOOP-19281:
-

szetszwo commented on PR #7071:
URL: https://github.com/apache/hadoop/pull/7071#issuecomment-2376895254

   > 
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java:172:
LOG.debug("{} metrics system started in standby mode", prefix);: 
'block' child has incorrect indentation level 8, expected level should be 6. 
[Indentation]
   
   This checkstyle warning seems a false positive. The current indentation 
level 8 is better than the suggested level 6. No?
   
   Anyway, this is an existing problem, not one introduced by this PR. If it 
needs fixing, let's fix it later.
   ```java
   switch (initMode()) {
 case NORMAL:
   try { start(); }
   catch (MetricsConfigException e) {
 // Configuration errors (e.g., typos) should not be fatal.
 // We can always start the metrics system later via JMX.
 LOG.warn("Metrics system not started: "+ e.getMessage());
 LOG.debug("Stacktrace: ", e);
   }
   break;
 case STANDBY:
   //line 172 in this pr
   LOG.debug("{} metrics system started in standby mode", prefix);
   }
   ```
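
For reference, a whitespace-only sketch of the same fragment at the "expected level 6"
that checkstyle asks for, i.e. with the statements aligned with their case labels;
comparing it with the block above is the quickest way to judge which indentation
reads better:

```java
switch (initMode()) {
  case NORMAL:
  try { start(); }
  catch (MetricsConfigException e) {
    // Configuration errors (e.g., typos) should not be fatal.
    // We can always start the metrics system later via JMX.
    LOG.warn("Metrics system not started: "+ e.getMessage());
    LOG.debug("Stacktrace: ", e);
  }
  break;
  case STANDBY:
  // line 172 in this pr, now aligned with its case label
  LOG.debug("{} metrics system started in standby mode", prefix);
}
```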




> MetricsSystemImpl should not print INFO message in CLI
> --
>
> Key: HADOOP-19281
> URL: https://issues.apache.org/jira/browse/HADOOP-19281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Tsz-wo Sze
>Assignee: Sarveksha Yeshavantha Raju
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: 7071_review.patch
>
>
> Below is an example:
> {code}
> # hadoop fs  -Dfs.s3a.bucket.probe=0 
> -Dfs.s3a.change.detection.version.required=false 
> -Dfs.s3a.change.detection.mode=none -Dfs.s3a.endpoint=http://some.site:9878 
> -Dfs.s3a.access.keysome=systest -Dfs.s3a.secret.key=8...1 
> -Dfs.s3a.endpoint=http://some.site:9878  -Dfs.s3a.path.style.access=true 
> -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem   -ls  -R s3a://bucket1/
> 24/09/17 10:47:48 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> started
> 24/09/17 10:47:48 WARN impl.ConfigurationHelper: Option 
> fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 
> ms instead
> 24/09/17 10:47:50 WARN s3.S3TransferManager: The provided S3AsyncClient is an 
> instance of MultipartS3AsyncClient, and thus multipart download feature is 
> not enabled. To benefit from all features, consider using 
> S3AsyncClient.crtBuilder().build() instead.
> drwxrwxrwx   - root root  0 2024-09-17 10:47 s3a://bucket1/dir1
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
> metrics system...
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> stopped.
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> shutdown complete. 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19256) S3A: Support S3 Conditional Writes

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885003#comment-17885003
 ] 

ASF GitHub Bot commented on HADOOP-19256:
-

steveloughran commented on PR #7011:
URL: https://github.com/apache/hadoop/pull/7011#issuecomment-2376887483

   @diljotgrewal except in the special case of a "massive merge conflict 
requiring a rebase", can you please just use merge commits once we are in the 
review phase? GitHub lets me review the changes that have happened since my 
last review, but it cannot do this with forced pushes. That makes my life 
harder and reduces how often I actually look at patches. 
   
   I will review later.




> S3A: Support S3 Conditional Writes
> --
>
> Key: HADOOP-19256
> URL: https://issues.apache.org/jira/browse/HADOOP-19256
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>
> S3 Conditional Write (Put-if-absent) capability is now generally available - 
> [https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3-conditional-writes/]
>  
> S3A should allow passing in this put-if-absent header to prevent overwriting 
> of files. 
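
As a rough illustration of what a put-if-absent upload looks like at the SDK level
(a sketch only: it assumes an AWS SDK for Java v2 release that exposes ifNoneMatch
on PutObjectRequest.Builder, it is not the S3A implementation, and the bucket and
key names are made up):

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;
import software.amazon.awssdk.services.s3.model.S3Exception;

public class PutIfAbsentSketch {
  public static void main(String[] args) {
    try (S3Client s3 = S3Client.create()) {
      PutObjectRequest request = PutObjectRequest.builder()
          .bucket("bucket1")        // illustrative bucket name
          .key("dir1/file.txt")     // illustrative key
          .ifNoneMatch("*")         // only succeed if the key does not exist yet
          .build();
      s3.putObject(request, RequestBody.fromString("data"));
    } catch (S3Exception e) {
      // A 412 Precondition Failed response means the object already exists.
      System.err.println("Conditional put rejected: "
          + e.awsErrorDetails().errorMessage());
    }
  }
}
```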



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885001#comment-17885001
 ] 

ASF GitHub Bot commented on HADOOP-19281:
-

szetszwo commented on PR #7071:
URL: https://github.com/apache/hadoop/pull/7071#issuecomment-2376878605

   > -1 ❌ | test4tests | 0m 0s |   | The patch doesn't appear to include any 
new or modified tests. ...
   
   Since this is just changing log levels/messages, no new tests are needed.
   
   
   
   




> MetricsSystemImpl should not print INFO message in CLI
> --
>
> Key: HADOOP-19281
> URL: https://issues.apache.org/jira/browse/HADOOP-19281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Tsz-wo Sze
>Assignee: Sarveksha Yeshavantha Raju
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: 7071_review.patch
>
>
> Below is an example:
> {code}
> # hadoop fs  -Dfs.s3a.bucket.probe=0 
> -Dfs.s3a.change.detection.version.required=false 
> -Dfs.s3a.change.detection.mode=none -Dfs.s3a.endpoint=http://some.site:9878 
> -Dfs.s3a.access.keysome=systest -Dfs.s3a.secret.key=8...1 
> -Dfs.s3a.endpoint=http://some.site:9878  -Dfs.s3a.path.style.access=true 
> -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem   -ls  -R s3a://bucket1/
> 24/09/17 10:47:48 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> started
> 24/09/17 10:47:48 WARN impl.ConfigurationHelper: Option 
> fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 
> ms instead
> 24/09/17 10:47:50 WARN s3.S3TransferManager: The provided S3AsyncClient is an 
> instance of MultipartS3AsyncClient, and thus multipart download feature is 
> not enabled. To benefit from all features, consider using 
> S3AsyncClient.crtBuilder().build() instead.
> drwxrwxrwx   - root root  0 2024-09-17 10:47 s3a://bucket1/dir1
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
> metrics system...
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> stopped.
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> shutdown complete. 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17885000#comment-17885000
 ] 

ASF GitHub Bot commented on HADOOP-19281:
-

steveloughran commented on PR #7071:
URL: https://github.com/apache/hadoop/pull/7071#issuecomment-2376874295

   minor checkstyle
   ```
   
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java:172:
LOG.debug("{} metrics system started in standby mode", prefix);: 
'block' child has incorrect indentation level 8, expected level should be 6. 
[Indentation]
   ```
   




> MetricsSystemImpl should not print INFO message in CLI
> --
>
> Key: HADOOP-19281
> URL: https://issues.apache.org/jira/browse/HADOOP-19281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Tsz-wo Sze
>Assignee: Sarveksha Yeshavantha Raju
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: 7071_review.patch
>
>
> Below is an example:
> {code}
> # hadoop fs  -Dfs.s3a.bucket.probe=0 
> -Dfs.s3a.change.detection.version.required=false 
> -Dfs.s3a.change.detection.mode=none -Dfs.s3a.endpoint=http://some.site:9878 
> -Dfs.s3a.access.keysome=systest -Dfs.s3a.secret.key=8...1 
> -Dfs.s3a.endpoint=http://some.site:9878  -Dfs.s3a.path.style.access=true 
> -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem   -ls  -R s3a://bucket1/
> 24/09/17 10:47:48 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> started
> 24/09/17 10:47:48 WARN impl.ConfigurationHelper: Option 
> fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 
> ms instead
> 24/09/17 10:47:50 WARN s3.S3TransferManager: The provided S3AsyncClient is an 
> instance of MultipartS3AsyncClient, and thus multipart download feature is 
> not enabled. To benefit from all features, consider using 
> S3AsyncClient.crtBuilder().build() instead.
> drwxrwxrwx   - root root  0 2024-09-17 10:47 s3a://bucket1/dir1
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
> metrics system...
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> stopped.
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> shutdown complete. 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884986#comment-17884986
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

hadoop-yetus commented on PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#issuecomment-2376806463

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/4/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 14s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 2 unchanged - 0 
fixed = 3 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 11s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  82m 24s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7062 |
   | JIRA Issue | HADOOP-19284 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 464ae3e5451e 5.15.0-116-generic #126-Ubuntu SMP Mon Jul 1 
10:14:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cc45f99056e08952fd10aba12461eb3ac472364c |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/4/testReport/ |
   | Max. process

[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884969#comment-17884969
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

anujmodi2021 commented on PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#issuecomment-2376732014

   Updated test results:
   
   
   [ERROR] 
testBackoffRetryMetrics(org.apache.hadoop.fs.azurebfs.services.TestAbfsRestOperation)
  Time elapsed: 1.268 s  <<< ERROR!
   [ERROR] 
testReadFooterMetrics(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics) 
 Time elapsed: 1.153 s  <<< ERROR!
   [ERROR] 
testMetricWithIdlePeriod(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics)
  Time elapsed: 1.135 s  <<< ERROR!
   [ERROR] 
testReadFooterMetricsWithParquetAndNonParquet(org.apache.hadoop.fs.azurebfs.ITestAbfsReadFooterMetrics)
  Time elapsed: 1.132 s  <<< ERROR!
   [ERROR] 
testTwoWritersCreateAppendWithInfiniteLeaseEnabled(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemLease)
  Time elapsed: 91.757 s  <<< ERROR!
   




> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config either only for some accounts or set it 
> differently for different account.
> ABFS driver today does not allow this to be set as account specific config.
> This Jira is to allow that as a new support.
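
To make the requirement concrete, a hedged sketch of the two kinds of settings involved
(the account names are made up, and the account-specific key format assumes the usual
ABFS convention of suffixing the base key with the account host name):

```java
import org.apache.hadoop.conf.Configuration;

public class HnsConfigSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Account-agnostic default: applies to any account without a specific override.
    conf.set("fs.azure.account.hns.enabled", "true");
    // Account-specific override for one storage account.
    conf.set("fs.azure.account.hns.enabled.account1.dfs.core.windows.net", "false");
    // With the change discussed here, the account-specific value is expected to
    // take precedence for account1, while other accounts fall back to the default.
    System.out.println(
        conf.get("fs.azure.account.hns.enabled.account1.dfs.core.windows.net"));
  }
}
```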



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884961#comment-17884961
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

anujmodi2021 commented on PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#issuecomment-2376630524

   Thank you for the reviews @surendralilhore @bhattmanish98 @anmolanmol1234 
   I have addressed the comments here.
   
   I have also refactored the added test to reduce code redundancy and improve 
scenario coverage. Hope this looks better now.




> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config either only for some accounts or set it 
> differently for different account.
> ABFS driver today does not allow this to be set as account specific config.
> This Jira is to allow that as a new support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884952#comment-17884952
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

anujmodi2021 commented on code in PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#discussion_r1776811993


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -451,7 +451,9 @@ public AbfsConfiguration(final Configuration rawConfig, 
String accountName)
   }
 
   public Trilean getIsNamespaceEnabledAccount() {
-return Trilean.getTrilean(isNamespaceEnabledAccount);
+String isNamespaceEnabledAccountString

Review Comment:
   Going with @bhattmanish98's suggestion below to get rid of the local 
variable itself.





> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config either only for some accounts or set it 
> differently for different account.
> ABFS driver today does not allow this to be set as account specific config.
> This Jira is to allow that as a new support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884951#comment-17884951
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

anujmodi2021 commented on code in PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#discussion_r1776811189


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -450,8 +450,17 @@ public AbfsConfiguration(final Configuration rawConfig, 
String accountName)
 this(rawConfig, accountName, AbfsServiceType.DFS);
   }
 
+  /**
+   * Returns the account type as per the user configuration. Gets the account
+   * specific value if it exists, then looks for an account agnostic value.
+   * If not configured driver makes additional getAcl call to determine
+   * the account type during file system initialization.
+   * @return TRUE/FALSE value if configured, UNKNOWN if not configured.
+   */
   public Trilean getIsNamespaceEnabledAccount() {
-return Trilean.getTrilean(isNamespaceEnabledAccount);
+String isNamespaceEnabledAccountString
+= getString(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
isNamespaceEnabledAccount);
+return Trilean.getTrilean(isNamespaceEnabledAccountString);

Review Comment:
   Yes, that sounds better





> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config either only for some accounts or set it 
> differently for different account.
> ABFS driver today does not allow this to be set as account specific config.
> This Jira is to allow that as a new support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884934#comment-17884934
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

bhattmanish98 commented on code in PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#discussion_r1776701095


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -450,8 +450,17 @@ public AbfsConfiguration(final Configuration rawConfig, 
String accountName)
 this(rawConfig, accountName, AbfsServiceType.DFS);
   }
 
+  /**
+   * Returns the account type as per the user configuration. Gets the account
+   * specific value if it exists, then looks for an account agnostic value.
+   * If not configured driver makes additional getAcl call to determine
+   * the account type during file system initialization.
+   * @return TRUE/FALSE value if configured, UNKNOWN if not configured.
+   */
   public Trilean getIsNamespaceEnabledAccount() {
-return Trilean.getTrilean(isNamespaceEnabledAccount);
+String isNamespaceEnabledAccountString
+= getString(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
isNamespaceEnabledAccount);
+return Trilean.getTrilean(isNamespaceEnabledAccountString);

Review Comment:
   Since the variable isNamespaceEnabledAccountString is used in only one place, 
wouldn't it be better to make it an in-place call instead of creating a new 
variable?
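
For reference, a minimal sketch of the inlined form being suggested here, based only
on the getString call already shown in the diff above (the local variable simply
disappears):

```java
  public Trilean getIsNamespaceEnabledAccount() {
    // Account-specific value wins when present; otherwise the account-agnostic
    // value captured in isNamespaceEnabledAccount is used as the default.
    return Trilean.getTrilean(
        getString(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, isNamespaceEnabledAccount));
  }
```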





> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config either only for some accounts or set it 
> differently for different account.
> ABFS driver today does not allow this to be set as account specific config.
> This Jira is to allow that as a new support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884916#comment-17884916
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

anujmodi2021 commented on code in PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#discussion_r1776591303


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -451,7 +451,9 @@ public AbfsConfiguration(final Configuration rawConfig, 
String accountName)
   }
 
   public Trilean getIsNamespaceEnabledAccount() {
-return Trilean.getTrilean(isNamespaceEnabledAccount);
+String isNamespaceEnabledAccountString

Review Comment:
   Ohh okay...
   Will do that.
   Thanks for clarifying.





> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config either only for some accounts or set it 
> differently for different account.
> ABFS driver today does not allow this to be set as account specific config.
> This Jira is to allow that as a new support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884918#comment-17884918
 ] 

ASF GitHub Bot commented on HADOOP-19281:
-

hadoop-yetus commented on PR #7071:
URL: https://github.com/apache/hadoop/pull/7071#issuecomment-2376263250

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   8m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   8m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   8m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 39s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7071/2/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 1 new + 39 
unchanged - 1 fixed = 40 total (was 40)  |
   | +1 :green_heart: |  mvnsite  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 18s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 137m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7071/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7071 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 1be507f4b46f 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 296decfbcfe73a824388e225e892ba819bd4882b |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7071/2/testReport/ |
   | Max. process+thread count | 1263 (vs. ulimit of 5500) |
   | modules | C: ha

[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884914#comment-17884914
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

surendralilhore commented on code in PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#discussion_r1776587888


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -451,7 +451,9 @@ public AbfsConfiguration(final Configuration rawConfig, 
String accountName)
   }
 
   public Trilean getIsNamespaceEnabledAccount() {
-return Trilean.getTrilean(isNamespaceEnabledAccount);
+String isNamespaceEnabledAccountString

Review Comment:
   "I am just asking you to rename the newly introduced local variable 
**isNamespaceEnabledAccountString** to **isNamespaceEnabled**, not the method 
name.





> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config either only for some accounts or set it 
> differently for different account.
> ABFS driver today does not allow this to be set as account specific config.
> This Jira is to allow that as a new support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19256) S3A: Support S3 Conditional Writes

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884903#comment-17884903
 ] 

ASF GitHub Bot commented on HADOOP-19256:
-

hadoop-yetus commented on PR #7011:
URL: https://github.com/apache/hadoop/pull/7011#issuecomment-2376159506

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m 23s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/10/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 64 new + 1 unchanged - 0 fixed 
= 65 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 29s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/10/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 with 
JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0)  |
   | -1 :x: |  javadoc  |   0m 26s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/10/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05 
with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  spotbugs  |   1m 11s | 
[/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/10/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html)
 |  hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  35m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 46s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 142m 26s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Dead store to finalizedRequest in 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject()  At 
S3ABlockOutputStream.java:org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject()
  At S3ABloc

[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884898#comment-17884898
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

anujmodi2021 commented on code in PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#discussion_r1776526750


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -451,7 +451,9 @@ public AbfsConfiguration(final Configuration rawConfig, 
String accountName)
   }
 
   public Trilean getIsNamespaceEnabledAccount() {
-return Trilean.getTrilean(isNamespaceEnabledAccount);
+String isNamespaceEnabledAccountString

Review Comment:
   Hi, it seems fine to keep this name as well, since it clarifies that we are 
talking about the account's property, not a filesystem's property.
   
   Besides, this has been there since day 0, and changing it would lead to a lot 
of code changes everywhere the getters and setters are used. We have followed 
the same naming structure for this throughout the driver code.
   
   Would love to know your thoughts here.





> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config either only for some accounts or set it 
> differently for different account.
> ABFS driver today does not allow this to be set as account specific config.
> This Jira is to allow that as a new support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884896#comment-17884896
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

anujmodi2021 commented on code in PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#discussion_r1776521467


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java:
##
@@ -271,4 +275,52 @@ private void 
ensureGetAclDetermineHnsStatusAccuratelyInternal(int statusCode,
 Mockito.verify(mockClient, times(1))
 .getAclStatus(anyString(), any(TracingContext.class));
   }
+
+  @Test
+  public void testAccountSpecificConfig() throws Exception {
+Configuration rawConfig = new Configuration();
+rawConfig.addResource(TEST_CONFIGURATION_FILE_NAME);
+rawConfig.unset(FS_AZURE_ACCOUNT_IS_HNS_ENABLED);
+rawConfig.unset(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED,
+this.getAccountName()));
+String accountName1 = "account1.dfs.core.windows.net";
+String accountName2 = "account2.dfs.core.windows.net";
+String accountName3 = "account3.dfs.core.windows.net";
+String defaultUri1 = this.getTestUrl().replace(this.getAccountName(), 
accountName1);
+String defaultUri2 = this.getTestUrl().replace(this.getAccountName(), 
accountName2);
+String defaultUri3 = this.getTestUrl().replace(this.getAccountName(), 
accountName3);
+
+// Set both account specific and account agnostic config for account 1
+rawConfig.set(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName1), FALSE_STR);
+rawConfig.set(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, TRUE_STR);
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri1);
+AzureBlobFileSystem fs1 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+// Assert that account specific config takes precedence
+Assertions.assertThat(getIsNamespaceEnabled(fs1)).describedAs(
+"getIsNamespaceEnabled should return true when the "
++ "account specific config is set as true").isFalse();
+
+// Set only the account specific config for account 2
+rawConfig.set(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName2), FALSE_STR);
+rawConfig.unset(FS_AZURE_ACCOUNT_IS_HNS_ENABLED);
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri2);
+AzureBlobFileSystem fs2 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+// Assert that account specific config is enough.
+Assertions.assertThat(getIsNamespaceEnabled(fs2)).describedAs(
+"getIsNamespaceEnabled should return true when the "
++ "account specific config is set as true").isFalse();
+
+// Set only account agnostic config for account 3
+rawConfig.set(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, FALSE_STR);
+rawConfig.unset(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName3));
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri3);
+AzureBlobFileSystem fs3 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+// Assert that account agnostic config is enough.
+Assertions.assertThat(getIsNamespaceEnabled(fs3)).describedAs(
+"getIsNamespaceEnabled should return true when the "
++ "account specific config is not set").isFalse();
+fs1.close();
+fs2.close();
+fs3.close();

Review Comment:
   We already have a similar test added: 
https://github.com/apache/hadoop/blob/49a495803a9451850b8982317e277b605c785587/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java#L186
   
   But it makes sense to add this condition here as well. Will add.





> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config either only for some accounts or set it 
> differently for different account.
> ABFS driver today does not allow this to be set as account specific config.
> This Jira is to allow that as a new support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884895#comment-17884895
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

anujmodi2021 commented on code in PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#discussion_r1776518693


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java:
##
@@ -271,4 +275,52 @@ private void 
ensureGetAclDetermineHnsStatusAccuratelyInternal(int statusCode,
 Mockito.verify(mockClient, times(1))
 .getAclStatus(anyString(), any(TracingContext.class));
   }
+
+  @Test
+  public void testAccountSpecificConfig() throws Exception {
+Configuration rawConfig = new Configuration();
+rawConfig.addResource(TEST_CONFIGURATION_FILE_NAME);
+rawConfig.unset(FS_AZURE_ACCOUNT_IS_HNS_ENABLED);
+rawConfig.unset(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED,
+this.getAccountName()));
+String accountName1 = "account1.dfs.core.windows.net";
+String accountName2 = "account2.dfs.core.windows.net";
+String accountName3 = "account3.dfs.core.windows.net";
+String defaultUri1 = this.getTestUrl().replace(this.getAccountName(), 
accountName1);
+String defaultUri2 = this.getTestUrl().replace(this.getAccountName(), 
accountName2);
+String defaultUri3 = this.getTestUrl().replace(this.getAccountName(), 
accountName3);
+
+// Set both account specific and account agnostic config for account 1
+rawConfig.set(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName1), FALSE_STR);
+rawConfig.set(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, TRUE_STR);
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri1);
+AzureBlobFileSystem fs1 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);

Review Comment:
   Great suggestion, will take this.





> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config either only for some accounts or set it 
> differently for different account.
> ABFS driver today does not allow this to be set as account specific config.
> This Jira is to allow that as a new support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884894#comment-17884894
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

anujmodi2021 commented on code in PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#discussion_r1776518345


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemE2E.java:
##
@@ -259,6 +260,9 @@ public void testHttpReadTimeout() throws Exception {
 
   public void testHttpTimeouts(int connectionTimeoutMs, int readTimeoutMs)
   throws Exception {
+// This is to make sure File System creation goes through before network 
calls start failing.
+assumeValidTestConfigPresent(this.getRawConfiguration(), 
FS_AZURE_ACCOUNT_IS_HNS_ENABLED);

Review Comment:
   Yes, without this change the test was failing during file system creation 
itself when someone skips adding this config.
   Without the config, the FS needs to make a getAcl() call, and that call was 
failing with the timeout this test is meant to exercise. With the config set, 
the getAcl() call is not needed and FS creation succeeds; after that we can 
assert on the timeout failures.
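
For readers unfamiliar with the helper, a config-presence assumption boils down to
roughly the following (a hedged sketch, not the real assumeValidTestConfigPresent
implementation from the ABFS test support code):

```java
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.conf.Configuration;

public final class ConfigAssumptionSketch {
  private ConfigAssumptionSketch() {
  }

  /** Skip the calling test when the given key has no value configured. */
  public static void assumeConfigPresent(Configuration conf, String key) {
    assumeTrue("Skipping test: " + key + " is not configured",
        conf.get(key) != null);
  }
}
```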





> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config either only for some accounts or set it 
> differently for different account.
> ABFS driver today does not allow this to be set as account specific config.
> This Jira is to allow that as a new support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19256) S3A: Support S3 Conditional Writes

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884893#comment-17884893
 ] 

ASF GitHub Bot commented on HADOOP-19256:
-

hadoop-yetus commented on PR #7011:
URL: https://github.com/apache/hadoop/pull/7011#issuecomment-2376123688

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 16s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 55s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 24s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/5/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 64 new + 1 unchanged - 0 fixed 
= 65 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 32s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/5/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 with 
JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0)  |
   | -1 :x: |  javadoc  |   0m 30s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/5/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05 
with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  spotbugs  |   1m 28s | 
[/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/5/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html)
 |  hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  40m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  0s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 163m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Dead store to finalizedRequest in 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject()  At 
S3ABlockOutputStream.java:org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject()
  At S3ABlockOut

[jira] [Commented] (HADOOP-19256) S3A: Support S3 Conditional Writes

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884892#comment-17884892
 ] 

ASF GitHub Bot commented on HADOOP-19256:
-

hadoop-yetus commented on PR #7011:
URL: https://github.com/apache/hadoop/pull/7011#issuecomment-2376123380

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   4m 10s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 25s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/6/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 64 new + 1 unchanged - 0 fixed 
= 65 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 32s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/6/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 with 
JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0)  |
   | -1 :x: |  javadoc  |   0m 26s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/6/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05 
with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  spotbugs  |   1m 28s | 
[/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/6/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html)
 |  hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  40m 46s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  4s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 149m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Dead store to finalizedRequest in 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject()  At 
S3ABlockOutputStream.java:org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject()
  At S3ABlockOut

[jira] [Commented] (HADOOP-19256) S3A: Support S3 Conditional Writes

2024-09-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884889#comment-17884889
 ] 

ASF GitHub Bot commented on HADOOP-19256:
-

hadoop-yetus commented on PR #7011:
URL: https://github.com/apache/hadoop/pull/7011#issuecomment-2376107686

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  51m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 20s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 68 new + 1 unchanged - 0 fixed 
= 69 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 29s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/4/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 with 
JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0)  |
   | -1 :x: |  javadoc  |   0m 26s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/4/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05 
with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  spotbugs  |   1m 11s | 
[/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/4/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html)
 |  hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  39m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 42s |  |  hadoop-aws in the patch passed. 
 |
   | -1 :x: |  asflicense  |   0m 37s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/4/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 162m 32s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Dead store to finalizedRequest in 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putO

[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884879#comment-17884879
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

hadoop-yetus commented on PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#issuecomment-2376057241

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 18s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 14s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 58s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  81m 49s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7062 |
   | JIRA Issue | HADOOP-19284 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b5ab8373b644 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 52f928d751b534ad015c549ea9d765eeab2ee9ab |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/3/testReport/ |
   | Max. process+thread count | 555 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> ABFS: Allow "fs.azure.account.hns.enabled" 

[jira] [Commented] (HADOOP-19256) S3A: Support S3 Conditional Writes

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884877#comment-17884877
 ] 

ASF GitHub Bot commented on HADOOP-19256:
-

hadoop-yetus commented on PR #7011:
URL: https://github.com/apache/hadoop/pull/7011#issuecomment-2376055827

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 11s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/9/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 67 new + 5 unchanged - 0 fixed 
= 72 total (was 5)  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 17s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/9/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 with 
JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0)  |
   | -1 :x: |  javadoc  |   0m 15s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/9/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05 
with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  spotbugs  |   0m 44s | 
[/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/9/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html)
 |  hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  23m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  2s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 26s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  91m 51s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Dead store to finalizedRequest in 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject()  At 
S3ABlockOutputStream.java:org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject()
  At S3ABlockOut

[jira] [Commented] (HADOOP-19256) S3A: Support S3 Conditional Writes

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884876#comment-17884876
 ] 

ASF GitHub Bot commented on HADOOP-19256:
-

hadoop-yetus commented on PR #7011:
URL: https://github.com/apache/hadoop/pull/7011#issuecomment-2376051588

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 12s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/8/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 67 new + 5 unchanged - 0 fixed 
= 72 total (was 5)  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 16s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/8/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 with 
JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0)  |
   | -1 :x: |  javadoc  |   0m 15s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/8/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05 
with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  spotbugs  |   0m 45s | 
[/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/8/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html)
 |  hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  23m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 52s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 22s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  91m 11s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Dead store to finalizedRequest in 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject()  At 
S3ABlockOutputStream.java:org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject()
  At S3ABlockOut

[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884875#comment-17884875
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

surendralilhore commented on code in PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#discussion_r1776440406


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemE2E.java:
##
@@ -259,6 +260,9 @@ public void testHttpReadTimeout() throws Exception {
 
   public void testHttpTimeouts(int connectionTimeoutMs, int readTimeoutMs)
   throws Exception {
+// This is to make sure File System creation goes through before network 
calls start failing.
+assumeValidTestConfigPresent(this.getRawConfiguration(), 
FS_AZURE_ACCOUNT_IS_HNS_ENABLED);

Review Comment:
   Why is this change required? Is it failing without this change?



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -451,7 +451,9 @@ public AbfsConfiguration(final Configuration rawConfig, 
String accountName)
   }
 
   public Trilean getIsNamespaceEnabledAccount() {
-return Trilean.getTrilean(isNamespaceEnabledAccount);
+String isNamespaceEnabledAccountString

Review Comment:
   Can you change the variable name to something simpler, like 
isNamespaceEnabled?



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java:
##
@@ -271,4 +275,52 @@ private void 
ensureGetAclDetermineHnsStatusAccuratelyInternal(int statusCode,
 Mockito.verify(mockClient, times(1))
 .getAclStatus(anyString(), any(TracingContext.class));
   }
+
+  @Test
+  public void testAccountSpecificConfig() throws Exception {
+Configuration rawConfig = new Configuration();
+rawConfig.addResource(TEST_CONFIGURATION_FILE_NAME);
+rawConfig.unset(FS_AZURE_ACCOUNT_IS_HNS_ENABLED);
+rawConfig.unset(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED,
+this.getAccountName()));
+String accountName1 = "account1.dfs.core.windows.net";
+String accountName2 = "account2.dfs.core.windows.net";
+String accountName3 = "account3.dfs.core.windows.net";
+String defaultUri1 = this.getTestUrl().replace(this.getAccountName(), 
accountName1);
+String defaultUri2 = this.getTestUrl().replace(this.getAccountName(), 
accountName2);
+String defaultUri3 = this.getTestUrl().replace(this.getAccountName(), 
accountName3);
+
+// Set both account specific and account agnostic config for account 1
+rawConfig.set(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName1), FALSE_STR);
+rawConfig.set(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, TRUE_STR);
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri1);
+AzureBlobFileSystem fs1 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+// Assert that account specific config takes precedence
+Assertions.assertThat(getIsNamespaceEnabled(fs1)).describedAs(
+"getIsNamespaceEnabled should return true when the "
++ "account specific config is set as true").isFalse();
+
+// Set only the account specific config for account 2
+rawConfig.set(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName2), FALSE_STR);
+rawConfig.unset(FS_AZURE_ACCOUNT_IS_HNS_ENABLED);
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri2);
+AzureBlobFileSystem fs2 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+// Assert that account specific config is enough.
+Assertions.assertThat(getIsNamespaceEnabled(fs2)).describedAs(
+"getIsNamespaceEnabled should return true when the "
++ "account specific config is set as true").isFalse();
+
+// Set only account agnostic config for account 3
+rawConfig.set(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, FALSE_STR);
+rawConfig.unset(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName3));
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri3);
+AzureBlobFileSystem fs3 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+// Assert that account agnostic config is enough.
+Assertions.assertThat(getIsNamespaceEnabled(fs3)).describedAs(
+"getIsNamespaceEnabled should return true when the "
++ "account specific config is not set").isFalse();
+fs1.close();
+fs2.close();
+fs3.close();

Review Comment:
   Can you check one more condition where neither the account-level nor the common
property is set? In that case it will call *getAclStatus()* to check whether the
namespace is enabled.



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java:
##
@@ -271,4 +275,52 @@ private void 
ensureGetAclDetermineHnsStatusAccuratelyInternal(int statusCode,
 Mockito.ve

[jira] [Commented] (HADOOP-19256) S3A: Support S3 Conditional Writes

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884873#comment-17884873
 ] 

ASF GitHub Bot commented on HADOOP-19256:
-

hadoop-yetus commented on PR #7011:
URL: https://github.com/apache/hadoop/pull/7011#issuecomment-2376034913

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 10s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/7/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 64 new + 1 unchanged - 0 fixed 
= 65 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 19s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 15s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/7/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04.txt)
 |  hadoop-tools_hadoop-aws-jdkUbuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 with 
JDK Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0)  |
   | -1 :x: |  javadoc  |   0m 16s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/7/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05 
with JDK Private Build-1.8.0_422-8u422-b05-1~20.04-b05 generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | -1 :x: |  spotbugs  |   0m 41s | 
[/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7011/7/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html)
 |  hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  24m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 50s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 21s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  92m  9s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Dead store to finalizedRequest in 
org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject()  At 
S3ABlockOutputStream.java:org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject()
  At S3ABlockOut

[jira] [Commented] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884868#comment-17884868
 ] 

ASF GitHub Bot commented on HADOOP-19281:
-

sarvekshayr commented on PR #7071:
URL: https://github.com/apache/hadoop/pull/7071#issuecomment-2376029850

   @szetszwo thank you for the review. Addressed all the changes from 
[7071_review.patch](https://issues.apache.org/jira/secure/attachment/13071767/7071_review.patch).




> MetricsSystemImpl should not print INFO message in CLI
> --
>
> Key: HADOOP-19281
> URL: https://issues.apache.org/jira/browse/HADOOP-19281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Tsz-wo Sze
>Assignee: Sarveksha Yeshavantha Raju
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: 7071_review.patch
>
>
> Below is an example:
> {code}
> # hadoop fs  -Dfs.s3a.bucket.probe=0 
> -Dfs.s3a.change.detection.version.required=false 
> -Dfs.s3a.change.detection.mode=none -Dfs.s3a.endpoint=http://some.site:9878 
> -Dfs.s3a.access.keysome=systest -Dfs.s3a.secret.key=8...1 
> -Dfs.s3a.endpoint=http://some.site:9878  -Dfs.s3a.path.style.access=true 
> -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem   -ls  -R s3a://bucket1/
> 24/09/17 10:47:48 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> started
> 24/09/17 10:47:48 WARN impl.ConfigurationHelper: Option 
> fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 
> ms instead
> 24/09/17 10:47:50 WARN s3.S3TransferManager: The provided S3AsyncClient is an 
> instance of MultipartS3AsyncClient, and thus multipart download feature is 
> not enabled. To benefit from all features, consider using 
> S3AsyncClient.crtBuilder().build() instead.
> drwxrwxrwx   - root root  0 2024-09-17 10:47 s3a://bucket1/dir1
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
> metrics system...
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> stopped.
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> shutdown complete. 
> {code}
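
A possible workaround sketch while the patch is under review (an assumption, not the change in PR #7071): raise the logger threshold for the metrics implementation package in log4j.properties so the CLI no longer prints these INFO lines. The package name is taken from the log output quoted above.
{code}
# Workaround sketch: silence INFO noise from the s3a-file-system metrics system
# when running CLI commands. Add to etc/hadoop/log4j.properties (log4j 1.x syntax).
log4j.logger.org.apache.hadoop.metrics2.impl=WARN
{code}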



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19289) upgrade to protobuf-java 3.25.5 due to CVE-2024-7254

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884853#comment-17884853
 ] 

ASF GitHub Bot commented on HADOOP-19289:
-

hadoop-yetus commented on PR #7072:
URL: https://github.com/apache/hadoop/pull/7072#issuecomment-2375981231

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 34s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 18s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   8m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  mvnsite  |  15m  8s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   5m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   5m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  shadedclient  |  30m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  17m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   8m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   8m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   9m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   5m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   5m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  shadedclient  |  31m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 583m  0s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7072/1/artifact/out/patch-unit-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 759m 30s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Unreaped Processes | root:2 |
   | Failed junit tests | 
hadoop.yarn.server.timelineservice.security.TestTimelineAuthFilterForV2 |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7072/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7072 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
   | uname | Linux 74a50cbe0d09 5.15.0-116-generic #126-Ubuntu SMP Mon Jul 1 
10:14:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0f0398c5b6bb9d40a7114e5e9cd95717d57e7b8e |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   | Unreaped Processes Log | 
https://ci-hadoop.apache.org/job/hadoop-mult

[jira] [Commented] (HADOOP-19256) S3A: Support S3 Conditional Writes

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884850#comment-17884850
 ] 

ASF GitHub Bot commented on HADOOP-19256:
-

diljotgrewal commented on PR #7011:
URL: https://github.com/apache/hadoop/pull/7011#issuecomment-2375891992

   @steveloughran 
   Thank you so much for taking the time to thoroughly review the changes. I've
updated the code to address the requested changes. Could you please provide some
additional details/feedback on this
[comment](https://github.com/apache/hadoop/pull/7011#discussion_r1769125158)?




> S3A: Support S3 Conditional Writes
> --
>
> Key: HADOOP-19256
> URL: https://issues.apache.org/jira/browse/HADOOP-19256
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>
> S3 Conditional Write (Put-if-absent) capability is now generally available - 
> [https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3-conditional-writes/]
>  
> S3A should allow passing in this put-if-absent header to prevent overwriting
> of files. 
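
For reference, a minimal sketch of how a put-if-absent precondition can be attached to a request with the AWS SDK v2 used by S3A; this illustrates the underlying mechanism only and is not the code in PR #7011.
{code}
import software.amazon.awssdk.awscore.AwsRequestOverrideConfiguration;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public final class PutIfAbsentSketch {
  /**
   * Build a PUT request that asks S3 to fail with 412 Precondition Failed
   * if the object already exists, instead of overwriting it.
   */
  public static PutObjectRequest putIfAbsent(String bucket, String key) {
    AwsRequestOverrideConfiguration override = AwsRequestOverrideConfiguration.builder()
        .putHeader("If-None-Match", "*")   // the conditional-write precondition
        .build();
    return PutObjectRequest.builder()
        .bucket(bucket)
        .key(key)
        .overrideConfiguration(override)
        .build();
  }
}
{code}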



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19256) S3A: Support S3 Conditional Writes

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884849#comment-17884849
 ] 

ASF GitHub Bot commented on HADOOP-19256:
-

diljotgrewal commented on code in PR #7011:
URL: https://github.com/apache/hadoop/pull/7011#discussion_r1776365804


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RequestFactoryImpl.java:
##
@@ -517,12 +518,22 @@ public CreateMultipartUploadRequest.Builder 
newMultipartUploadRequestBuilder(
   public CompleteMultipartUploadRequest.Builder 
newCompleteMultipartUploadRequestBuilder(
   String destKey,
   String uploadId,
-  List<CompletedPart> partETags) {
+  List<CompletedPart> partETags,
+  PutObjectOptions putOptions) {
+
 // a copy of the list is required, so that the AWS SDK doesn't
 // attempt to sort an unmodifiable list.
-CompleteMultipartUploadRequest.Builder requestBuilder =
-
CompleteMultipartUploadRequest.builder().bucket(bucket).key(destKey).uploadId(uploadId)
+CompleteMultipartUploadRequest.Builder requestBuilder;
+Map<String, String> optionHeaders = putOptions.getHeaders();

Review Comment:
   Could you please provide some additional details? I took a stab at it 
[here](https://github.com/diljotgrewal/hadoop/commit/0cf89eb5a06ec5b8be83af41aee07d8c2bed959e)
 but I'm not sure if I'm on the right track. 
   
   Thanks!!
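
A minimal sketch of the approach being discussed, assuming PutObjectOptions exposes its headers as a Map<String, String> (an assumption for illustration, not the patch itself): copy the optional headers onto the builder through the SDK's per-request override configuration.
{code}
import java.util.Map;
import software.amazon.awssdk.awscore.AwsRequestOverrideConfiguration;
import software.amazon.awssdk.services.s3.model.CompleteMultipartUploadRequest;

final class HeaderCopySketch {
  /** Apply optional headers (e.g. If-None-Match) to a complete-MPU request builder. */
  static CompleteMultipartUploadRequest.Builder withHeaders(
      CompleteMultipartUploadRequest.Builder builder,
      Map<String, String> optionHeaders) {
    if (optionHeaders == null || optionHeaders.isEmpty()) {
      return builder;                                   // nothing to add
    }
    AwsRequestOverrideConfiguration.Builder override =
        AwsRequestOverrideConfiguration.builder();
    optionHeaders.forEach(override::putHeader);         // one header per map entry
    return builder.overrideConfiguration(override.build());
  }
}
{code}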





> S3A: Support S3 Conditional Writes
> --
>
> Key: HADOOP-19256
> URL: https://issues.apache.org/jira/browse/HADOOP-19256
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>
> S3 Conditional Write (Put-if-absent) capability is now generally available - 
> [https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3-conditional-writes/]
>  
> S3A should allow passing in this put-if-absent header to prevent overwriting
> of files. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884848#comment-17884848
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

anujmodi2021 commented on code in PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#discussion_r1776364962


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java:
##
@@ -271,4 +275,44 @@ private void 
ensureGetAclDetermineHnsStatusAccuratelyInternal(int statusCode,
 Mockito.verify(mockClient, times(1))
 .getAclStatus(anyString(), any(TracingContext.class));
   }
+
+  @Test
+  public void testAccountSpecificConfig() throws Exception {
+Configuration rawConfig = new Configuration();
+rawConfig.addResource(TEST_CONFIGURATION_FILE_NAME);
+rawConfig.unset(FS_AZURE_ACCOUNT_IS_HNS_ENABLED);
+rawConfig.unset(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED,
+this.getAccountName()));
+String accountName1 = "account1.dfs.core.windows.net";
+String accountName2 = "account2.dfs.core.windows.net";
+String accountName3 = "account3.dfs.core.windows.net";
+String defaultUri1 = this.getTestUrl().replace(this.getAccountName(), 
accountName1);
+String defaultUri2 = this.getTestUrl().replace(this.getAccountName(), 
accountName2);
+String defaultUri3 = this.getTestUrl().replace(this.getAccountName(), 
accountName3);
+
+// Set account specific config for account 1
+rawConfig.set(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName1), TRUE_STR);
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri1);
+AzureBlobFileSystem fs1 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+
+// Set account specific config for account 2
+rawConfig.set(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName2), FALSE_STR);
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri2);
+AzureBlobFileSystem fs2 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+
+// Set account agnostic config for account 3
+rawConfig.set(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, FALSE_STR);
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri3);
+AzureBlobFileSystem fs3 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+
+Assertions.assertThat(getIsNamespaceEnabled(fs1)).describedAs(
+"getIsNamespaceEnabled should return true when the "
++ "account specific config is set as true").isTrue();
+Assertions.assertThat(getIsNamespaceEnabled(fs2)).describedAs(
+"getIsNamespaceEnabled should return true when the "
++ "account specific config is set as true").isFalse();
+Assertions.assertThat(getIsNamespaceEnabled(fs3)).describedAs(

Review Comment:
   Not sure if this comment predates my latest commit, so it might no longer be
applicable.
   But still, we are setting the account-agnostic config here, so getAcl won't be
needed.





> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file
> systems need to specify this config only for some accounts, or set it
> differently for different accounts.
> The ABFS driver today does not allow this to be set as an account-specific config.
> This Jira adds that support.
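
In plain Hadoop Configuration terms, the precedence being added looks roughly like the sketch below: an account-suffixed key wins over the account-agnostic key, and if neither is present the driver falls back to probing the account (e.g. via getAclStatus). The key-suffix pattern and the helper class are illustrative assumptions, not the AbfsConfiguration API.
{code}
import org.apache.hadoop.conf.Configuration;

public final class HnsConfigPrecedenceSketch {
  private static final String HNS_KEY = "fs.azure.account.hns.enabled";

  /**
   * Resolve the HNS setting for one account: account-specific value first,
   * then the account-agnostic value, otherwise null (caller must probe).
   */
  static String resolveHnsEnabled(Configuration conf, String accountName) {
    String accountSpecific = conf.get(HNS_KEY + "." + accountName);
    return accountSpecific != null ? accountSpecific : conf.get(HNS_KEY);
  }
}
{code}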



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884756#comment-17884756
 ] 

ASF GitHub Bot commented on HADOOP-15984:
-

hadoop-yetus commented on PR #7019:
URL: https://github.com/apache/hadoop/pull/7019#issuecomment-2375081319

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  5s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  5s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  5s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  5s |  |  Shelldocs was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  jsonlint  |   0m  0s |  |  jsonlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  pathlen  |   0m  0s | 
[/results-pathlen.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/20/artifact/out/results-pathlen.txt)
 |  The patch appears to contain 1 files with names longer than 240  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 115 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  39m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |  18m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   5m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  34m  5s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  28m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | -1 :x: |  javadoc  |   0m 14s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server_hadoop-yarn-server-timelineservice-hbase-server-2-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/20/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server_hadoop-yarn-server-timelineservice-hbase-server-2-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-yarn-server-timelineservice-hbase-server-2 in trunk failed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05.  |
   | +0 :ok: |  spotbugs  |   0m 25s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   0m 41s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/20/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |  11m 42s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/20/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html)
 |  hadoop-yarn-project/hadoop-yarn in trunk has 1 extant spotbugs warnings.  |
   | -1 :x: |  spotbugs  |   1m  0s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/20/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 in trunk has 1 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   0m 26s |  |  
branch/hadoop-client-modules/hadoop-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 26s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 25s |  |  
branch/hadoop-client-modules/hadoop-client-check-invariants no spotbugs output 
file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 27s |  |  
branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 24s |  |  
branch/hadoop-client-modules/hadoop-client-check-tes

[jira] [Commented] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884746#comment-17884746
 ] 

ASF GitHub Bot commented on HADOOP-19281:
-

steveloughran commented on PR #7071:
URL: https://github.com/apache/hadoop/pull/7071#issuecomment-2374957216

   I support this work




> MetricsSystemImpl should not print INFO message in CLI
> --
>
> Key: HADOOP-19281
> URL: https://issues.apache.org/jira/browse/HADOOP-19281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Tsz-wo Sze
>Assignee: Sarveksha Yeshavantha Raju
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: 7071_review.patch
>
>
> Below is an example:
> {code}
> # hadoop fs  -Dfs.s3a.bucket.probe=0 
> -Dfs.s3a.change.detection.version.required=false 
> -Dfs.s3a.change.detection.mode=none -Dfs.s3a.endpoint=http://some.site:9878 
> -Dfs.s3a.access.keysome=systest -Dfs.s3a.secret.key=8...1 
> -Dfs.s3a.endpoint=http://some.site:9878  -Dfs.s3a.path.style.access=true 
> -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem   -ls  -R s3a://bucket1/
> 24/09/17 10:47:48 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> started
> 24/09/17 10:47:48 WARN impl.ConfigurationHelper: Option 
> fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 
> ms instead
> 24/09/17 10:47:50 WARN s3.S3TransferManager: The provided S3AsyncClient is an 
> instance of MultipartS3AsyncClient, and thus multipart download feature is 
> not enabled. To benefit from all features, consider using 
> S3AsyncClient.crtBuilder().build() instead.
> drwxrwxrwx   - root root  0 2024-09-17 10:47 s3a://bucket1/dir1
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
> metrics system...
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> stopped.
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> shutdown complete. 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884695#comment-17884695
 ] 

ASF GitHub Bot commented on HADOOP-15984:
-

hadoop-yetus commented on PR #7019:
URL: https://github.com/apache/hadoop/pull/7019#issuecomment-2374817455

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  6s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  6s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  6s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  6s |  |  Shelldocs was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  jsonlint  |   0m  0s |  |  jsonlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  pathlen  |   0m  0s | 
[/results-pathlen.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/19/artifact/out/results-pathlen.txt)
 |  The patch appears to contain 1 files with names longer than 240  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 115 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  4s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |  16m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   4m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  33m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  29m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | -1 :x: |  javadoc  |   0m 15s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server_hadoop-yarn-server-timelineservice-hbase-server-2-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/19/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server_hadoop-yarn-server-timelineservice-hbase-server-2-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-yarn-server-timelineservice-hbase-server-2 in trunk failed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05.  |
   | +0 :ok: |  spotbugs  |   0m 27s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   0m 47s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/19/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |  12m 23s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/19/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html)
 |  hadoop-yarn-project/hadoop-yarn in trunk has 1 extant spotbugs warnings.  |
   | -1 :x: |  spotbugs  |   1m  0s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/19/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 in trunk has 1 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   0m 25s |  |  
branch/hadoop-client-modules/hadoop-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 25s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 24s |  |  
branch/hadoop-client-modules/hadoop-client-check-invariants no spotbugs output 
file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 26s |  |  
branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 25s |  |  
branch/hadoop-client-modules/hadoop-client-integrati

[jira] [Commented] (HADOOP-19289) upgrade to protobuf-java 3.25.5 due to CVE-2024-7254

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884678#comment-17884678
 ] 

ASF GitHub Bot commented on HADOOP-19289:
-

pjfanning opened a new pull request, #7072:
URL: https://github.com/apache/hadoop/pull/7072

   
   
   ### Description of PR
   
   HADOOP-19289
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> upgrade to protobuf-java 3.25.5 due to CVE-2024-7254
> 
>
> Key: HADOOP-19289
> URL: https://issues.apache.org/jira/browse/HADOOP-19289
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://github.com/advisories/GHSA-735f-pc8j-v9w8
> Presumably protobuf-encoded messages in Hadoop come from trusted sources, but 
> it is still useful to upgrade the jar.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19289) upgrade to protobuf-java 3.25.5 due to CVE-2024-7254

2024-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19289:

Labels: pull-request-available  (was: )

> upgrade to protobuf-java 3.25.5 due to CVE-2024-7254
> 
>
> Key: HADOOP-19289
> URL: https://issues.apache.org/jira/browse/HADOOP-19289
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://github.com/advisories/GHSA-735f-pc8j-v9w8
> Presumably protobuf-encoded messages in Hadoop come from trusted sources, but 
> it is still useful to upgrade the jar.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19286) Support S3A cross region access when S3 region/endpoint is set

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884654#comment-17884654
 ] 

ASF GitHub Bot commented on HADOOP-19286:
-

steveloughran commented on code in PR #7067:
URL: https://github.com/apache/hadoop/pull/7067#discussion_r1775498471


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -510,6 +514,16 @@ public void testCentralEndpointAndNullRegionFipsWithCRUD() 
throws Throwable {
 assertOpsUsingNewFs();
   }
 
+  /**
+   * Skip the test if the region is sa-east-1.
+   */
+  private void skipCrossRegionTest() throws IOException {
+String region = 
getFileSystem().getS3AInternals().getBucketMetadata().bucketRegion();
+if (SA_EAST_1.equals(region)) {

Review Comment:
   This needs a story for testing with third-party stores. There I have a region 
like "unknown" or "test" (but not an empty string).
   
   Maybe, rather than be clever here, declare that for testing the region 
should be set to "non-aws", with
   * the new string added to S3ATestConstants
   * the skip test here expanded to check for it
   * the testing.md doc updated to cover this
   
   Everything else will take any string as a region, at least of those I've 
tested.
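   For illustration, a minimal sketch (not from this PR) of what the expanded 
skip check could look like; `REGION_NON_AWS` is an assumed new constant in 
`S3ATestConstants`, and `skip(...)` is the usual S3ATestUtils helper:
   ```java
   // Hypothetical sketch only: skip cross-region tests both for the AWS
   // region already special-cased and for third-party stores that declare
   // their region as "non-aws".
   private void skipCrossRegionTest() throws IOException {
     String region = getFileSystem().getS3AInternals()
         .getBucketMetadata().bucketRegion();
     if (SA_EAST_1.equals(region) || REGION_NON_AWS.equals(region)) {
       skip("Skipping cross-region test for region: " + region);
     }
   }
   ```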
   



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -350,32 +353,33 @@ public void 
testCentralEndpointAndDifferentRegionThanBucket() throws Throwable {
   @Test
   public void testWithOutCrossRegionAccess() throws Exception {
 describe("Verify cross region access fails when disabled");
+// skip the test if the region is sa-east-1
+skipCrossRegionTest();
 final Configuration newConf = new Configuration(getConfiguration());
-// skip the test if the region is eu-west-2
-String region = 
getFileSystem().getS3AInternals().getBucketMetadata().bucketRegion();
-if (EU_WEST_2.equals(region)) {
-  return;
-}
 // disable cross region access
 newConf.setBoolean(AWS_S3_CROSS_REGION_ACCESS_ENABLED, false);
-newConf.set(AWS_REGION, EU_WEST_2);
-S3AFileSystem fs = new S3AFileSystem();
-fs.initialize(getFileSystem().getUri(), newConf);
-intercept(AWSRedirectException.class,
-"does not match the AWS region containing the bucket",
-() -> fs.exists(getFileSystem().getWorkingDirectory()));
+newConf.set(AWS_REGION, SA_EAST_1);
+try (S3AFileSystem fs = new S3AFileSystem()) {
+  fs.initialize(getFileSystem().getUri(), newConf);
+  intercept(AWSRedirectException.class,
+  "does not match the AWS region containing the bucket",
+  () -> fs.exists(getFileSystem().getWorkingDirectory()));
+}
   }
 
   @Test
   public void testWithCrossRegionAccess() throws Exception {
 describe("Verify cross region access succeed when enabled");
+// skip the test if the region is sa-east-1
+skipCrossRegionTest();
 final Configuration newConf = new Configuration(getConfiguration());

Review Comment:
   call
   ```
   removeBaseAndBucketOverrides(newConf,
  AWS_S3_CROSS_REGION_ACCESS_ENABLED,
  AWS_REGION)
   ```
   
   This is needed to strip out the per-bucket settings of these options.





> Support S3A cross region access when S3 region/endpoint is set
> --
>
> Key: HADOOP-19286
> URL: https://issues.apache.org/jira/browse/HADOOP-19286
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> Currently, when neither the S3 region nor the endpoint is set, the default 
> region is set to us-east-2 with cross-region access enabled. But when a region 
> or endpoint is set, cross-region access is not enabled.
> The proposal here is to carve out cross-region access as a separate config 
> and enable/disable it irrespective of whether a region/endpoint is set. This 
> gives more flexibility to the user.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884652#comment-17884652
 ] 

ASF GitHub Bot commented on HADOOP-19281:
-

szetszwo commented on code in PR #7071:
URL: https://github.com/apache/hadoop/pull/7071#discussion_r1775489764


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsConfig.java:
##
@@ -135,7 +135,7 @@ static MetricsConfig loadFirst(String prefix, String... 
fileNames) {
 throw new MetricsConfigException(e);
   }
 }
-LOG.warn("Cannot locate configuration: tried " +
+LOG.debug("Cannot locate configuration: tried " +
  Joiner.on(",").join(fileNames));

Review Comment:
   Let's also replace `Joiner` with `Arrays.asList(..)` and use `{}`.
   ```java
   LOG.debug("Cannot locate configuration: tried {}", 
Arrays.asList(fileNames));
   ```
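   For context, a self-contained sketch of the suggested pattern (hypothetical 
class name); with the `{}` placeholder the file-name list is only rendered when 
debug logging is actually enabled:
   ```java
   import java.util.Arrays;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   // Hypothetical sketch: parameterized SLF4J logging defers formatting of
   // the file-name list until the debug level is known to be enabled.
   public class MetricsConfigLoggingSketch {
     private static final Logger LOG =
         LoggerFactory.getLogger(MetricsConfigLoggingSketch.class);
   
     public static void main(String[] args) {
       String[] fileNames = {
           "hadoop-metrics2-s3a-file-system.properties",
           "hadoop-metrics2.properties"};
       LOG.debug("Cannot locate configuration: tried {}", Arrays.asList(fileNames));
     }
   }
   ```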





> MetricsSystemImpl should not print INFO message in CLI
> --
>
> Key: HADOOP-19281
> URL: https://issues.apache.org/jira/browse/HADOOP-19281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Tsz-wo Sze
>Assignee: Sarveksha Yeshavantha Raju
>Priority: Major
>  Labels: newbie, pull-request-available
> Attachments: 7071_review.patch
>
>
> Below is an example:
> {code}
> # hadoop fs  -Dfs.s3a.bucket.probe=0 
> -Dfs.s3a.change.detection.version.required=false 
> -Dfs.s3a.change.detection.mode=none -Dfs.s3a.endpoint=http://some.site:9878 
> -Dfs.s3a.access.keysome=systest -Dfs.s3a.secret.key=8...1 
> -Dfs.s3a.endpoint=http://some.site:9878  -Dfs.s3a.path.style.access=true 
> -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem   -ls  -R s3a://bucket1/
> 24/09/17 10:47:48 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> started
> 24/09/17 10:47:48 WARN impl.ConfigurationHelper: Option 
> fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 
> ms instead
> 24/09/17 10:47:50 WARN s3.S3TransferManager: The provided S3AsyncClient is an 
> instance of MultipartS3AsyncClient, and thus multipart download feature is 
> not enabled. To benefit from all features, consider using 
> S3AsyncClient.crtBuilder().build() instead.
> drwxrwxrwx   - root root  0 2024-09-17 10:47 s3a://bucket1/dir1
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
> metrics system...
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> stopped.
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> shutdown complete. 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884583#comment-17884583
 ] 

ASF GitHub Bot commented on HADOOP-19281:
-

hadoop-yetus commented on PR #7071:
URL: https://github.com/apache/hadoop/pull/7071#issuecomment-2373881210

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   8m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   8m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   8m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 12s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 143m 56s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7071/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7071 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 3220ae6f9c3b 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 21229b5c92665a48701fd7af1b63ad7c8f2da279 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7071/1/testReport/ |
   | Max. process+thread count | 1263 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7071/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.

[jira] [Commented] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884552#comment-17884552
 ] 

ASF GitHub Bot commented on HADOOP-19281:
-

sarvekshayr commented on PR #7071:
URL: https://github.com/apache/hadoop/pull/7071#issuecomment-2373569387

   @szetszwo please review this PR. Thank you!




> MetricsSystemImpl should not print INFO message in CLI
> --
>
> Key: HADOOP-19281
> URL: https://issues.apache.org/jira/browse/HADOOP-19281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Tsz-wo Sze
>Priority: Major
>  Labels: newbie, pull-request-available
>
> Below is an example:
> {code}
> # hadoop fs  -Dfs.s3a.bucket.probe=0 
> -Dfs.s3a.change.detection.version.required=false 
> -Dfs.s3a.change.detection.mode=none -Dfs.s3a.endpoint=http://some.site:9878 
> -Dfs.s3a.access.keysome=systest -Dfs.s3a.secret.key=8...1 
> -Dfs.s3a.endpoint=http://some.site:9878  -Dfs.s3a.path.style.access=true 
> -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem   -ls  -R s3a://bucket1/
> 24/09/17 10:47:48 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> started
> 24/09/17 10:47:48 WARN impl.ConfigurationHelper: Option 
> fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 
> ms instead
> 24/09/17 10:47:50 WARN s3.S3TransferManager: The provided S3AsyncClient is an 
> instance of MultipartS3AsyncClient, and thus multipart download feature is 
> not enabled. To benefit from all features, consider using 
> S3AsyncClient.crtBuilder().build() instead.
> drwxrwxrwx   - root root  0 2024-09-17 10:47 s3a://bucket1/dir1
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
> metrics system...
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> stopped.
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> shutdown complete. 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884551#comment-17884551
 ] 

ASF GitHub Bot commented on HADOOP-19281:
-

sarvekshayr opened a new pull request, #7071:
URL: https://github.com/apache/hadoop/pull/7071

   
   
   ### Description of PR
   Adjusted the log level of specific messages from `MetricsSystemImpl` and 
`MetricsConfig` to prevent unrelated logs from cluttering the output.
   
   ### How was this patch tested?
   This patch was tested by running the test classes associated with the above 
classes.
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?




> MetricsSystemImpl should not print INFO message in CLI
> --
>
> Key: HADOOP-19281
> URL: https://issues.apache.org/jira/browse/HADOOP-19281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Tsz-wo Sze
>Priority: Major
>  Labels: newbie
>
> Below is an example:
> {code}
> # hadoop fs  -Dfs.s3a.bucket.probe=0 
> -Dfs.s3a.change.detection.version.required=false 
> -Dfs.s3a.change.detection.mode=none -Dfs.s3a.endpoint=http://some.site:9878 
> -Dfs.s3a.access.keysome=systest -Dfs.s3a.secret.key=8...1 
> -Dfs.s3a.endpoint=http://some.site:9878  -Dfs.s3a.path.style.access=true 
> -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem   -ls  -R s3a://bucket1/
> 24/09/17 10:47:48 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> started
> 24/09/17 10:47:48 WARN impl.ConfigurationHelper: Option 
> fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 
> ms instead
> 24/09/17 10:47:50 WARN s3.S3TransferManager: The provided S3AsyncClient is an 
> instance of MultipartS3AsyncClient, and thus multipart download feature is 
> not enabled. To benefit from all features, consider using 
> S3AsyncClient.crtBuilder().build() instead.
> drwxrwxrwx   - root root  0 2024-09-17 10:47 s3a://bucket1/dir1
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
> metrics system...
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> stopped.
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> shutdown complete. 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19281) MetricsSystemImpl should not print INFO message in CLI

2024-09-25 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19281:

Labels: newbie pull-request-available  (was: newbie)

> MetricsSystemImpl should not print INFO message in CLI
> --
>
> Key: HADOOP-19281
> URL: https://issues.apache.org/jira/browse/HADOOP-19281
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Tsz-wo Sze
>Priority: Major
>  Labels: newbie, pull-request-available
>
> Below is an example:
> {code}
> # hadoop fs  -Dfs.s3a.bucket.probe=0 
> -Dfs.s3a.change.detection.version.required=false 
> -Dfs.s3a.change.detection.mode=none -Dfs.s3a.endpoint=http://some.site:9878 
> -Dfs.s3a.access.keysome=systest -Dfs.s3a.secret.key=8...1 
> -Dfs.s3a.endpoint=http://some.site:9878  -Dfs.s3a.path.style.access=true 
> -Dfs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem   -ls  -R s3a://bucket1/
> 24/09/17 10:47:48 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot 
> period at 10 second(s).
> 24/09/17 10:47:48 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> started
> 24/09/17 10:47:48 WARN impl.ConfigurationHelper: Option 
> fs.s3a.connection.establish.timeout is too low (5,000 ms). Setting to 15,000 
> ms instead
> 24/09/17 10:47:50 WARN s3.S3TransferManager: The provided S3AsyncClient is an 
> instance of MultipartS3AsyncClient, and thus multipart download feature is 
> not enabled. To benefit from all features, consider using 
> S3AsyncClient.crtBuilder().build() instead.
> drwxrwxrwx   - root root  0 2024-09-17 10:47 s3a://bucket1/dir1
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: Stopping s3a-file-system 
> metrics system...
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> stopped.
> 24/09/17 10:47:53 INFO impl.MetricsSystemImpl: s3a-file-system metrics system 
> shutdown complete. 
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19286) Support S3A cross region access when S3 region/endpoint is set

2024-09-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884549#comment-17884549
 ] 

ASF GitHub Bot commented on HADOOP-19286:
-

hadoop-yetus commented on PR #7067:
URL: https://github.com/apache/hadoop/pull/7067#issuecomment-2373552079

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 45s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 130m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7067/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7067 |
   | JIRA Issue | HADOOP-19286 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 84b5c655529c 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f5f0bc519597e43209ef64f4dea45f2b1e8ea837 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7067/2/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7067/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Support S3A cross region access when S3 region/en

[jira] [Commented] (HADOOP-19286) Support S3A cross region access when S3 region/endpoint is set

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884512#comment-17884512
 ] 

ASF GitHub Bot commented on HADOOP-19286:
-

shameersss1 commented on code in PR #7067:
URL: https://github.com/apache/hadoop/pull/7067#discussion_r1774589288


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -346,6 +347,37 @@ public void 
testCentralEndpointAndDifferentRegionThanBucket() throws Throwable {
 assertRequesterPaysFileExistence(newConf);
   }
 
+  @Test
+  public void testWithOutCrossRegionAccess() throws Exception {
+describe("Verify cross region access fails when disabled");
+final Configuration newConf = new Configuration(getConfiguration());
+// skip the test if the region is eu-west-2
+String region = 
getFileSystem().getS3AInternals().getBucketMetadata().bucketRegion();
+if (EU_WEST_2.equals(region)) {

Review Comment:
   1. Ack.
   2. As per the doc 
(https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/s3-cross-region.html)
   ```
   When you reference an existing bucket in a request, such as when you use the 
putObject method, the SDK initiates a request to the Region configured for the 
client.
   
   If the bucket does not exist in that specific Region, the error response 
includes the actual Region where the bucket resides. The SDK then uses the 
correct Region in a second request.
   
   To optimize future requests to the same bucket, the SDK caches this Region 
mapping in the client.
   ```
   
   So, as per the implementation, it looks like it won't be supported by 
third-party stores, and each store would have to implement it separately.
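   For context, a minimal sketch (not part of this change) of how the SDK v2 
behaviour described above is switched on at the client level; the region and 
bucket below are placeholders:
   ```java
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.services.s3.S3Client;
   import software.amazon.awssdk.services.s3.model.HeadBucketRequest;
   
   // Hypothetical sketch: with cross-region access enabled the client follows
   // the redirect to the bucket's actual region and caches the mapping.
   public class CrossRegionAccessSketch {
     public static void main(String[] args) {
       try (S3Client s3 = S3Client.builder()
           .region(Region.SA_EAST_1)
           .crossRegionAccessEnabled(true)
           .build()) {
         s3.headBucket(HeadBucketRequest.builder()
             .bucket("example-bucket")
             .build());
       }
     }
   }
   ```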





> Support S3A cross region access when S3 region/endpoint is set
> --
>
> Key: HADOOP-19286
> URL: https://issues.apache.org/jira/browse/HADOOP-19286
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> Currently, when neither the S3 region nor the endpoint is set, the default 
> region is set to us-east-2 with cross-region access enabled. But when a region 
> or endpoint is set, cross-region access is not enabled.
> The proposal here is to carve out cross-region access as a separate config 
> and enable/disable it irrespective of whether a region/endpoint is set. This 
> gives more flexibility to the user.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19286) Support S3A cross region access when S3 region/endpoint is set

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884508#comment-17884508
 ] 

ASF GitHub Bot commented on HADOOP-19286:
-

shameersss1 commented on code in PR #7067:
URL: https://github.com/apache/hadoop/pull/7067#discussion_r1774548017


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -346,6 +347,37 @@ public void 
testCentralEndpointAndDifferentRegionThanBucket() throws Throwable {
 assertRequesterPaysFileExistence(newConf);
   }
 
+  @Test
+  public void testWithOutCrossRegionAccess() throws Exception {
+describe("Verify cross region access fails when disabled");
+final Configuration newConf = new Configuration(getConfiguration());
+// skip the test if the region is eu-west-2
+String region = 
getFileSystem().getS3AInternals().getBucketMetadata().bucketRegion();
+if (EU_WEST_2.equals(region)) {
+  return;
+}
+// disable cross region access
+newConf.setBoolean(AWS_S3_CROSS_REGION_ACCESS_ENABLED, false);
+newConf.set(AWS_REGION, EU_WEST_2);
+S3AFileSystem fs = new S3AFileSystem();
+fs.initialize(getFileSystem().getUri(), newConf);
+intercept(AWSRedirectException.class,
+"does not match the AWS region containing the bucket",
+() -> fs.exists(getFileSystem().getWorkingDirectory()));
+  }
+
+  @Test
+  public void testWithCrossRegionAccess() throws Exception {
+describe("Verify cross region access succeed when enabled");
+final Configuration newConf = new Configuration(getConfiguration());
+// enable cross region access
+newConf.setBoolean(AWS_S3_CROSS_REGION_ACCESS_ENABLED, true);
+newConf.set(AWS_REGION, EU_WEST_2);

Review Comment:
   ack





> Support S3A cross region access when S3 region/endpoint is set
> --
>
> Key: HADOOP-19286
> URL: https://issues.apache.org/jira/browse/HADOOP-19286
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> Currently, when neither the S3 region nor the endpoint is set, the default 
> region is set to us-east-2 with cross-region access enabled. But when a region 
> or endpoint is set, cross-region access is not enabled.
> The proposal here is to carve out cross-region access as a separate config 
> and enable/disable it irrespective of whether a region/endpoint is set. This 
> gives more flexibility to the user.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19286) Support S3A cross region access when S3 region/endpoint is set

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884507#comment-17884507
 ] 

ASF GitHub Bot commented on HADOOP-19286:
-

shameersss1 commented on code in PR #7067:
URL: https://github.com/apache/hadoop/pull/7067#discussion_r1774544254


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -346,6 +347,37 @@ public void 
testCentralEndpointAndDifferentRegionThanBucket() throws Throwable {
 assertRequesterPaysFileExistence(newConf);
   }
 
+  @Test
+  public void testWithOutCrossRegionAccess() throws Exception {
+describe("Verify cross region access fails when disabled");
+final Configuration newConf = new Configuration(getConfiguration());
+// skip the test if the region is eu-west-2
+String region = 
getFileSystem().getS3AInternals().getBucketMetadata().bucketRegion();
+if (EU_WEST_2.equals(region)) {
+  return;
+}
+// disable cross region access
+newConf.setBoolean(AWS_S3_CROSS_REGION_ACCESS_ENABLED, false);
+newConf.set(AWS_REGION, EU_WEST_2);
+S3AFileSystem fs = new S3AFileSystem();
+fs.initialize(getFileSystem().getUri(), newConf);
+intercept(AWSRedirectException.class,
+"does not match the AWS region containing the bucket",
+() -> fs.exists(getFileSystem().getWorkingDirectory()));
+  }
+
+  @Test
+  public void testWithCrossRegionAccess() throws Exception {
+describe("Verify cross region access succeed when enabled");
+final Configuration newConf = new Configuration(getConfiguration());
+// enable cross region access
+newConf.setBoolean(AWS_S3_CROSS_REGION_ACCESS_ENABLED, true);
+newConf.set(AWS_REGION, EU_WEST_2);
+S3AFileSystem fs = new S3AFileSystem();

Review Comment:
   ack





> Support S3A cross region access when S3 region/endpoint is set
> --
>
> Key: HADOOP-19286
> URL: https://issues.apache.org/jira/browse/HADOOP-19286
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> Currently, when neither the S3 region nor the endpoint is set, the default 
> region is set to us-east-2 with cross-region access enabled. But when a region 
> or endpoint is set, cross-region access is not enabled.
> The proposal here is to carve out cross-region access as a separate config 
> and enable/disable it irrespective of whether a region/endpoint is set. This 
> gives more flexibility to the user.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15760) Upgrade commons-collections to commons-collections4

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884372#comment-17884372
 ] 

ASF GitHub Bot commented on HADOOP-15760:
-

NihalJain commented on PR #7006:
URL: https://github.com/apache/hadoop/pull/7006#issuecomment-2371856848

   > +1. lets get into trunk and see if anyone complains.
   
   Thank you @steveloughran for merging into trunk.
   
   > Can you do a PR for branch-3.4 -we can merge it if yetus is happy
   
   Sure, let me put up a PR for the same. 
   
   
   




> Upgrade commons-collections to commons-collections4
> ---
>
> Key: HADOOP-15760
> URL: https://issues.apache.org/jira/browse/HADOOP-15760
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.10.0, 3.0.3
>Reporter: David Mollitor
>Assignee: Nihal Jain
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
> Attachments: HADOOP-15760.1.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Please allow for use of Apache Commons Collections 4 library with the end 
> goal of migrating from Apache Commons Collections 3.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18708) AWS SDK V2 - Implement CSE

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884369#comment-17884369
 ] 

ASF GitHub Bot commented on HADOOP-18708:
-

hadoop-yetus commented on PR #6884:
URL: https://github.com/apache/hadoop/pull/6884#issuecomment-2371845535

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 17 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 58s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |  16m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   4m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +0 :ok: |  spotbugs  |   0m 45s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  36m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 39s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |  17m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |  16m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 21s |  |  root: The patch generated 
0 new + 33 unchanged - 1 fixed = 33 total (was 34)  |
   | +1 :green_heart: |  mvnsite  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +0 :ok: |  spotbugs  |   0m 35s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  35m 36s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 39s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  19m 41s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 59s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  7s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 255m 33s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/15/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6884 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint 
markdownlint |
   | uname | Linux d2ab10820633 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 32169e07b7122907e9b4f78912ddc17161e8ed87 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.

[jira] [Updated] (HADOOP-19288) hadoop-client-runtime exclude dnsjava InetAddressResolverProvider

2024-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19288:

Labels: pull-request-available  (was: )

> hadoop-client-runtime exclude dnsjava InetAddressResolverProvider
> -
>
> Key: HADOOP-19288
> URL: https://issues.apache.org/jira/browse/HADOOP-19288
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: dzcxzl
>Priority: Major
>  Labels: pull-request-available
>
> [https://github.com/dnsjava/dnsjava/issues/338]
>  
> {code:java}
> Exception in thread "main" java.util.ServiceConfigurationError: 
> java.net.spi.InetAddressResolverProvider: Provider 
> org.apache.hadoop.shaded.org.xbill.DNS.spi.DnsjavaInetAddressResolverProvider 
> not found
>     at java.base/java.util.ServiceLoader.fail(ServiceLoader.java:593)
>     at 
> java.base/java.util.ServiceLoader$LazyClassPathLookupIterator.nextProviderClass(ServiceLoader.java:1219)
>     at 
> java.base/java.util.ServiceLoader$LazyClassPathLookupIterator.hasNextService(ServiceLoader.java:1228)
>     at 
> java.base/java.util.ServiceLoader$LazyClassPathLookupIterator.hasNext(ServiceLoader.java:1273)
>     at java.base/java.util.ServiceLoader$2.hasNext(ServiceLoader.java:1309)
>     at java.base/java.util.ServiceLoader$3.hasNext(ServiceLoader.java:1393)
>     at java.base/java.util.ServiceLoader.findFirst(ServiceLoader.java:1812)
>     at java.base/java.net.InetAddress.loadResolver(InetAddress.java:508)
>     at java.base/java.net.InetAddress.resolver(InetAddress.java:488)
>     at 
> java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1826)
>     at 
> java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:1139)
>     at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1818)
>     at java.base/java.net.InetAddress.getLocalHost(InetAddress.java:1931)
>     at 
> org.apache.logging.log4j.core.util.NetUtils.getLocalHostname(NetUtils.java:56)
>     at 
> org.apache.logging.log4j.core.LoggerContext.lambda$setConfiguration$0(LoggerContext.java:625)
>  {code}
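For illustration, a hypothetical diagnostic sketch (not part of this issue) that 
shows the failure mode: it lists every provider named in the relevant service 
descriptor on the classpath and reports any class that cannot be loaded, which 
is exactly what happens when a shaded jar keeps the descriptor but not the 
relocated provider class.
```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Enumeration;

// Hypothetical diagnostic: check that each provider listed in the
// InetAddressResolverProvider service descriptor is actually loadable.
public class ResolverProviderCheck {
  public static void main(String[] args) throws Exception {
    String resource = "META-INF/services/java.net.spi.InetAddressResolverProvider";
    Enumeration<URL> urls =
        Thread.currentThread().getContextClassLoader().getResources(resource);
    while (urls.hasMoreElements()) {
      URL url = urls.nextElement();
      System.out.println("descriptor: " + url);
      try (BufferedReader in = new BufferedReader(
          new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
        String line;
        while ((line = in.readLine()) != null) {
          String name = line.trim();
          if (name.isEmpty() || name.startsWith("#")) {
            continue;
          }
          try {
            Class.forName(name, false,
                ResolverProviderCheck.class.getClassLoader());
            System.out.println("  loadable: " + name);
          } catch (ClassNotFoundException e) {
            // this is the broken case reported above
            System.out.println("  missing:  " + name);
          }
        }
      }
    }
  }
}
```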



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19288) hadoop-client-runtime exclude dnsjava InetAddressResolverProvider

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884366#comment-17884366
 ] 

ASF GitHub Bot commented on HADOOP-19288:
-

hadoop-yetus commented on PR #7070:
URL: https://github.com/apache/hadoop/pull/7070#issuecomment-2371838528

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  shadedclient  |  92m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   7m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  shadedclient  |  40m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 18s |  |  hadoop-client-runtime in the 
patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 143m 34s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7070/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7070 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux 702579d672a0 5.15.0-119-generic #129-Ubuntu SMP Fri Aug 2 
19:25:20 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3cedab0dcfa6c6d823c3252f122639f0396b68bd |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7070/1/testReport/ |
   | Max. process+thread count | 528 (vs. ulimit of 5500) |
   | modules | C: hadoop-client-modules/hadoop-client-runtime U: 
hadoop-client-modules/hadoop-client-runtime |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7070/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> hadoop-client-runtime exclude dnsjava InetAddressResolverProvider
> -
>
>  

[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884360#comment-17884360
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

anujmodi2021 commented on code in PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#discussion_r1773692284


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java:
##
@@ -271,4 +275,49 @@ private void 
ensureGetAclDetermineHnsStatusAccuratelyInternal(int statusCode,
 Mockito.verify(mockClient, times(1))
 .getAclStatus(anyString(), any(TracingContext.class));
   }
+
+  @Test
+  public void testAccountSpecificConfig() throws Exception {
+Configuration rawConfig = new Configuration();
+rawConfig.addResource(TEST_CONFIGURATION_FILE_NAME);
+rawConfig.unset(FS_AZURE_ACCOUNT_IS_HNS_ENABLED);
+rawConfig.unset(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED,
+this.getAccountName()));
+String accountName1 = "account1.dfs.core.windows.net";
+String accountName2 = "account2.dfs.core.windows.net";
+String accountName3 = "account3.dfs.core.windows.net";
+String defaultUri1 = this.getTestUrl().replace(this.getAccountName(), 
accountName1);
+String defaultUri2 = this.getTestUrl().replace(this.getAccountName(), 
accountName2);
+String defaultUri3 = this.getTestUrl().replace(this.getAccountName(), 
accountName3);
+
+// Set both account specific and account agnostic config for account 1
+rawConfig.set(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName1), FALSE_STR);
+rawConfig.set(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, TRUE_STR);
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri1);
+AzureBlobFileSystem fs1 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+// Assert that account specific config takes precedence
+Assertions.assertThat(getIsNamespaceEnabled(fs1)).describedAs(
+"getIsNamespaceEnabled should return true when the "
++ "account specific config is set as true").isFalse();
+
+// Set only the account specific config for account 2
+rawConfig.set(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName2), FALSE_STR);
+rawConfig.unset(FS_AZURE_ACCOUNT_IS_HNS_ENABLED);
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri2);
+AzureBlobFileSystem fs2 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+// Assert that account specific config is enough.
+Assertions.assertThat(getIsNamespaceEnabled(fs2)).describedAs(
+"getIsNamespaceEnabled should return true when the "
++ "account specific config is set as true").isFalse();
+
+// Set only account agnostic config for account 3
+rawConfig.set(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, FALSE_STR);
+rawConfig.unset(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName3));
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri3);
+AzureBlobFileSystem fs3 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+// Assert that account agnostic config is enough.
+Assertions.assertThat(getIsNamespaceEnabled(fs3)).describedAs(
+"getIsNamespaceEnabled should return true when the "
++ "account specific config is not set").isFalse();
+  }

Review Comment:
   Self-note: I also need to close these file systems created here.
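   One way to do that (a sketch against the quoted test above, not a line from the PR; only the first of the three blocks is shown) is to create each instance in a try-with-resources so it is closed even if its assertion throws:
   ```java
   rawConfig.set(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, accountName1), FALSE_STR);
   rawConfig.set(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, TRUE_STR);
   rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, defaultUri1);
   // The instance is closed automatically at the end of the block.
   try (AzureBlobFileSystem fs1 =
       (AzureBlobFileSystem) FileSystem.newInstance(rawConfig)) {
     // Assert that the account specific config takes precedence.
     Assertions.assertThat(getIsNamespaceEnabled(fs1)).describedAs(
         "account specific config should take precedence").isFalse();
   }
   ```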





> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config either only for some accounts or set it 
> differently for different account.
> ABFS driver today does not allow this to be set as account specific config.
> This Jira is to allow that as a new support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19243) Upgrade Mockito version to 4.11.0

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884345#comment-17884345
 ] 

ASF GitHub Bot commented on HADOOP-19243:
-

sadanand48 commented on code in PR #6968:
URL: https://github.com/apache/hadoop/pull/6968#discussion_r1773635798


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java:
##
@@ -253,9 +253,8 @@ static long toLong(long preferredBlockSize, long 
layoutRedundancy,
 
   private BlockInfo[] blocks;
 
-  INodeFile(long id, byte[] name, PermissionStatus permissions, long mtime,
-long atime, BlockInfo[] blklist, short replication,
-long preferredBlockSize) {
+  public INodeFile(long id, byte[] name, PermissionStatus permissions, long 
mtime, long atime,

Review Comment:
   This was done to fix the failing test at 
[TestFileWithSnapshotFeature.java](https://github.com/apache/hadoop/pull/6968/files#diff-1dc61f49252bdd57529e44092586fe9adaba7b1f3f9b8c55e88b21b3ff2aa63d)
 . The test was failing with the Mockito upgrade, and this change was needed to 
fix it.
   There are many differences between the older and newer Mockito, and also 
between mockito-core and mockito-inline, which we use in multiple places. So far 
we have worked through this PR iteratively to check what works, and the failing 
unit tests for the Mockito upgrade required to support the JDK 17 runtime are 
now down to a single digit.
   One difference I remember between Mockito versions: with older 
versions/mockito-core it is allowed to call a real method, whereas with 
mockito-inline and later versions every call must be strictly stubbed via 
doReturn/doAnswer etc.; see the sketch below.
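   To make the distinction concrete, here is a minimal sketch (not taken from the PR; the `Service` class and its method are made up for illustration, and it assumes a recent mockito-core on the classpath) of default-returning mocks, explicit `doReturn` stubbing, and opting back into real behaviour:
   ```java
   import static org.mockito.Mockito.*;

   public class StubbingSketch {
     // Hypothetical class, only used to illustrate stubbing styles.
     static class Service {
       String fetch() { return "real"; }
     }

     public static void main(String[] args) {
       // Unstubbed calls on a mock return defaults (null/0/false).
       Service plainMock = mock(Service.class);
       System.out.println(plainMock.fetch());   // prints: null

       // Explicit stubbing, the style the upgraded tests lean on.
       Service stubbed = mock(Service.class);
       doReturn("stubbed").when(stubbed).fetch();
       System.out.println(stubbed.fetch());     // prints: stubbed

       // Opting back into the real implementation where a test needs it.
       Service partial = mock(Service.class);
       when(partial.fetch()).thenCallRealMethod();
       System.out.println(partial.fetch());     // prints: real
     }
   }
   ```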





> Upgrade Mockito version to 4.11.0
> -
>
> Key: HADOOP-19243
> URL: https://issues.apache.org/jira/browse/HADOOP-19243
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Muskan Mishra
>Assignee: Muskan Mishra
>Priority: Major
>  Labels: pull-request-available
>
> While compiling test classes with JDK 17, we faced an error related to Mockito:
> *Mockito cannot mock this class.*
> So to make the build compatible with JDK 17 we have to upgrade the versions of 
> mockito-core as well as mockito-inline.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15760) Include Apache Commons Collections4

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884341#comment-17884341
 ] 

ASF GitHub Bot commented on HADOOP-15760:
-

steveloughran merged PR #7006:
URL: https://github.com/apache/hadoop/pull/7006




> Include Apache Commons Collections4
> ---
>
> Key: HADOOP-15760
> URL: https://issues.apache.org/jira/browse/HADOOP-15760
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.10.0, 3.0.3
>Reporter: David Mollitor
>Assignee: David Mollitor
>Priority: Major
>  Labels: pull-request-available
> Attachments: HADOOP-15760.1.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Please allow for use of Apache Commons Collections 4 library with the end 
> goal of migrating from Apache Commons Collections 3.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19243) Upgrade Mockito version to 4.11.0

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884331#comment-17884331
 ] 

ASF GitHub Bot commented on HADOOP-19243:
-

steveloughran commented on code in PR #6968:
URL: https://github.com/apache/hadoop/pull/6968#discussion_r1773597712


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/src/test/java/org/apache/hadoop/yarn/server/timelineservice/documentstore/TestDocumentStoreTimelineWriterImpl.java:
##
@@ -54,10 +58,17 @@ public void setUp() throws YarnException {
 "https://localhost:443";);
 conf.set(DocumentStoreUtils.TIMELINE_SERVICE_COSMOSDB_MASTER_KEY,
 "1234567");
-PowerMockito.mockStatic(DocumentStoreFactory.class);
-PowerMockito.when(DocumentStoreFactory.createDocumentStoreWriter(
-ArgumentMatchers.any(Configuration.class)))
-.thenReturn(documentStoreWriter);
+mockedFactory = Mockito.mockStatic(DocumentStoreFactory.class);
+mockedFactory.when(() -> DocumentStoreFactory.createDocumentStoreWriter(
+ArgumentMatchers.any(Configuration.class)))
+.thenReturn(documentStoreWriter);
+  }
+
+  @After
+  public void tearDown() {
+if(mockedFactory != null) {

Review Comment:
   nit: add a space



##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-documentstore/src/test/java/org/apache/hadoop/yarn/server/timelineservice/documentstore/writer/cosmosdb/TestCosmosDBDocumentStoreWriter.java:
##
@@ -28,14 +28,16 @@
 import org.junit.Before;
 import org.junit.Test;
 import org.junit.runner.RunWith;
-import org.mockito.ArgumentMatchers;
 import org.mockito.Mockito;
-import org.powermock.api.mockito.PowerMockito;
+

Review Comment:
   no need to add a space here



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeFile.java:
##
@@ -253,9 +253,8 @@ static long toLong(long preferredBlockSize, long 
layoutRedundancy,
 
   private BlockInfo[] blocks;
 
-  INodeFile(long id, byte[] name, PermissionStatus permissions, long mtime,
-long atime, BlockInfo[] blklist, short replication,
-long preferredBlockSize) {
+  public INodeFile(long id, byte[] name, PermissionStatus permissions, long 
mtime, long atime,

Review Comment:
   why this change?



##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestCacheDirectives.java:
##
@@ -1575,7 +1575,7 @@ public void testNoLookupsWhenNotUsed() throws Exception {
 CacheManager cm = cluster.getNamesystem().getCacheManager();
 LocatedBlocks locations = Mockito.mock(LocatedBlocks.class);
 cm.setCachedLocations(locations);
-Mockito.verifyZeroInteractions(locations);

Review Comment:
   still hoping to see this to assist backports
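   For reference, in Mockito 3.x and later the removed `verifyZeroInteractions` is superseded by `verifyNoInteractions`; a minimal sketch of what the replacement line presumably looks like (the added line is not visible in the quoted diff):
   ```java
   // Same intent as the removed line: assert that nothing touched the mock.
   Mockito.verifyNoInteractions(locations);
   ```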





> Upgrade Mockito version to 4.11.0
> -
>
> Key: HADOOP-19243
> URL: https://issues.apache.org/jira/browse/HADOOP-19243
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Muskan Mishra
>Assignee: Muskan Mishra
>Priority: Major
>  Labels: pull-request-available
>
> While compiling test classes with JDK 17, we faced an error related to Mockito:
> *Mockito cannot mock this class.*
> So to make the build compatible with JDK 17 we have to upgrade the versions of 
> mockito-core as well as mockito-inline.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19165) Explore dropping protobuf 2.5.0 from the distro

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884328#comment-17884328
 ] 

ASF GitHub Bot commented on HADOOP-19165:
-

ayushtkn merged PR #7051:
URL: https://github.com/apache/hadoop/pull/7051




> Explore dropping protobuf 2.5.0 from the distro
> ---
>
> Key: HADOOP-19165
> URL: https://issues.apache.org/jira/browse/HADOOP-19165
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Explore whether protobuf-2.5.0 can be dropped from the distro. It is a transitive 
> dependency from HBase, but HBase doesn't use it in the code.
> Check whether it is the only thing pulling it into the distro and whether anything 
> will break if we exclude it; if nothing does, let's get rid of it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19286) Support S3A cross region access when S3 region/endpoint is set

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884325#comment-17884325
 ] 

ASF GitHub Bot commented on HADOOP-19286:
-

steveloughran commented on code in PR #7067:
URL: https://github.com/apache/hadoop/pull/7067#discussion_r1773578299


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -346,6 +347,37 @@ public void 
testCentralEndpointAndDifferentRegionThanBucket() throws Throwable {
 assertRequesterPaysFileExistence(newConf);
   }
 
+  @Test
+  public void testWithOutCrossRegionAccess() throws Exception {
+describe("Verify cross region access fails when disabled");
+final Configuration newConf = new Configuration(getConfiguration());
+// skip the test if the region is eu-west-2
+String region = 
getFileSystem().getS3AInternals().getBucketMetadata().bucketRegion();
+if (EU_WEST_2.equals(region)) {
+  return;
+}
+// disable cross region access
+newConf.setBoolean(AWS_S3_CROSS_REGION_ACCESS_ENABLED, false);
+newConf.set(AWS_REGION, EU_WEST_2);
+S3AFileSystem fs = new S3AFileSystem();
+fs.initialize(getFileSystem().getUri(), newConf);
+intercept(AWSRedirectException.class,
+"does not match the AWS region containing the bucket",
+() -> fs.exists(getFileSystem().getWorkingDirectory()));
+  }
+
+  @Test
+  public void testWithCrossRegionAccess() throws Exception {
+describe("Verify cross region access succeed when enabled");
+final Configuration newConf = new Configuration(getConfiguration());
+// enable cross region access
+newConf.setBoolean(AWS_S3_CROSS_REGION_ACCESS_ENABLED, true);
+newConf.set(AWS_REGION, EU_WEST_2);

Review Comment:
   same comments as above



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -346,6 +347,37 @@ public void 
testCentralEndpointAndDifferentRegionThanBucket() throws Throwable {
 assertRequesterPaysFileExistence(newConf);
   }
 
+  @Test
+  public void testWithOutCrossRegionAccess() throws Exception {
+describe("Verify cross region access fails when disabled");
+final Configuration newConf = new Configuration(getConfiguration());
+// skip the test if the region is eu-west-2
+String region = 
getFileSystem().getS3AInternals().getBucketMetadata().bucketRegion();
+if (EU_WEST_2.equals(region)) {

Review Comment:
   1. I'd like a different region here a that is my region and I would like the 
test coverage.
   2. what happens with third party stores?



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEndpointRegion.java:
##
@@ -346,6 +347,37 @@ public void 
testCentralEndpointAndDifferentRegionThanBucket() throws Throwable {
 assertRequesterPaysFileExistence(newConf);
   }
 
+  @Test
+  public void testWithOutCrossRegionAccess() throws Exception {
+describe("Verify cross region access fails when disabled");
+final Configuration newConf = new Configuration(getConfiguration());
+// skip the test if the region is eu-west-2
+String region = 
getFileSystem().getS3AInternals().getBucketMetadata().bucketRegion();
+if (EU_WEST_2.equals(region)) {
+  return;
+}
+// disable cross region access
+newConf.setBoolean(AWS_S3_CROSS_REGION_ACCESS_ENABLED, false);
+newConf.set(AWS_REGION, EU_WEST_2);
+S3AFileSystem fs = new S3AFileSystem();
+fs.initialize(getFileSystem().getUri(), newConf);
+intercept(AWSRedirectException.class,
+"does not match the AWS region containing the bucket",
+() -> fs.exists(getFileSystem().getWorkingDirectory()));
+  }
+
+  @Test
+  public void testWithCrossRegionAccess() throws Exception {
+describe("Verify cross region access succeed when enabled");
+final Configuration newConf = new Configuration(getConfiguration());
+// enable cross region access
+newConf.setBoolean(AWS_S3_CROSS_REGION_ACCESS_ENABLED, true);
+newConf.set(AWS_REGION, EU_WEST_2);
+S3AFileSystem fs = new S3AFileSystem();

Review Comment:
   needs to be in try-with-resources to close()
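   A minimal sketch of the shape being asked for (adapted from the quoted test body, not a line from the PR), so the filesystem is closed even when the assertion throws:
   ```java
   try (S3AFileSystem fs = new S3AFileSystem()) {
     fs.initialize(getFileSystem().getUri(), newConf);
     intercept(AWSRedirectException.class,
         "does not match the AWS region containing the bucket",
         () -> fs.exists(getFileSystem().getWorkingDirectory()));
   }
   ```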





> Support S3A cross region access when S3 region/endpoint is set
> --
>
> Key: HADOOP-19286
> URL: https://issues.apache.org/jira/browse/HADOOP-19286
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> Currently, when neither the S3 region nor the endpoint is set, the default region 
> is set to us-east-2 with cross region access enabled. But when a region or 
> endpoint is set, cross region access is not enabled.

[jira] [Commented] (HADOOP-19287) Fix KMSTokenRenewer#handleKind dependency on BouncyCastleProvider class

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884273#comment-17884273
 ] 

ASF GitHub Bot commented on HADOOP-19287:
-

cxzl25 commented on code in PR #7068:
URL: https://github.com/apache/hadoop/pull/7068#discussion_r1773324396


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java:
##
@@ -178,7 +178,7 @@ public static class KMSTokenRenewer extends TokenRenewer {
 
 @Override
 public boolean handleKind(Text kind) {
-  return kind.equals(TOKEN_KIND);
+  return kind.equals(KMSDelegationToken.TOKEN_KIND);

Review Comment:
   Maybe HADOOP-19152 fixed this problem. Before HADOOP-19152, 
`KMSClientProvider` inherited `KeyProvider`, and `KeyProvider` imported 
`org.bouncycastle.jce.provider.BouncyCastleProvider`, but 
`hadoop-client-runtime` did not package bcprov.
   
   My change was to avoid initializing `KMSClientProvider`.
   
   Do we need to backport HADOOP-19152 for 3.3 and 3.4?





> Fix KMSTokenRenewer#handleKind dependency on BouncyCastleProvider class
> ---
>
> Key: HADOOP-19287
> URL: https://issues.apache.org/jira/browse/HADOOP-19287
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: dzcxzl
>Priority: Major
>  Labels: pull-request-available
>
>  
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/bouncycastle/jce/provider/BouncyCastleProvider
>     at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer.handleKind(KMSClientProvider.java:180)
>     at org.apache.hadoop.security.token.Token.getRenewer(Token.java:467)
>     at org.apache.hadoop.security.token.Token.renew(Token.java:500)
>     at 
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$3(HadoopFSDelegationTokenProvider.scala:147)
>     at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.scala:17)
>     at scala.util.Try$.apply(Try.scala:217)
>     at 
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$2(HadoopFSDelegationTokenProvider.scala:146)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19287) Fix KMSTokenRenewer#handleKind dependency on BouncyCastleProvider class

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884269#comment-17884269
 ] 

ASF GitHub Bot commented on HADOOP-19287:
-

ayushtkn commented on code in PR #7068:
URL: https://github.com/apache/hadoop/pull/7068#discussion_r1773302719


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java:
##
@@ -178,7 +178,7 @@ public static class KMSTokenRenewer extends TokenRenewer {
 
 @Override
 public boolean handleKind(Text kind) {
-  return kind.equals(TOKEN_KIND);
+  return kind.equals(KMSDelegationToken.TOKEN_KIND);

Review Comment:
   ``TOKEN_KIND`` stores the same value in the same class; how do things change 
by not using the variable?
   ```
 public static final Text TOKEN_KIND = KMSDelegationToken.TOKEN_KIND;
   ```





> Fix KMSTokenRenewer#handleKind dependency on BouncyCastleProvider class
> ---
>
> Key: HADOOP-19287
> URL: https://issues.apache.org/jira/browse/HADOOP-19287
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: dzcxzl
>Priority: Major
>  Labels: pull-request-available
>
>  
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/bouncycastle/jce/provider/BouncyCastleProvider
>     at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer.handleKind(KMSClientProvider.java:180)
>     at org.apache.hadoop.security.token.Token.getRenewer(Token.java:467)
>     at org.apache.hadoop.security.token.Token.renew(Token.java:500)
>     at 
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$3(HadoopFSDelegationTokenProvider.scala:147)
>     at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.scala:17)
>     at scala.util.Try$.apply(Try.scala:217)
>     at 
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$2(HadoopFSDelegationTokenProvider.scala:146)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19287) Fix KMSTokenRenewer#handleKind dependency on BouncyCastleProvider class

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884222#comment-17884222
 ] 

ASF GitHub Bot commented on HADOOP-19287:
-

hadoop-yetus commented on PR #7068:
URL: https://github.com/apache/hadoop/pull/7068#issuecomment-2370932894

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   8m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   8m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   8m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 37s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 144m  4s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7068/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7068 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 008e076953ad 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d32491c087cd5fb2cefcd451994bdba9b5f29085 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7068/1/testReport/ |
   | Max. process+thread count | 1273 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7068/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.

[jira] [Commented] (HADOOP-18708) AWS SDK V2 - Implement CSE

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884217#comment-17884217
 ] 

ASF GitHub Bot commented on HADOOP-18708:
-

hadoop-yetus commented on PR #6884:
URL: https://github.com/apache/hadoop/pull/6884#issuecomment-2370900316

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 17 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 40s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m  3s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |  16m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   4m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   2m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +0 :ok: |  spotbugs  |   0m 45s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  35m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 38s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m  3s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |  17m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |  16m 24s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/14/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   4m 21s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/14/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 33 unchanged - 1 fixed = 34 total (was 
34)  |
   | +1 :green_heart: |  mvnsite  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +0 :ok: |  spotbugs  |   0m 40s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  35m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 38s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  19m 42s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 58s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 254m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6884/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6884 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint 
markdownlint |
   | uname | Linux 11f94ee58154 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personal

[jira] [Commented] (HADOOP-19286) Support S3A cross region access when S3 region/endpoint is set

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884210#comment-17884210
 ] 

ASF GitHub Bot commented on HADOOP-19286:
-

shameersss1 commented on PR #7067:
URL: https://github.com/apache/hadoop/pull/7067#issuecomment-2370868022

   @ahmarsuhail @steveloughran  Could you please review the changes ?




> Support S3A cross region access when S3 region/endpoint is set
> --
>
> Key: HADOOP-19286
> URL: https://issues.apache.org/jira/browse/HADOOP-19286
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> Currently, when neither the S3 region nor the endpoint is set, the default region 
> is set to us-east-2 with cross region access enabled. But when a region or 
> endpoint is set, cross region access is not enabled.
> The proposal here is to carve out cross region access as a separate config and 
> enable/disable it irrespective of whether a region/endpoint is set. This gives 
> more flexibility to the user.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19286) Support S3A cross region access when S3 region/endpoint is set

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884197#comment-17884197
 ] 

ASF GitHub Bot commented on HADOOP-19286:
-

hadoop-yetus commented on PR #7067:
URL: https://github.com/apache/hadoop/pull/7067#issuecomment-2370800799

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 50s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 129m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7067/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7067 |
   | JIRA Issue | HADOOP-19286 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux bcb963e8e9da 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1c2e59c6423808c1da084d08f50128a85556cb7f |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7067/1/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7067/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Support S3A cross region access when S3 region/en

[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884189#comment-17884189
 ] 

ASF GitHub Bot commented on HADOOP-15984:
-

slfan1989 commented on PR #7019:
URL: https://github.com/apache/hadoop/pull/7019#issuecomment-2370738063

   @ayushtkn @aajisaka @virajjasani @steveloughran 
   
   I think we can start reviewing this PR now. Most issues have been resolved, 
but there are still some problems that I will continue to address. In the 
meantime, I would like to hear your thoughts on this PR.




> Update jersey from 1.19 to 2.x
> --
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Assignee: Shilun Fan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19279) ABFS: Disabling Apache Http Client as Default Http Client for ABFS Driver

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884185#comment-17884185
 ] 

ASF GitHub Bot commented on HADOOP-19279:
-

manika137 commented on PR #7055:
URL: https://github.com/apache/hadoop/pull/7055#issuecomment-237061

   > Hey @manika137 I don't see any update on the jira. could you please point 
out what have you updated thanks
   > 
   > btw I have updated the release notes here 
https://issues.apache.org/jira/browse/HADOOP-19120 saying this is just an 
addition. default is still old JDK HTTP client.
   
   Hey @mukund-thakur, sorry, I updated the other release note. Thanks for 
updating it!
   Should I add details for the default change as well here 
https://issues.apache.org/jira/browse/HADOOP-19120?




> ABFS: Disabling Apache Http Client as Default Http Client for ABFS Driver
> -
>
> Key: HADOOP-19279
> URL: https://issues.apache.org/jira/browse/HADOOP-19279
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.4.1
>Reporter: Manika Joshi
>Assignee: Manika Joshi
>Priority: Minor
>  Labels: pull-request-available
>
> As part of the work done under HADOOP-19120 ([ABFS]: ApacheHttpClient adaptation 
> as network library), Apache Http Client was introduced as an alternative network 
> library that can be used with the ABFS driver. Earlier, the JDK Http Client was 
> the only supported network library.
> Apache Http Client was found to be more helpful in terms of the controls and 
> knobs it provides to better manage the network aspects of the driver. Hence it 
> was made the default network client for the ABFS driver.
> Recently, while running scale workloads, we observed a regression in which 
> establishing connections incurred unexpected wait times. A possible fix has been 
> identified and we are working on getting it fixed.
> One scenario identified during internal tests revealed that, with the current 
> Apache client implementation, a connection cannot become stale. As a result, 
> it is safe to remove the check for stale connections when closing them. This 
> change will optimize the connection handling process by eliminating 
> unnecessary delays and reducing the risk of potential failures caused by 
> redundant checks.
> There was also a possible NPE scenario which was identified on the new 
> network client code recently.
> Until we are done with the code fixes and have revalidated the whole Apache 
> client flow, we would like to make JDK Client as default client again. The 
> new support for Apache Http Client will still be there, but it will be 
> disabled behind a config.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19287) Fix KMSTokenRenewer#handleKind dependency on BouncyCastleProvider class

2024-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19287:

Labels: pull-request-available  (was: )

> Fix KMSTokenRenewer#handleKind dependency on BouncyCastleProvider class
> ---
>
> Key: HADOOP-19287
> URL: https://issues.apache.org/jira/browse/HADOOP-19287
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: dzcxzl
>Priority: Major
>  Labels: pull-request-available
>
>  
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/bouncycastle/jce/provider/BouncyCastleProvider
>     at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer.handleKind(KMSClientProvider.java:180)
>     at org.apache.hadoop.security.token.Token.getRenewer(Token.java:467)
>     at org.apache.hadoop.security.token.Token.renew(Token.java:500)
>     at 
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$3(HadoopFSDelegationTokenProvider.scala:147)
>     at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.scala:17)
>     at scala.util.Try$.apply(Try.scala:217)
>     at 
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$2(HadoopFSDelegationTokenProvider.scala:146)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19287) Fix KMSTokenRenewer#handleKind dependency on BouncyCastleProvider class

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884183#comment-17884183
 ] 

ASF GitHub Bot commented on HADOOP-19287:
-

cxzl25 opened a new pull request, #7068:
URL: https://github.com/apache/hadoop/pull/7068

   ### Description of PR
   
   `KMSTokenRenewer#handleKind` compares the token kind against 
`KMSClientProvider#TOKEN_KIND`. Referencing that field requires the 
`KMSClientProvider` class to load, which in turn needs `BouncyCastleProvider` and 
leads to `NoClassDefFoundError: org.bouncycastle.jce.provider.BouncyCastleProvider`.
   
   ```java
   Exception in thread "main" java.lang.NoClassDefFoundError: 
org/bouncycastle/jce/provider/BouncyCastleProvider
       at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer.handleKind(KMSClientProvider.java:180)
       at org.apache.hadoop.security.token.Token.getRenewer(Token.java:467)
       at org.apache.hadoop.security.token.Token.renew(Token.java:500)
       at 
org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$3(HadoopFSDelegationTokenProvider.scala:147)
       at 
scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.scala:17)
       at scala.util.Try$.apply(Try.scala:217)
       at 
org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$2(HadoopFSDelegationTokenProvider.scala:146)
 
   ```
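   To illustrate why the one-line change helps, here is a self-contained sketch (not Hadoop code; `Heavy` and `Light` are made-up stand-ins for `KMSClientProvider` and `KMSDelegationToken`): reading a static field that is not a compile-time constant forces initialization of the class that declares it.
   ```java
   class Light {
     // Not a compile-time constant (like Hadoop's Text), so readers initialise Light only.
     static final String KIND = new String("kms-dt");
   }

   class Heavy {
     static { System.out.println("Heavy initialised"); } // stands in for pulling in BouncyCastleProvider
     static final String KIND = Light.KIND;              // mirrors TOKEN_KIND = KMSDelegationToken.TOKEN_KIND
   }

   public class InitSketch {
     public static void main(String[] args) {
       System.out.println(Light.KIND);   // does not initialise Heavy
       System.out.println(Heavy.KIND);   // runs Heavy's static initialiser first
     }
   }
   ```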
   
   ### How was this patch tested?
   Production environment verification
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Fix KMSTokenRenewer#handleKind dependency on BouncyCastleProvider class
> ---
>
> Key: HADOOP-19287
> URL: https://issues.apache.org/jira/browse/HADOOP-19287
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: dzcxzl
>Priority: Major
>
>  
> {code:java}
> Exception in thread "main" java.lang.NoClassDefFoundError: 
> org/bouncycastle/jce/provider/BouncyCastleProvider
>     at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider$KMSTokenRenewer.handleKind(KMSClientProvider.java:180)
>     at org.apache.hadoop.security.token.Token.getRenewer(Token.java:467)
>     at org.apache.hadoop.security.token.Token.renew(Token.java:500)
>     at 
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$3(HadoopFSDelegationTokenProvider.scala:147)
>     at scala.runtime.java8.JFunction0$mcJ$sp.apply(JFunction0$mcJ$sp.scala:17)
>     at scala.util.Try$.apply(Try.scala:217)
>     at 
> org.apache.spark.deploy.security.HadoopFSDelegationTokenProvider.$anonfun$getTokenRenewalInterval$2(HadoopFSDelegationTokenProvider.scala:146)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19286) Support S3A cross region access when S3 region/endpoint is set

2024-09-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19286:

Labels: pull-request-available  (was: )

> Support S3A cross region access when S3 region/endpoint is set
> --
>
> Key: HADOOP-19286
> URL: https://issues.apache.org/jira/browse/HADOOP-19286
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> Currently, when neither the S3 region nor the endpoint is set, the default region 
> is set to us-east-2 with cross region access enabled. But when a region or 
> endpoint is set, cross region access is not enabled.
> The proposal here is to carve out cross region access as a separate config and 
> enable/disable it irrespective of whether a region/endpoint is set. This gives 
> more flexibility to the user.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19286) Support S3A cross region access when S3 region/endpoint is set

2024-09-24 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884148#comment-17884148
 ] 

ASF GitHub Bot commented on HADOOP-19286:
-

shameersss1 opened a new pull request, #7067:
URL: https://github.com/apache/hadoop/pull/7067

   Currently, when neither the S3 region nor the endpoint is set, the default region 
is set to us-east-2 with cross region access enabled. But when a region or endpoint 
is set, cross region access is not enabled.
   
   The proposal here is to carve out cross region access as a separate config and 
enable/disable it irrespective of whether a region/endpoint is set. This gives more 
flexibility to the user.
   
   S3 cross region access can be enabled/disabled via config 
`fs.s3a.cross.region.access.enabled` which is set to true by default.
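   For example, a client that wants the previous behaviour back could disable it in its Hadoop `Configuration`. This is a sketch based only on the property name above; the bucket URI is a placeholder, not a real endpoint:
   ```java
   import java.net.URI;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FileSystem;

   public class CrossRegionConfigSketch {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       // Opt out of the proposed cross region access behaviour (defaults to true per this PR).
       conf.setBoolean("fs.s3a.cross.region.access.enabled", false);
       // "s3a://example-bucket/" is a placeholder URI for illustration only.
       try (FileSystem fs = FileSystem.newInstance(URI.create("s3a://example-bucket/"), conf)) {
         System.out.println(fs.getUri());
       }
     }
   }
   ```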
   




> Support S3A cross region access when S3 region/endpoint is set
> --
>
> Key: HADOOP-19286
> URL: https://issues.apache.org/jira/browse/HADOOP-19286
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Syed Shameerur Rahman
>Assignee: Syed Shameerur Rahman
>Priority: Major
>
> Currently, when neither the S3 region nor the endpoint is set, the default region 
> is set to us-east-2 with cross region access enabled. But when a region or 
> endpoint is set, cross region access is not enabled.
> The proposal here is to carve out cross region access as a separate config and 
> enable/disable it irrespective of whether a region/endpoint is set. This gives 
> more flexibility to the user.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19261) Support force close a DomainSocket for server service

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884127#comment-17884127
 ] 

ASF GitHub Bot commented on HADOOP-19261:
-

hadoop-yetus commented on PR #7057:
URL: https://github.com/apache/hadoop/pull/7057#issuecomment-2370326652

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  55m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |  16m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  36m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |  16m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |  16m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   2m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m  3s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 235m 36s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7057/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7057 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c5d92dbeae50 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f451eca1ecbf8e8b18021cb833b695475c9966c8 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7057/2/testReport/ |
   | Max. process+thread count | 1252 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7057/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Support force close a DomainSocket for server service

[jira] [Commented] (HADOOP-18708) AWS SDK V2 - Implement CSE

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884124#comment-17884124
 ] 

ASF GitHub Bot commented on HADOOP-18708:
-

shameersss1 commented on PR #6884:
URL: https://github.com/apache/hadoop/pull/6884#issuecomment-2370295444

   @steveloughran  - I have resolved the merge conflict and force pushed.




> AWS SDK V2 - Implement CSE
> --
>
> Key: HADOOP-18708
> URL: https://issues.apache.org/jira/browse/HADOOP-18708
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Assignee: Syed Shameerur Rahman
>Priority: Major
>  Labels: pull-request-available
>
> S3 Encryption client for SDK V2 is now available, so add client side 
> encryption back in. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19165) Explore dropping protobuf 2.5.0 from the distro

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884112#comment-17884112
 ] 

ASF GitHub Bot commented on HADOOP-19165:
-

pan3793 commented on PR #7051:
URL: https://github.com/apache/hadoop/pull/7051#issuecomment-2370199206

   According to HBASE-27436, protobuf 2.5 can be purged if Hadoop does not use 
the HBase co-processor feature.




> Explore dropping protobuf 2.5.0 from the distro
> ---
>
> Key: HADOOP-19165
> URL: https://issues.apache.org/jira/browse/HADOOP-19165
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Explore whether protobuf-2.5.0 can be dropped from the distro. It is a transitive 
> dependency from HBase, but HBase doesn't use it in the code.
> Check whether it is the only thing pulling it into the distro and whether anything 
> will break if we exclude it; if nothing does, let's get rid of it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17884060#comment-17884060
 ] 

ASF GitHub Bot commented on HADOOP-15984:
-

hadoop-yetus commented on PR #7019:
URL: https://github.com/apache/hadoop/pull/7019#issuecomment-2369627388

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  6s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  6s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  6s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shelldocs  |   0m  6s |  |  Shelldocs was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  jsonlint  |   0m  0s |  |  jsonlint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  pathlen  |   0m  0s | 
[/results-pathlen.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/18/artifact/out/results-pathlen.txt)
 |  The patch appears to contain 1 files with names longer than 240  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 115 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 38s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |  16m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   4m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  35m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  29m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | -1 :x: |  javadoc  |   0m 17s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server_hadoop-yarn-server-timelineservice-hbase-server-2-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/18/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice-hbase_hadoop-yarn-server-timelineservice-hbase-server_hadoop-yarn-server-timelineservice-hbase-server-2-jdkPrivateBuild-1.8.0_422-8u422-b05-1~20.04-b05.txt)
 |  hadoop-yarn-server-timelineservice-hbase-server-2 in trunk failed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05.  |
   | +0 :ok: |  spotbugs  |   0m 29s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   0m 45s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/18/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |  11m 12s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/18/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn-warnings.html)
 |  hadoop-yarn-project/hadoop-yarn in trunk has 1 extant spotbugs warnings.  |
   | -1 :x: |  spotbugs  |   1m  1s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7019/18/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 in trunk has 1 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   0m 29s |  |  
branch/hadoop-client-modules/hadoop-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 29s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 29s |  |  
branch/hadoop-client-modules/hadoop-client-check-invariants no spotbugs output 
file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 29s |  |  
branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 28s |  |  
branch/hadoop-client-modules/hadoop-client-integrati

[jira] [Commented] (HADOOP-19256) S3A: Support S3 Conditional Writes

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883980#comment-17883980
 ] 

ASF GitHub Bot commented on HADOOP-19256:
-

steveloughran commented on code in PR #7011:
URL: https://github.com/apache/hadoop/pull/7011#discussion_r1771774375


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1390,6 +1390,13 @@ private Constants() {
*/
   public static final String FS_S3A_CREATE_PERFORMANCE = 
"fs.s3a.create.performance";
 
+  /**
+   * Flag for commit if none match.
+   * This can be set in the {@code createFile()} builder.
+   * Value {@value}.
+   */
+  public static final String FS_S3A_CREATE_IF_NONE_MATCH = 
"fs.s3a.create.header.If-None-Match";

Review Comment:
   I'm going to propose
   ```
   fs.s3a.conditional.file.create
   ```
   
   this is to
   * allow it to be set in a hadoop/spark configuration
   * line up for `fs.s3a.conditional.file.rename`
   * hide the actual implementation details.
   
   Please change the option and field names.
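   
   As a rough illustration of the wiring being proposed (a sketch only: the key
   name `fs.s3a.conditional.file.create` is just the proposal above, the bucket
   and path are placeholders, and the final builder plumbing may differ):
   
   ```
   import java.nio.charset.StandardCharsets;
   
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FSDataOutputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   
   public class ConditionalCreateSketch {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       // Cluster-wide default, e.g. core-site.xml or a Spark job conf via the
       // "spark.hadoop." prefix. The key name is only the proposal above.
       conf.setBoolean("fs.s3a.conditional.file.create", true);
   
       Path path = new Path("s3a://example-bucket/data/part-0000");  // placeholder
       FileSystem fs = path.getFileSystem(conf);
   
       // Per-file override through the createFile() builder. opt() keeps the
       // setting best-effort, so a store without the capability is not failed.
       try (FSDataOutputStream out = fs.createFile(path)
           .overwrite(false)
           .opt("fs.s3a.conditional.file.create", true)
           .build()) {
         out.write("hello".getBytes(StandardCharsets.UTF_8));
       }
     }
   }
   ```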
   





> S3A: Support S3 Conditional Writes
> --
>
> Key: HADOOP-19256
> URL: https://issues.apache.org/jira/browse/HADOOP-19256
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>
> S3 Conditional Write (Put-if-absent) capability is now generally available - 
> [https://aws.amazon.com/about-aws/whats-new/2024/08/amazon-s3-conditional-writes/]
>  
> S3A should allow passing in this put-if-absent header to prevent overwriting 
> of files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19256) S3A: Support S3 Conditional Writes

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883957#comment-17883957
 ] 

ASF GitHub Bot commented on HADOOP-19256:
-

steveloughran commented on code in PR #7011:
URL: https://github.com/apache/hadoop/pull/7011#discussion_r1769075759


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestS3APutIfMatch.java:
##
@@ -0,0 +1,113 @@
+package org.apache.hadoop.fs.s3a.impl;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataOutputStreamBuilder;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
+import org.apache.hadoop.fs.s3a.RemoteFileChangedException;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
+import org.apache.hadoop.io.IOUtils;
+
+import org.junit.Assert;
+import org.junit.Test;
+import software.amazon.awssdk.services.s3.model.S3Exception;
+
+import java.io.IOException;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.s3a.Constants.FAST_UPLOAD_BUFFER;
+import static org.apache.hadoop.fs.s3a.Constants.FAST_UPLOAD_BUFFER_ARRAY;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_CREATE_IF_NONE_MATCH;
+import static org.apache.hadoop.fs.s3a.Constants.MIN_MULTIPART_THRESHOLD;
+import static org.apache.hadoop.fs.s3a.Constants.MULTIPART_MIN_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.MULTIPART_SIZE;
+import static 
org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;
+import static 
org.apache.hadoop.fs.s3a.impl.InternalConstants.UPLOAD_PART_COUNT_LIMIT;
+import static 
org.apache.hadoop.fs.s3a.scale.ITestS3AMultipartUploadSizeLimits.MPU_SIZE;
+import static org.apache.hadoop.fs.s3a.scale.S3AScaleTestBase._1MB;
+
+
+public class ITestS3APutIfMatch extends AbstractS3ATestBase {

Review Comment:
   should subclass AbstractS3ACostTest so that once create(overwrite=false) is done 
we can assert that no HEAD request was issued in createFile.
   Override setup to check the if-none-match flag and skip on third-party store 
tests; use skipIfNotEnabled().
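   
   A minimal sketch of the suggested setup shape, with assumptions flagged: it
   keeps the AbstractS3ATestBase parent from the quoted code rather than
   AbstractS3ACostTest (whose cost assertions are omitted here), uses plain
   JUnit Assume in place of the skipIfNotEnabled() helper named above, and the
   probe key "fs.s3a.create.conditional.enabled" is purely hypothetical:
   
   ```
   package org.apache.hadoop.fs.s3a.impl;
   
   import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
   import org.junit.Assume;
   
   public class ITestS3APutIfMatchSketch extends AbstractS3ATestBase {
   
     /** Hypothetical probe key, not a real Constants entry. */
     private static final String CONDITIONAL_CREATE_ENABLED =
         "fs.s3a.create.conditional.enabled";
   
     @Override
     public void setup() throws Exception {
       super.setup();
       // Skip, rather than fail, against stores (e.g. third-party S3
       // implementations) that do not advertise conditional-create support.
       Assume.assumeTrue("store does not support conditional create",
           getFileSystem().getConf()
               .getBoolean(CONDITIONAL_CREATE_ENABLED, false));
     }
   }
   ```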



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/impl/ITestS3APutIfMatch.java:
##
@@ -0,0 +1,113 @@
+package org.apache.hadoop.fs.s3a.impl;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FSDataOutputStreamBuilder;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
+import org.apache.hadoop.fs.s3a.RemoteFileChangedException;
+import org.apache.hadoop.fs.s3a.S3ATestUtils;
+import org.apache.hadoop.io.IOUtils;
+
+import org.junit.Assert;
+import org.junit.Test;
+import software.amazon.awssdk.services.s3.model.S3Exception;
+
+import java.io.IOException;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
+import static org.apache.hadoop.fs.s3a.Constants.FAST_UPLOAD_BUFFER;
+import static org.apache.hadoop.fs.s3a.Constants.FAST_UPLOAD_BUFFER_ARRAY;
+import static org.apache.hadoop.fs.s3a.Constants.FS_S3A_CREATE_IF_NONE_MATCH;
+import static org.apache.hadoop.fs.s3a.Constants.MIN_MULTIPART_THRESHOLD;
+import static org.apache.hadoop.fs.s3a.Constants.MULTIPART_MIN_SIZE;
+import static org.apache.hadoop.fs.s3a.Constants.MULTIPART_SIZE;
+import static 
org.apache.hadoop.fs.s3a.S3ATestUtils.removeBaseAndBucketOverrides;
+import static 
org.apache.hadoop.fs.s3a.impl.InternalConstants.UPLOAD_PART_COUNT_LIMIT;
+import static 
org.apache.hadoop.fs.s3a.scale.ITestS3AMultipartUploadSizeLimits.MPU_SIZE;
+import static org.apache.hadoop.fs.s3a.scale.S3AScaleTestBase._1MB;
+
+
+public class ITestS3APutIfMatch extends AbstractS3ATestBase {
+
+@Override
+protected Configuration createConfiguration() {
+Configuration conf = super.createConfiguration();
+S3ATestUtils.disableFilesystemCaching(conf);
+removeBaseAndBucketOverrides(conf,
+MULTIPART_SIZE,
+UPLOAD_PART_COUNT_LIMIT);
+conf.setLong(MULTIPART_SIZE, MPU_SIZE);
+conf.setLong(UPLOAD_PART_COUNT_LIMIT, 2);
+conf.setLong(MIN_MULTIPART_THRESHOLD, MULTIPART_MIN_SIZE);
+conf.setInt(MULTIPART_SIZE, MULTIPART_MIN_SIZE);
+conf.set(FAST_UPLOAD_BUFFER, getBlockOutputBufferName());
+return conf;
+}
+
+protected String getBlockOutputBufferName() {
+return FAST_UPLOAD_BUFFER_ARRAY;
+}
+
+/**
+ * Create a file using the PutIfMatch feature from S3
+ * @param fs filesystem
+ * @param path   path to write
+ * @param data source dataset. Can be null
+ * @throws IOException on any problem
+ */
+private static void createFileWithIfNoneMatchFlag(FileSystem fs,
+

[jira] [Commented] (HADOOP-19285) [ABFS] Restore ETAGS_AVAILABLE to abfs path capabilities

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883887#comment-17883887
 ] 

ASF GitHub Bot commented on HADOOP-19285:
-

hadoop-yetus commented on PR #7064:
URL: https://github.com/apache/hadoop/pull/7064#issuecomment-2368363041

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 45s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 20s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 161m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7064/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7064 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9958fffb1c52 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 93b66cef92458360943d800bc354e3d6ba437351 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7064/1/testReport/ |
   | Max. process+thread count | 626 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7064/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883867#comment-17883867
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

hadoop-yetus commented on PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#issuecomment-2368130498

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 139m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7062 |
   | JIRA Issue | HADOOP-19284 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c1b6e7379045 5.15.0-117-generic #127-Ubuntu SMP Fri Jul 5 
20:13:28 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ad926a39b5aa4910ab5e68e4bb2ba38b5e9584dd |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/2/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> ABFS: Allow "fs.azure.account.hns.enabled" 

[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883798#comment-17883798
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

anmolanmol1234 commented on code in PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#discussion_r1771103483


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java:
##
@@ -271,4 +275,44 @@ private void 
ensureGetAclDetermineHnsStatusAccuratelyInternal(int statusCode,
 Mockito.verify(mockClient, times(1))
 .getAclStatus(anyString(), any(TracingContext.class));
   }
+
+  @Test
+  public void testAccountSpecificConfig() throws Exception {
+Configuration rawConfig = new Configuration();
+rawConfig.addResource(TEST_CONFIGURATION_FILE_NAME);
+rawConfig.unset(FS_AZURE_ACCOUNT_IS_HNS_ENABLED);
+rawConfig.unset(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED,
+this.getAccountName()));
+String accountName1 = "account1.dfs.core.windows.net";
+String accountName2 = "account2.dfs.core.windows.net";
+String accountName3 = "account3.dfs.core.windows.net";
+String defaultUri1 = this.getTestUrl().replace(this.getAccountName(), 
accountName1);
+String defaultUri2 = this.getTestUrl().replace(this.getAccountName(), 
accountName2);
+String defaultUri3 = this.getTestUrl().replace(this.getAccountName(), 
accountName3);
+
+// Set account specific config for account 1
+rawConfig.set(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName1), TRUE_STR);
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri1);
+AzureBlobFileSystem fs1 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+
+// Set account specific config for account 2
+rawConfig.set(accountProperty(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, 
accountName2), FALSE_STR);
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri2);
+AzureBlobFileSystem fs2 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+
+// Set account agnostic config for account 3
+rawConfig.set(FS_AZURE_ACCOUNT_IS_HNS_ENABLED, FALSE_STR);
+rawConfig.set(CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY, 
defaultUri3);
+AzureBlobFileSystem fs3 = (AzureBlobFileSystem) 
FileSystem.newInstance(rawConfig);
+
+Assertions.assertThat(getIsNamespaceEnabled(fs1)).describedAs(
+"getIsNamespaceEnabled should return true when the "
++ "account specific config is set as true").isTrue();
+Assertions.assertThat(getIsNamespaceEnabled(fs2)).describedAs(
+"getIsNamespaceEnabled should return false when the "
++ "account specific config is set as false").isFalse();
+Assertions.assertThat(getIsNamespaceEnabled(fs3)).describedAs(

Review Comment:
   This assertion statement should be changed, right? If the config is not set, 
it does a getAcl call to determine the account type.
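   
   For illustration only (the account names are placeholders and the suffixing
   simply mirrors the accountProperty() pattern in the quoted test, not the
   final merged behaviour), the key layout under discussion looks roughly like:
   
   ```
   import org.apache.hadoop.conf.Configuration;
   
   public class HnsConfigSketch {
     public static void main(String[] args) {
       Configuration conf = new Configuration();
   
       // Account-specific: applies only to this storage account.
       conf.setBoolean(
           "fs.azure.account.hns.enabled.account1.dfs.core.windows.net", true);
   
       // Account-agnostic: fallback for any account without a specific key.
       conf.setBoolean("fs.azure.account.hns.enabled", false);
   
       // An account with neither key set falls through to the getAcl probe
       // mentioned above.
       System.out.println(
           conf.get("fs.azure.account.hns.enabled.account3.dfs.core.windows.net"));
     }
   }
   ```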





> ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific 
> Config
> ---
>
> Key: HADOOP-19284
> URL: https://issues.apache.org/jira/browse/HADOOP-19284
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0, 3.5.0
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> There are a few reported requirements where users working with multiple file 
> systems need to specify this config either only for some accounts or set it 
> differently for different account.
> ABFS driver today does not allow this to be set as account specific config.
> This Jira is to allow that as a new support.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19284) ABFS: Allow "fs.azure.account.hns.enabled" to be set as Account Specific Config

2024-09-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17883796#comment-17883796
 ] 

ASF GitHub Bot commented on HADOOP-19284:
-

hadoop-yetus commented on PR #7062:
URL: https://github.com/apache/hadoop/pull/7062#issuecomment-2367740779

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  javac  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 12s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_422-8u422-b05-1~20.04-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 12s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  81m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.47 ServerAPI=1.47 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/7062 |
   | JIRA Issue | HADOOP-19284 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 098d44d50cf2 5.15.0-116-generic #126-Ubuntu SMP Mon Jul 1 
10:14:24 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ba947aaab557ae8c00696a0e4c2951789970ac19 |
   | Default Java | Private Build-1.8.0_422-8u422-b05-1~20.04-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.24+8-post-Ubuntu-1ubuntu320.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_422-8u422-b05-1~20.04-b05 
|
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/1/testReport/ |
   | Max. process+thread count | 644 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-7062/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> ABFS: Allow "fs.azure.account.hns.enabled" 
