[jira] [Updated] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager

2024-05-15 Thread Sammi Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sammi Chen updated HADOOP-18851:

Summary: Performance improvement for DelegationTokenSecretManager  (was: 
Performance improvement for DelegationTokenSecretManager.)

> Performance improvement for DelegationTokenSecretManager
> 
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing a
> performance issue, and the following are our findings:
>  # Around 98% (196 out of 200) of the KMS container threads were in BLOCKED state at
> the following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then the process crashed.
>  
> {code:java}
> http-nio-9292-exec-200
> PRIORITY : 5
> THREAD ID : 0X7F075C157800
> NATIVE ID : 0X2C87F
> NATIVE ID (DECIMAL) : 182399
> STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> 199 of the 200 threads were blocked at the above point.
> The lock they are waiting for is held by a thread that was trying to
> create a password and publish it to ZooKeeper.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrSharedCount(ZKDelegationTokenSecretManager.java:586)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrementDelegationTokenSeqNum(ZKDelegationTokenSecretManager.java:601)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:402)
> - locked <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:48)
> at org.apache.hadoop.security.token.Token.<init>(Token.java:67)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.createToken(DelegationTokenManager.java:183)
>  {code}
> We can say that this thread is slow and has blocked all the others. But the
> following is my observation:
>  
>  # verifyToken() and createPassword() have been synchronized because one is
> reading the tokenMap
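> 
> To make the contention concrete, here is a minimal sketch of the pattern the
> stack traces show (illustrative names, not the Hadoop source): both methods
> synchronize on the secret manager instance, so a slow ZooKeeper write inside
> createPassword() stalls every verifyToken() caller.
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> 
> // Illustrative sketch of the reported contention, not the Hadoop code.
> class CoarseLockSecretManager {
>   private final Map<String, byte[]> tokenMap = new HashMap<>();
> 
>   // Holds the instance monitor across a remote ZooKeeper write.
>   synchronized byte[] createPassword(String tokenId) {
>     byte[] password = {1, 2, 3};
>     incrementSeqNumOnZk();          // slow network round trips under the lock
>     tokenMap.put(tokenId, password);
>     return password;
>   }
> 
>   // A pure read, yet it queues behind createPassword() on the same monitor.
>   synchronized boolean verifyToken(String tokenId) {
>     return tokenMap.containsKey(tokenId);
>   }
> 
>   private void incrementSeqNumOnZk() {
>     try {
>       Thread.sleep(1000);           // stands in for the Curator/ZooKeeper call
>     } catch (InterruptedException e) {
>       Thread.currentThread().interrupt();
>     }
>   }
> }
> {code}
> One direction for a fix is to keep the token cache in a concurrent map, or to
> issue the ZooKeeper update outside the monitor, so that verifications can
> proceed while a token is being created.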

[jira] [Commented] (HADOOP-19167) Change of Codec configuration does not work

2024-05-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846780#comment-17846780
 ] 

ASF GitHub Bot commented on HADOOP-19167:
-

skyskyhu commented on PR #6807:
URL: https://github.com/apache/hadoop/pull/6807#issuecomment-2113786075

   @steveloughran, @jojochuang, @Hexiaoqiao, could you help review and merge the 
commit when you have free time? Thanks a lot.




> Change of Codec configuration does not work
> ---
>
> Key: HADOOP-19167
> URL: https://issues.apache.org/jira/browse/HADOOP-19167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: compress
>Reporter: Zhikai Hu
>Priority: Minor
>  Labels: pull-request-available
>
> In one of my projects, I need to dynamically adjust the compression level for
> different files.
> However, I found that in most cases the new compression level does not take
> effect as expected; the old compression level continues to be used.
> Here is the relevant code snippet:
> {code:java}
> ZStandardCodec zStandardCodec = new ZStandardCodec();
> zStandardCodec.setConf(conf);
> conf.set("io.compression.codec.zstd.level", "5"); // level may change dynamically
> conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
> writer = SequenceFile.createWriter(conf,
>     SequenceFile.Writer.file(sequenceFilePath),
>     SequenceFile.Writer.keyClass(LongWritable.class),
>     SequenceFile.Writer.valueClass(BytesWritable.class),
>     SequenceFile.Writer.compression(CompressionType.BLOCK));
> {code}
> The reason is that the SequenceFile.Writer.init() method calls
> CodecPool.getCompressor(codec, null) to get a compressor.
> If the compressor is a reused instance, the conf is not applied because it is
> passed as null:
> {code:java}
> public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
>   Compressor compressor = borrow(compressorPool, codec.getCompressorType());
>   if (compressor == null) {
>     compressor = codec.createCompressor();
>     LOG.info("Got brand-new compressor ["+codec.getDefaultExtension()+"]");
>   } else {
>     compressor.reinit(conf);   // conf is null here
>     ...
> {code}
>  
> Please also refer to my unit test to reproduce the bug. 
> To address this bug, I modified the code to ensure that the configuration is 
> read back from the codec when a compressor is reused.
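> 
> For illustration, a sketch of the kind of change described, applied to the
> getCompressor() method quoted above: fall back to the codec's own configuration
> when the caller passes null. The Configurable fallback here is an assumption
> for illustration, not necessarily the merged patch.
> {code:java}
> public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
>   Compressor compressor = borrow(compressorPool, codec.getCompressorType());
>   if (compressor == null) {
>     compressor = codec.createCompressor();
>   } else {
>     if (conf == null && codec instanceof Configurable) {
>       // Read the configuration back from the codec so a reused compressor
>       // picks up updated settings such as io.compression.codec.zstd.level.
>       conf = ((Configurable) codec).getConf();
>     }
>     compressor.reinit(conf);
>   }
>   return compressor;
> }
> {code}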






[jira] [Commented] (HADOOP-19161) S3A: option "fs.s3a.performance.flags" to take list of performance flags

2024-05-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846775#comment-17846775
 ] 

ASF GitHub Bot commented on HADOOP-19161:
-

hadoop-yetus commented on PR #6789:
URL: https://github.com/apache/hadoop/pull/6789#issuecomment-2113717207

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 01s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 01s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 01s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 01s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m 00s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 53s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  91m 05s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  40m 21s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   6m 18s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   4m 30s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6789/2/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  javadoc  |   9m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 163m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   2m 46s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   2m 32s | 
[/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6789/2/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   2m 31s | 
[/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6789/2/artifact/out/patch-compile-root.txt)
 |  root in the patch failed.  |
   | -1 :x: |  javac  |   2m 31s | 
[/patch-compile-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6789/2/artifact/out/patch-compile-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   5m 21s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   2m 13s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6789/2/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  javadoc  |   2m 21s | 
[/patch-javadoc-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6789/2/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  shadedclient  |  34m 05s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   2m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 355m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6789 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | MINGW64_NT-10.0-17763 2e28357cc08c 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 28cd50ac8773061969049071eab0c8adca5faa83 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6789/2/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6789/2/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> S3A: option "fs.s3a.performance.flags" to take list of perf

[jira] [Commented] (HADOOP-19161) S3A: option "fs.s3a.performance.flags" to take list of performance flags

2024-05-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846756#comment-17846756
 ] 

ASF GitHub Bot commented on HADOOP-19161:
-

hadoop-yetus commented on PR #6789:
URL: https://github.com/apache/hadoop/pull/6789#issuecomment-2113444831

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m 57s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  15m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 22s | 
[/patch-mvninstall-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6789/3/artifact/out/patch-mvninstall-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  compile  |   0m 36s | 
[/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6789/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  javac  |   0m 36s | 
[/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6789/3/artifact/out/patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | -1 :x: |  compile  |   0m 35s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6789/3/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | -1 :x: |  javac  |   0m 35s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6789/3/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 45s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6789/3/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 7 new + 5 unchanged - 0 fixed = 12 total (was 5)  
|
   | -1 :x: |  mvnsite  |   0m 26s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6789/3/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6789/3/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu

[jira] [Commented] (HADOOP-18508) support multiple s3a integration test runs on same bucket in parallel

2024-05-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846743#comment-17846743
 ] 

ASF GitHub Bot commented on HADOOP-18508:
-

hadoop-yetus commented on PR #5081:
URL: https://github.com/apache/hadoop/pull/5081#issuecomment-2113384996

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 17 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 17s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  17m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   4m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   3m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  18m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  17m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   4m 49s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5081/10/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 4 new + 159 unchanged - 2 fixed = 163 total (was 
161)  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   4m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 30s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 58s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 268m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5081/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5081 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint 
markdownlint |
   | uname | Linux 8f94e011b5ba 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 
09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d2247a411373988606dd7780df6b40ff8f13181b |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk

[jira] [Created] (HADOOP-19179) ABFS: Support FNS Accounts over BlobEndpoint

2024-05-15 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-19179:
--

 Summary: ABFS: Support FNS Accounts over BlobEndpoint
 Key: HADOOP-19179
 URL: https://issues.apache.org/jira/browse/HADOOP-19179
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.4.0
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan
 Fix For: 3.5.0, 3.4.1


As a prerequisite to deprecating the WASB driver, the ABFS driver will need to match
the FNS account support provided by the WASB driver. This will give customers still
using the legacy driver an official means of migrating to the ABFS driver.

 

Parent Jira for WASB deprecation: [HADOOP-19178] WASB Driver Deprecation and 
eventual removal - ASF JIRA (apache.org)






[jira] [Updated] (HADOOP-19178) WASB Driver Deprecation and eventual removal

2024-05-15 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan updated HADOOP-19178:
---
Description: 
*WASB Driver*

The WASB driver was developed to support FNS (Flat Namespace) Azure Storage
accounts. FNS accounts do not honor file-folder semantics, so HDFS folder
operations are mimicked on the client side by the WASB driver, and certain folder
operations like rename and delete can generate a lot of IOPS through client-side
enumeration and orchestration of the rename/delete operation blob by blob. It is
not ideal for other APIs either, as the initial check of whether a path is a file
or a folder needs multiple metadata calls. All of this degrades performance.

To provide better service to analytics customers, Microsoft released ADLS Gen2,
an HNS (Hierarchical Namespace), i.e. file-folder aware, store. The ABFS driver
was designed to overcome the inherent deficiencies of WASB, and customers were
advised to migrate to the ABFS driver.

*Customers who still use the legacy WASB driver and the challenges they face* 

Some of our customers have not migrated to the ABFS driver yet and continue to 
use the legacy WASB driver with FNS accounts.  

These customers face the following challenges: 
 * They cannot leverage the optimizations and benefits of the ABFS driver.
 * They need to deal with compatibility issues should files and folders be
modified with the legacy WASB driver and the ABFS driver concurrently during a
phased transition.
 * There are differences in supported features between FNS and HNS over the ABFS
driver.
 * In certain cases, they must perform a significant amount of re-work on their
workloads to migrate to the ABFS driver, which is available only on HNS-enabled
accounts in a fully tested and supported scenario.

*Deprecation plans for WASB*

We are introducing a new feature that will enable the ABFS driver to support 
FNS accounts (over BlobEndpoint) using the ABFS scheme. This feature will 
enable customers to use the ABFS driver to interact with data stored in GPv2 
(General Purpose v2) storage accounts. 

With this feature, the customers who still use the legacy WASB driver will be 
able to migrate to the ABFS driver without much re-work on their workloads. 
They will however need to change the URIs from the WASB scheme to the ABFS 
scheme. 

Once the ABFS driver has built the FNS support needed to migrate WASB customers,
the WASB driver will be declared deprecated in the OSS documentation and marked
for removal in the next major release. This will remove any ambiguity for newly
onboarding customers, as there will be only one Microsoft driver for Azure
Storage, and migrating customers will get SLA-bound support for the driver and
service, which was not guaranteed over WASB.

 We anticipate that this feature will serve as a stepping stone for customers 
to move to HNS enabled accounts with the ABFS driver, which is our recommended 
stack for big data analytics on ADLS Gen2. 

*Any Impact for* *existing customers who are using ADLS Gen2 (HNS enabled 
account) with ABFS driver* *?*

This feature does not impact the existing customers who are using ADLS Gen2 
(HNS enabled account) with ABFS driver.

They do not need to make any changes to their workloads or configurations. They 
will still enjoy the benefits of HNS, such as atomic operations, fine-grained 
access control, scalability, and performance. 

*Official recommendation*

Microsoft continues to recommend that all big data and analytics customers use
Azure Data Lake Gen2 (ADLS Gen2) with the ABFS driver, and will continue to
optimize this scenario in the future. We believe that this new option will help
all those customers transition to a supported scenario immediately, while they
plan to ultimately move to ADLS Gen2 (HNS enabled accounts).

 *New authentication options that a customer migrating from WASB to the ABFS
driver will get*

The following auth types that WASB provides will continue to work with the new
FNS-over-ABFS driver, via configuration that accepts these SAS types (similar to
WASB):
 * SharedKey
 * Account SAS
 * Service/Container SAS

The following authentication types, not supported by the WASB driver but
supported by the ABFS driver, will continue to be available for the new
FNS-over-ABFS driver:
 * OAuth 2.0 Client Credentials
 * OAuth 2.0: Refresh Token
 * Azure Managed Identity
 * Custom OAuth 2.0 Token Provider

The ABFS driver's SAS token provider plugin, present today for User Delegation
SAS and Directory SAS, will continue to work only for HNS accounts.

  was:
*WASB Driver*

The WASB driver was developed to support FNS (Flat Namespace) Azure Storage
accounts. FNS accounts do not honor file-folder semantics, so HDFS folder
operations are mimicked on the client side by the WASB driver, and certain folder
operations like rename and delete can generate a lot of IOPS through client-side
enumeration and orchestration of the rename/delete operation blob by blob

[jira] [Created] (HADOOP-19178) WASB Driver Deprecation and eventual removal

2024-05-15 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-19178:
--

 Summary: WASB Driver Deprecation and eventual removal
 Key: HADOOP-19178
 URL: https://issues.apache.org/jira/browse/HADOOP-19178
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.4.0
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan
 Fix For: 3.4.1


*WASB Driver*

The WASB driver was developed to support FNS (Flat Namespace) Azure Storage
accounts. FNS accounts do not honor file-folder semantics, so HDFS folder
operations are mimicked on the client side by the WASB driver, and certain folder
operations like rename and delete can generate a lot of IOPS through client-side
enumeration and orchestration of the rename/delete operation blob by blob. It is
not ideal for other APIs either, as the initial check of whether a path is a file
or a folder needs multiple metadata calls. All of this degrades performance.

 

To provide better service to analytics customers, Microsoft released ADLS Gen2,
an HNS (Hierarchical Namespace), i.e. file-folder aware, store. The ABFS driver
was designed to overcome the inherent deficiencies of WASB, and customers were
advised to migrate to the ABFS driver.

 

*Customers who still use the legacy WASB driver and the challenges they face* 

Some of our customers have not migrated to the ABFS driver yet and continue to 
use the legacy WASB driver with FNS accounts.  

These customers face the following challenges: 
 * They cannot leverage the optimizations and benefits of the ABFS driver.
 * They need to deal with compatibility issues should files and folders be
modified with the legacy WASB driver and the ABFS driver concurrently during a
phased transition.
 * There are differences in supported features between FNS and HNS over the ABFS
driver.
 * In certain cases, they must perform a significant amount of re-work on their
workloads to migrate to the ABFS driver, which is available only on HNS-enabled
accounts in a fully tested and supported scenario.


*Deprecation plans for WASB* 

We are introducing a new feature that will enable the ABFS driver to support 
FNS accounts (over BlobEndpoint) using the ABFS scheme. This feature will 
enable customers to use the ABFS driver to interact with data stored in GPv2 
(General Purpose v2) storage accounts. 

With this feature, the customers who still use the legacy WASB driver will be 
able to migrate to the ABFS driver without much re-work on their workloads. 
They will however need to change the URIs from the WASB scheme to the ABFS 
scheme. 

Once the ABFS driver has built the FNS support needed to migrate WASB customers,
the WASB driver will be declared deprecated in the OSS documentation and marked
for removal in the next major release. This will remove any ambiguity for newly
onboarding customers, as there will be only one Microsoft driver for Azure
Storage, and migrating customers will get SLA-bound support for the driver and
service, which was not guaranteed over WASB.

 We anticipate that this feature will serve as a stepping stone for customers 
to move to HNS enabled accounts with the ABFS driver, which is our recommended 
stack for big data analytics on ADLS Gen2. 

*Any Impact for* *existing customers who are using ADLS Gen2 (HNS enabled 
account) with ABFS driver* *?*

This feature does not impact the existing customers who are using ADLS Gen2 
(HNS enabled account) with ABFS driver. 

They do not need to make any changes to their workloads or configurations. They 
will still enjoy the benefits of HNS, such as atomic operations, fine-grained 
access control, scalability, and performance. 

*Official recommendation*

Microsoft continues to recommend that all big data and analytics customers use
Azure Data Lake Gen2 (ADLS Gen2) with the ABFS driver, and will continue to
optimize this scenario in the future. We believe that this new option will help
all those customers transition to a supported scenario immediately, while they
plan to ultimately move to ADLS Gen2 (HNS enabled accounts).

 *New authentication options that a customer migrating from WASB to the ABFS
driver will get*

The following auth types that WASB provides will continue to work with the new
FNS-over-ABFS driver, via configuration that accepts these SAS types (similar to
WASB); a configuration sketch follows at the end of this section:
 * SharedKey
 * Account SAS
 * Service/Container SAS

The following authentication types, not supported by the WASB driver but
supported by the ABFS driver, will continue to be available for the new
FNS-over-ABFS driver:
 * OAuth 2.0 Client Credentials
 * OAuth 2.0: Refresh Token
 * Azure Managed Identity
 * Custom OAuth 2.0 Token Provider

 

The ABFS driver's SAS token provider plugin, present today for User Delegation
SAS and Directory SAS, will continue to work only for HNS accounts.
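
To make the migration concrete, here is a hedged sketch of a SharedKey setup
moved from the wasb scheme to abfs. The configuration keys follow hadoop-azure
conventions; the account, container, key, and endpoint below are placeholders,
and the final endpoint shape for FNS-over-Blob is not pinned down by this Jira.

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WasbToAbfsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // SharedKey auth, configured much as WASB users do today.
    conf.set("fs.azure.account.auth.type.myaccount.dfs.core.windows.net",
        "SharedKey");
    conf.set("fs.azure.account.key.myaccount.dfs.core.windows.net",
        "<storage-account-key>");
    // The workload change is essentially the URI scheme:
    //   before: wasb://container@myaccount.blob.core.windows.net/path
    Path path = new Path("abfs://container@myaccount.dfs.core.windows.net/path");
    FileSystem fs = path.getFileSystem(conf);
    System.out.println(fs.exists(path));
  }
}
{code}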




[jira] [Resolved] (HADOOP-19013) fs.getXattrs(path) for S3FS doesn't have x-amz-server-side-encryption-aws-kms-key-id header.

2024-05-15 Thread Mukund Thakur (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukund Thakur resolved HADOOP-19013.

Resolution: Fixed

> fs.getXattrs(path) for S3FS doesn't have 
> x-amz-server-side-encryption-aws-kms-key-id header.
> 
>
> Key: HADOOP-19013
> URL: https://issues.apache.org/jira/browse/HADOOP-19013
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> Once a file has been uploaded encrypted with SSE-KMS using a key id, reading
> the attributes of the same file later does not include the key id information
> as an attribute. Should we add it?
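> 
> Once the attribute is exposed, a hedged sketch of checking for it (S3A
> surfaces object headers as "header."-prefixed xattrs; the exact attribute
> name and the bucket path are assumptions):
> {code:java}
> import java.nio.charset.StandardCharsets;
> import java.util.Map;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> 
> public class KmsKeyIdXattrSketch {
>   public static void main(String[] args) throws Exception {
>     Path path = new Path("s3a://bucket/encrypted-file");   // placeholder
>     FileSystem fs = path.getFileSystem(new Configuration());
>     Map<String, byte[]> attrs = fs.getXAttrs(path);
>     // Assumed attribute name, following S3A's header-to-xattr mapping.
>     byte[] keyId = attrs.get("header.x-amz-server-side-encryption-aws-kms-key-id");
>     if (keyId != null) {
>       System.out.println("KMS key id: " + new String(keyId, StandardCharsets.UTF_8));
>     }
>   }
> }
> {code}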






[jira] [Updated] (HADOOP-19013) fs.getXattrs(path) for S3FS doesn't have x-amz-server-side-encryption-aws-kms-key-id header.

2024-05-15 Thread Mukund Thakur (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukund Thakur updated HADOOP-19013:
---
Fix Version/s: 3.4.1

> fs.getXattrs(path) for S3FS doesn't have 
> x-amz-server-side-encryption-aws-kms-key-id header.
> 
>
> Key: HADOOP-19013
> URL: https://issues.apache.org/jira/browse/HADOOP-19013
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> Once a file has been uploaded encrypted with SSE-KMS using a key id, reading
> the attributes of the same file later does not include the key id information
> as an attribute. Should we add it?






[jira] [Created] (HADOOP-19177) TestS3ACachingBlockManager fails intermittently in Yetus

2024-05-15 Thread Mukund Thakur (Jira)
Mukund Thakur created HADOOP-19177:
--

 Summary: TestS3ACachingBlockManager fails intermittently in Yetus
 Key: HADOOP-19177
 URL: https://issues.apache.org/jira/browse/HADOOP-19177
 Project: Hadoop Common
  Issue Type: Test
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Mukund Thakur


{code:java}
[ERROR] 
org.apache.hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager.testCachingOfGet 
-- Time elapsed: 60.45 s <<< ERROR!
java.lang.IllegalStateException: waitForCaching: expected: 1, actual: 0, read 
errors: 0, caching errors: 1
at 
org.apache.hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager.waitForCaching(TestS3ACachingBlockManager.java:465)
at 
org.apache.hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager.testCachingOfGetHelper(TestS3ACachingBlockManager.java:435)
at 
org.apache.hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager.testCachingOfGet(TestS3ACachingBlockManager.java:398)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:750)

[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR] 
org.apache.hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager.testCachingFailureOfGet
[ERROR]   Run 1: 
TestS3ACachingBlockManager.testCachingFailureOfGet:405->testCachingOfGetHelper:435->waitForCaching:465
 IllegalState waitForCaching: expected: 1, actual: 0, read errors: 0, caching 
errors: 1
[ERROR]   Run 2: 
TestS3ACachingBlockManager.testCachingFailureOfGet:405->testCachingOfGetHelper:435->waitForCaching:465
 IllegalState waitForCaching: expected: 1, actual: 0, read errors: 0, caching 
errors: 1
[ERROR]   Run 3: 
TestS3ACachingBlockManager.testCachingFailureOfGet:405->testCachingOfGetHelper:435->waitForCaching:465
 IllegalState waitForCaching: expected: 1, actual: 0, read errors: 0, caching 
errors: 1 {code}
Discovered in 
[https://github.com/apache/hadoop/pull/6646#issuecomment-2111558054] 






[jira] [Commented] (HADOOP-19013) fs.getXattrs(path) for S3FS doesn't have x-amz-server-side-encryption-aws-kms-key-id header.

2024-05-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846714#comment-17846714
 ] 

ASF GitHub Bot commented on HADOOP-19013:
-

mukund-thakur merged PR #6646:
URL: https://github.com/apache/hadoop/pull/6646




> fs.getXattrs(path) for S3FS doesn't have 
> x-amz-server-side-encryption-aws-kms-key-id header.
> 
>
> Key: HADOOP-19013
> URL: https://issues.apache.org/jira/browse/HADOOP-19013
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>
> Once a file has been uploaded encrypted with SSE-KMS using a key id, reading
> the attributes of the same file later does not include the key id information
> as an attribute. Should we add it?






[jira] [Commented] (HADOOP-19172) Upgrade aws-java-sdk to 1.12.720

2024-05-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846707#comment-17846707
 ] 

ASF GitHub Bot commented on HADOOP-19172:
-

steveloughran opened a new pull request, #6829:
URL: https://github.com/apache/hadoop/pull/6829

   
   This is #6823 with an update in LICENSE-binary and full CLI testing
   as the artifact is bundled
   
   Contributed by Steve Loughran
   
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Upgrade aws-java-sdk to 1.12.720
> 
>
> Key: HADOOP-19172
> URL: https://issues.apache.org/jira/browse/HADOOP-19172
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.4.0, 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> Update to the latest AWS SDK, to stop anyone worrying about the ion library
> CVE https://nvd.nist.gov/vuln/detail/CVE-2024-21634
> This isn't exposed in the s3a client, but may be used downstream.
> On v2 SDK releases, the v1 SDK is only used during builds; on 3.3.x it is shipped.






[jira] [Commented] (HADOOP-19172) Upgrade aws-java-sdk to 1.12.720

2024-05-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846637#comment-17846637
 ] 

ASF GitHub Bot commented on HADOOP-19172:
-

steveloughran merged PR #6823:
URL: https://github.com/apache/hadoop/pull/6823




> Upgrade aws-java-sdk to 1.12.720
> 
>
> Key: HADOOP-19172
> URL: https://issues.apache.org/jira/browse/HADOOP-19172
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.4.0, 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> Update to the latest AWS SDK, to stop anyone worrying about the ion library
> CVE https://nvd.nist.gov/vuln/detail/CVE-2024-21634
> This isn't exposed in the s3a client, but may be used downstream.
> On v2 SDK releases, the v1 SDK is only used during builds; on 3.3.x it is shipped.






[jira] [Resolved] (HADOOP-19073) WASB: Fix connection leak in FolderRenamePending

2024-05-15 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-19073.
-
Resolution: Fixed

> WASB: Fix connection leak in FolderRenamePending
> 
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Assignee: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Fix connection leak in FolderRenamePending when getting bytes.






[jira] [Commented] (HADOOP-19073) WASB: Fix connection leak in FolderRenamePending

2024-05-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846636#comment-17846636
 ] 

ASF GitHub Bot commented on HADOOP-19073:
-

steveloughran commented on PR #6534:
URL: https://github.com/apache/hadoop/pull/6534#issuecomment-2112568052

   thanks, merged to trunk and updated the JIRA.
   
   If you do a cherrypick of this commit and submit as a PR against branch-3.4 
I'll merge it there too, once yetus is happy




> WASB: Fix connection leak in FolderRenamePending
> 
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Assignee: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Fix connection leak in FolderRenamePending when getting bytes.






[jira] [Assigned] (HADOOP-19073) WASB: Fix connection leak in FolderRenamePending

2024-05-15 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-19073:
---

Assignee: xy

> WASB: Fix connection leak in FolderRenamePending
> 
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Assignee: xy
>Priority: Major
>  Labels: pull-request-available
>
> Fix connection leak in FolderRenamePending when getting bytes.






[jira] [Updated] (HADOOP-19073) WASB: Fix connection leak in FolderRenamePending

2024-05-15 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19073:

Fix Version/s: 3.5.0

> WASB: Fix connection leak in FolderRenamePending
> 
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Assignee: xy
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Fix connection leak in FolderRenamePending when getting bytes.






[jira] [Commented] (HADOOP-19073) WASB: Fix connection leak in FolderRenamePending

2024-05-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846635#comment-17846635
 ] 

ASF GitHub Bot commented on HADOOP-19073:
-

steveloughran merged PR #6534:
URL: https://github.com/apache/hadoop/pull/6534




> WASB: Fix connection leak in FolderRenamePending
> 
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> Fix connection leak in FolderRenamePending when getting bytes.






[jira] [Updated] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-15 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19170:

Fix Version/s: 3.4.1

> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: OS:  macOS Catalina 10.15.7
> compiler: clang 12.0.0
> cmake: 3.24.0
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0, 3.4.1
>
>
> When I build the hadoop-common native code on macOS, I hit this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that macOS does not provide glibc, and the C preprocessor must
> expand every operand of an #if expression, so the undefined function-like
> macro is an error even though defined(__GLIBC_PREREQ) is false.






[jira] [Commented] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-15 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846629#comment-17846629
 ] 

ASF GitHub Bot commented on HADOOP-19170:
-

steveloughran merged PR #6827:
URL: https://github.com/apache/hadoop/pull/6827




> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: OS:  macOS Catalina 10.15.7
> compiler: clang 12.0.0
> cmake: 3.24.0
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> When I build the hadoop-common native code on macOS, I hit this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that macOS does not provide glibc, and the C preprocessor must
> expand every operand of an #if expression, so the undefined function-like
> macro is an error even though defined(__GLIBC_PREREQ) is false.






[jira] [Created] (HADOOP-19176) S3A Xattr headers need hdfs-compatible prefix

2024-05-15 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-19176:
---

 Summary: S3A Xattr headers need hdfs-compatible prefix
 Key: HADOOP-19176
 URL: https://issues.apache.org/jira/browse/HADOOP-19176
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.6, 3.4.0
Reporter: Steve Loughran


S3A xattr names need a prefix compatible with HDFS, or existing code which tries
to copy attributes between stores can break.

We need a prefix of {user/trusted/security/system/raw}.

Now, the problem: currently xattrs are used by the magic committer to propagate
file size progress; renaming the prefix will break existing code. But as it's
read-only, we could modify Spark to look for both old and new values.

{code}

org.apache.hadoop.HadoopIllegalArgumentException: An XAttr name must be 
prefixed with user/trusted/security/system/raw, followed by a '.'
at org.apache.hadoop.hdfs.XAttrHelper.buildXAttr(XAttrHelper.java:77) 
at org.apache.hadoop.hdfs.DFSClient.setXAttr(DFSClient.java:2835) 
at 
org.apache.hadoop.hdfs.DistributedFileSystem$59.doCall(DistributedFileSystem.java:3106)
 
at 
org.apache.hadoop.hdfs.DistributedFileSystem$59.doCall(DistributedFileSystem.java:3102)
 
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.setXAttr(DistributedFileSystem.java:3115)
 
at org.apache.hadoop.fs.FileSystem.setXAttr(FileSystem.java:3097)

{code}
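
For context, a hedged sketch of the kind of cross-store copy that trips over
this (paths are placeholders; the "header."-prefixed name reflects how S3A
surfaces object headers as xattrs):

{code:java}
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class XattrCopySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path src = new Path("s3a://bucket/data/part-0000");      // placeholder
    Path dst = new Path("hdfs://namenode/data/part-0000");   // placeholder
    FileSystem srcFs = src.getFileSystem(conf);
    FileSystem dstFs = dst.getFileSystem(conf);
    for (Map.Entry<String, byte[]> attr : srcFs.getXAttrs(src).entrySet()) {
      // S3A names such as "header.Content-Length" lack the required
      // user/trusted/security/system/raw prefix, so the HDFS side throws
      // the HadoopIllegalArgumentException shown above.
      dstFs.setXAttr(dst, attr.getKey(), attr.getValue());
    }
  }
}
{code}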







[jira] [Comment Edited] (HADOOP-19174) Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath

2024-05-15 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846518#comment-17846518
 ] 

Bilwa S T edited comment on HADOOP-19174 at 5/15/24 7:35 AM:
-

* which hadoop version

We are using hadoop 3.3.6 version with tez 0.10.3 version. Tez job fails with 
the exception i mentioned above in case1. 
 * what happens if you remove the hadoop protobuf-2.5 jar.

Tez job runs successfully if we remove the protobuf-2.5 jar from hadoop 
classpath.

 

[~ayushtkn]  yes hive is our internal version which is 3.x. I will look into it 
again. but did you not face the tez issue i mentioned above with tez 0.10.3 and 
hadoop 3.3.6 ?

 


was (Author: bilwast):
* which hadoop version

We are using hadoop 3.3.6 version with tez 0.10.3 version. Tez job fails with 
the exception i mentioned above in case 1. 
 * what happens if you remove the hadoop protobuf-2.5 jar.

Tez job runs successfully if we remove the protobuf-2.5 jar from hadoop 
classpath.

 

[~ayushtkn]  yes hive is our internal version which is 3.x. I will look into it 
again. but did you not face the tez issue i mentioned above with tez 0.10.3 and 
hadoop 3.3.6 ?

 

> Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath
> --
>
> Key: HADOOP-19174
> URL: https://issues.apache.org/jira/browse/HADOOP-19174
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> There are two issues here:
> *1. We are running tez 0.10.3 which uses hadoop 3.3.6 version. Tez has 
> protobuf version 3.21.1*
> Below is the exception we get. This is due to protobuf-2.5.0 in our hadoop 
> classpath
> {code:java}
> java.lang.IllegalAccessError: class 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto tried to access 
> private field com.google.protobuf.AbstractMessage.memoizedSize 
> (org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto and 
> com.google.protobuf.AbstractMessage are in unnamed module of loader 'app')
> at 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto.getSerializedSize(DAGProtos.java:21636)
> at 
> com.google.protobuf.AbstractMessageLite.writeTo(AbstractMessageLite.java:75)
> at org.apache.tez.common.TezUtils.writeConfInPB(TezUtils.java:170)
> at org.apache.tez.common.TezUtils.createByteStringFromConf(TezUtils.java:83)
> at org.apache.tez.common.TezUtils.createUserPayloadFromConf(TezUtils.java:101)
> at org.apache.tez.dag.app.DAGAppMaster.serviceInit(DAGAppMaster.java:436)
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2600)
> at 
> java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
> at java.base/javax.security.auth.Subject.doAs(Subject.java:439)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
> at 
> org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2597)
> at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2384)
> 2024-04-18 16:27:54,741 [INFO] [shutdown-hook-0] |app.DAGAppMaster|: 
> DAGAppMasterShutdownHook invoked
> 2024-04-18 16:27:54,743 [INFO] [shutdown-hook-0] |service.AbstractService|: 
> Service org.apache.tez.dag.app.DAGAppMaster failed in state STOPPED
> java.lang.NullPointerException: Cannot invoke 
> "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
> "this.taskSchedulerManager" is null
> at org.apache.tez.dag.app.DAGAppMaster.initiateStop(DAGAppMaster.java:2111)
> at org.apache.tez.dag.app.DAGAppMaster.serviceStop(DAGAppMaster.java:2126)
> at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:220)
> at 
> org.apache.tez.dag.app.DAGAppMaster$DAGAppMasterShutdownHook.run(DAGAppMaster.java:2432)
> at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
> at java.base/java.lang.Thread.run(Thread.java:840)
> 2024-04-18 16:27:54,744 [WARN] [Thread-2] |util.ShutdownHookManager|: 
> ShutdownHook 'DAGAppMasterShutdownHook' failed, 
> java.util.concurrent.ExecutionException: java.lang.NullPointerException: 
> Cannot invoke "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" 
> because "this.taskSchedulerMana

[jira] [Commented] (HADOOP-19163) Upgrade protobuf version to 3.25.3

2024-05-15 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846519#comment-17846519
 ] 

Bilwa S T commented on HADOOP-19163:


Updated the PR with protobuf version 3.25.3. 

> Upgrade protobuf version to 3.25.3
> --
>
> Key: HADOOP-19163
> URL: https://issues.apache.org/jira/browse/HADOOP-19163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-thirdparty
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19174) Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath

2024-05-15 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846518#comment-17846518
 ] 

Bilwa S T commented on HADOOP-19174:


* which hadoop version

We are using Hadoop 3.3.6 with Tez 0.10.3. The Tez job fails with the 
exception I mentioned above in case 1.
 * what happens if you remove the hadoop protobuf-2.5 jar.

The Tez job runs successfully if we remove the protobuf-2.5 jar from the 
Hadoop classpath.

 

[~ayushtkn] Yes, Hive is our internal version, which is 3.x. I will look into 
it again. But did you not face the Tez issue I mentioned above with Tez 0.10.3 
and Hadoop 3.3.6?
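
As a side note for anyone else debugging this: a quick way to confirm which 
protobuf jar actually wins on the classpath is a one-off probe like the sketch 
below (illustrative only, not from any patch; the class name is made up):

{code:java}
import java.net.URL;
import java.security.CodeSource;

// Illustrative probe: print which jar supplies com.google.protobuf.AbstractMessage
// at runtime. If this prints the protobuf-java-2.5.0 jar while the Tez classes
// were generated against 3.21.1, the IllegalAccessError above is the expected
// outcome.
public class ProtobufProbe {
  public static void main(String[] args) throws ClassNotFoundException {
    Class<?> c = Class.forName("com.google.protobuf.AbstractMessage");
    CodeSource src = c.getProtectionDomain().getCodeSource();
    URL where = (src == null) ? null : src.getLocation();
    System.out.println("AbstractMessage loaded from: " + where);
  }
}
{code}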

 

> Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath
> --
>
> Key: HADOOP-19174
> URL: https://issues.apache.org/jira/browse/HADOOP-19174
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> There are two issues here:
> *1. We are running tez 0.10.3, which uses hadoop 3.3.6. Tez has 
> protobuf version 3.21.1.*
> Below is the exception we get. It is caused by protobuf-2.5.0 on our hadoop 
> classpath:
> {code:java}
> java.lang.IllegalAccessError: class 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto tried to access 
> private field com.google.protobuf.AbstractMessage.memoizedSize 
> (org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto and 
> com.google.protobuf.AbstractMessage are in unnamed module of loader 'app')
> at 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto.getSerializedSize(DAGProtos.java:21636)
> at 
> com.google.protobuf.AbstractMessageLite.writeTo(AbstractMessageLite.java:75)
> at org.apache.tez.common.TezUtils.writeConfInPB(TezUtils.java:170)
> at org.apache.tez.common.TezUtils.createByteStringFromConf(TezUtils.java:83)
> at org.apache.tez.common.TezUtils.createUserPayloadFromConf(TezUtils.java:101)
> at org.apache.tez.dag.app.DAGAppMaster.serviceInit(DAGAppMaster.java:436)
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2600)
> at 
> java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
> at java.base/javax.security.auth.Subject.doAs(Subject.java:439)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
> at 
> org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2597)
> at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2384)
> 2024-04-18 16:27:54,741 [INFO] [shutdown-hook-0] |app.DAGAppMaster|: 
> DAGAppMasterShutdownHook invoked
> 2024-04-18 16:27:54,743 [INFO] [shutdown-hook-0] |service.AbstractService|: 
> Service org.apache.tez.dag.app.DAGAppMaster failed in state STOPPED
> java.lang.NullPointerException: Cannot invoke 
> "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
> "this.taskSchedulerManager" is null
> at org.apache.tez.dag.app.DAGAppMaster.initiateStop(DAGAppMaster.java:2111)
> at org.apache.tez.dag.app.DAGAppMaster.serviceStop(DAGAppMaster.java:2126)
> at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:220)
> at 
> org.apache.tez.dag.app.DAGAppMaster$DAGAppMasterShutdownHook.run(DAGAppMaster.java:2432)
> at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
> at java.base/java.lang.Thread.run(Thread.java:840)
> 2024-04-18 16:27:54,744 [WARN] [Thread-2] |util.ShutdownHookManager|: 
> ShutdownHook 'DAGAppMasterShutdownHook' failed, 
> java.util.concurrent.ExecutionException: java.lang.NullPointerException: 
> Cannot invoke "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" 
> because "this.taskSchedulerManager" is null
> java.util.concurrent.ExecutionException: java.lang.NullPointerException: 
> Cannot invoke "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" 
> because "this.taskSchedulerManager" is null
> at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
> at 
> org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
> at 
> org.apache.hadoop.util

[jira] [Commented] (HADOOP-19013) fs.getXattrs(path) for S3FS doesn't have x-amz-server-side-encryption-aws-kms-key-id header.

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846489#comment-17846489
 ] 

ASF GitHub Bot commented on HADOOP-19013:
-

hadoop-yetus commented on PR #6646:
URL: https://github.com/apache/hadoop/pull/6646#issuecomment-2111558054

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 01s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 00s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 00s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 00s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m 01s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  90m 06s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 44s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   4m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 50s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 144m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  10m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 00s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 157m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  21m 50s | 
[/patch-unit-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6646/5/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt)
 |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   5m 26s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 452m 18s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6646 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | MINGW64_NT-10.0-17763 64903e352815 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 9db3c4c095091426cd67ea612c2ab1fd62b9da1b |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6646/5/testReport/
 |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6646/5/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> fs.getXattrs(path) for S3FS doesn't have 
> x-amz-server-side-encryption-aws-kms-key-id header.
> 
>
> Key: HADOOP-19013
> URL: https://issues.apache.org/jira/browse/HADOOP-19013
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>
> Once a path has been encrypted with SSE-KMS with a key id during upload, and 
> we later read the attributes of the same file, they don't contain the key id 
> information as an attribute. Should we add it?
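
For context, a minimal usage sketch of the call under discussion (the bucket 
and object names are made up, and the "header." xattr prefix is an assumption 
here, not something confirmed in this thread):

{code:java}
import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch of the call under discussion; names are illustrative.
public class XAttrCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);
    Map<String, byte[]> attrs =
        fs.getXAttrs(new Path("s3a://example-bucket/encrypted-object"));
    // Before this change, the KMS key id header is absent from the returned map.
    byte[] keyId = attrs.get("header.x-amz-server-side-encryption-aws-kms-key-id");
    System.out.println(keyId == null
        ? "no KMS key id xattr"
        : new String(keyId, StandardCharsets.UTF_8));
  }
}
{code}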



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846477#comment-17846477
 ] 

ASF GitHub Bot commented on HADOOP-19170:
-

zhengchenyu commented on PR #6827:
URL: https://github.com/apache/hadoop/pull/6827#issuecomment-2111467319

   > thanks, merged to trunk. If you do a PR cherrypicking this against hadoop 
branch-3.4 I'll commit it there as soon as yetus is happy...no need for any 
code review.
   
   Yes, the only gap is a unit test. This PR only changes conditional 
compilation, so a unit test could not take effect. Just backport 
https://github.com/apache/hadoop/pull/6822 to branch-3.4.
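
   For readers following along, the usual shape of such a fix is to nest the 
probe so the function-like macro is only expanded where it is defined. A 
minimal sketch, with HAVE_FEATURE as a placeholder name rather than anything 
from the actual patch:

{code}
/* Nested-guard pattern: the inner #if is only reached when __GLIBC_PREREQ is
 * defined, so non-glibc platforms such as macOS never expand the undefined
 * function-like macro. HAVE_FEATURE is a placeholder name. */
#if defined(__sun)
  #define HAVE_FEATURE 1
#elif defined(__GLIBC_PREREQ)
  #if __GLIBC_PREREQ(2, 32)
    #define HAVE_FEATURE 1
  #endif
#endif
{code}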




> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: OS:  macOS Catalina 10.15.7
> compiler: clang 12.0.0
> cmake: 3.24.0
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> When I build the hadoop-common native code on macOS, I get this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that macOS does not provide glibc, and the C preprocessor 
> expands the whole conditional before evaluating it, so the undefined 
> function-like macro is an error even though defined(__GLIBC_PREREQ) is false.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19163) Upgrade protobuf version to 3.25.3

2024-05-14 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated HADOOP-19163:
---
Summary: Upgrade protobuf version to 3.25.3  (was: Upgrade protobuf version 
to 3.24.4)

> Upgrade protobuf version to 3.25.3
> --
>
> Key: HADOOP-19163
> URL: https://issues.apache.org/jira/browse/HADOOP-19163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-thirdparty
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19013) fs.getXattrs(path) for S3FS doesn't have x-amz-server-side-encryption-aws-kms-key-id header.

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846459#comment-17846459
 ] 

ASF GitHub Bot commented on HADOOP-19013:
-

hadoop-yetus commented on PR #6646:
URL: https://github.com/apache/hadoop/pull/6646#issuecomment-2111294977

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 14s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 54s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 128m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6646/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6646 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f0434fa12d15 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 
09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 9db3c4c095091426cd67ea612c2ab1fd62b9da1b |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6646/4/testReport/ |
   | Max. process+thread count | 746 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6646/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> fs.getXattrs(path) for S3FS doesn't have 
>

[jira] [Commented] (HADOOP-19148) Update solr from 8.11.2 to 8.11.3 to address CVE-2023-50298

2024-05-14 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846458#comment-17846458
 ] 

Viraj Jasani commented on HADOOP-19148:
---

[~brahmareddy] Is anyone picking this up? If not, shall I create the PR?

> Update solr from 8.11.2 to 8.11.3 to address CVE-2023-50298
> ---
>
> Key: HADOOP-19148
> URL: https://issues.apache.org/jira/browse/HADOOP-19148
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Brahma Reddy Battula
>Priority: Major
>
> Update solr from 8.11.2 to 8.11.3 to address CVE-2023-50298



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19172) Upgrade aws-java-sdk to 1.12.720

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846457#comment-17846457
 ] 

ASF GitHub Bot commented on HADOOP-19172:
-

virajjasani commented on PR #6823:
URL: https://github.com/apache/hadoop/pull/6823#issuecomment-2111241771

   +1 (non-binding)




> Upgrade aws-java-sdk to 1.12.720
> 
>
> Key: HADOOP-19172
> URL: https://issues.apache.org/jira/browse/HADOOP-19172
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.4.0, 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> Update to the latest AWS SDK, to stop anyone worrying about the ion library 
> CVE https://nvd.nist.gov/vuln/detail/CVE-2024-21634
> This isn't exposed in the s3a client, but may be used downstream. 
> on v2 sdk releases, the v1 sdk is only used during builds; on 3.3.x it is shipped



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846407#comment-17846407
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

hadoop-yetus commented on PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#issuecomment-2111003939

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 00s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 01s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 01s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 01s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m 01s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m 00s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  | 119m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  56m 02s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   6m 47s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   6m 20s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6803/5/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  javadoc  |   6m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 190m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   6m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  51m 03s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  51m 03s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 00s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   7m 14s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   6m 18s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6803/5/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   6m 44s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 208m 05s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   8m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 652m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6803 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | MINGW64_NT-10.0-17763 2348886a150d 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / b506761282fef3aac9d96d04a68cb1a3afe8def1 |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6803/5/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6803/5/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> one performance issue and following is out findings:
>  # Around 96% (196

[jira] [Commented] (HADOOP-19139) [ABFS]: No GetPathStatus call for opening AbfsInputStream

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846404#comment-17846404
 ] 

ASF GitHub Bot commented on HADOOP-19139:
-

steveloughran commented on PR #6699:
URL: https://github.com/apache/hadoop/pull/6699#issuecomment-2110977633

   @saxenapranav can you wait until there's a version of #6789 which adds 
something to hadoop-common to build an EnumSet from a comma-separated list of 
options? This will allow for easy extension: just add a new enum and a probe, 
and be consistent.
   
   I'll export it in a static method in a new class in org.apache.hadoop.util 
and call it from Configuration, so it'll be easy for you to pick up too. 
   
   Ideally we should try for common enum names too: I'll let you start there 
and copy them into my work.
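
   As a sketch of the idea (the class, method, and handling below are 
illustrative placeholders, not the API that #6789 adds):

{code:java}
import java.util.EnumSet;
import java.util.Locale;

// Illustrative sketch: build an EnumSet from a comma-separated option list.
// Names and behaviour here are placeholders, not the final hadoop-common API.
public final class EnumSetParser {
  public static <E extends Enum<E>> EnumSet<E> parse(Class<E> type, String csv) {
    EnumSet<E> result = EnumSet.noneOf(type);
    if (csv == null) {
      return result;
    }
    for (String token : csv.split(",")) {
      String name = token.trim();
      if (!name.isEmpty()) {
        result.add(Enum.valueOf(type, name.toUpperCase(Locale.ROOT)));
      }
    }
    return result;
  }
}
{code}

   With this shape, adding a new option only needs a new enum constant plus a 
probe for it at the call site, which is the extensibility point mentioned 
above.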




> [ABFS]: No GetPathStatus call for opening AbfsInputStream
> -
>
> Key: HADOOP-19139
> URL: https://issues.apache.org/jira/browse/HADOOP-19139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> Read API gives contentLen and etag of the path. This information would be 
> used in future calls on that inputStream. Prior information of eTag is of not 
> much importance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18786) Hadoop build depends on archives.apache.org

2024-05-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-18786:
---

Assignee: Christopher Tubbs

> Hadoop build depends on archives.apache.org
> ---
>
> Key: HADOOP-18786
> URL: https://issues.apache.org/jira/browse/HADOOP-18786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Christopher Tubbs
>Assignee: Christopher Tubbs
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Several times throughout Hadoop's source, the ASF archive is referenced, 
> including part of the build that downloads Yetus.
> Building a release from source should not require access to the ASF archives, 
> as that contributes to end users being subject to throttling and blocking by 
> INFRA, for "abuse" of the archives, even though they are merely building a 
> current ASF release from source. This is particularly problematic for 
> downstream packagers who must build from Hadoop's source, or for CI/CD 
> situations that depend on Hadoop's source, and particularly problematic for 
> those end users behind a NAT gateway, because even if Hadoop's use of the 
> archive is modest, it adds up for multiple users.
> The build should be modified, so that it does not require access to fixed 
> versions in the archives (or should work with the upstream of those dependent 
> projects to publish their releases elsewhere, for routine consumption). In 
> the interim, the source could be updated to point to the current dependency 
> versions available on downloads.apache.org.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18786) Hadoop build depends on archives.apache.org

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846400#comment-17846400
 ] 

ASF GitHub Bot commented on HADOOP-18786:
-

steveloughran commented on PR #5789:
URL: https://github.com/apache/hadoop/pull/5789#issuecomment-2110965419

   thanks, in trunk. Can you do a PR cherrypicking to branch-3.4 so we can keep 
that in sync? No need for more reviews, just a Yetus test run.




> Hadoop build depends on archives.apache.org
> ---
>
> Key: HADOOP-18786
> URL: https://issues.apache.org/jira/browse/HADOOP-18786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Christopher Tubbs
>Priority: Critical
>  Labels: pull-request-available
>
> Several times throughout Hadoop's source, the ASF archive is referenced, 
> including part of the build that downloads Yetus.
> Building a release from source should not require access to the ASF archives, 
> as that contributes to end users being subject to throttling and blocking by 
> INFRA, for "abuse" of the archives, even though they are merely building a 
> current ASF release from source. This is particularly problematic for 
> downstream packagers who must build from Hadoop's source, or for CI/CD 
> situations that depend on Hadoop's source, and particularly problematic for 
> those end users behind a NAT gateway, because even if Hadoop's use of the 
> archive is modest, it adds up for multiple users.
> The build should be modified, so that it does not require access to fixed 
> versions in the archives (or should work with the upstream of those dependent 
> projects to publish their releases elsewhere, for routine consumption). In 
> the interim, the source could be updated to point to the current dependency 
> versions available on downloads.apache.org.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18786) Hadoop build depends on archives.apache.org

2024-05-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18786:

Fix Version/s: 3.5.0

> Hadoop build depends on archives.apache.org
> ---
>
> Key: HADOOP-18786
> URL: https://issues.apache.org/jira/browse/HADOOP-18786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Christopher Tubbs
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Several times throughout Hadoop's source, the ASF archive is referenced, 
> including part of the build that downloads Yetus.
> Building a release from source should not require access to the ASF archives, 
> as that contributes to end users being subject to throttling and blocking by 
> INFRA, for "abuse" of the archives, even though they are merely building a 
> current ASF release from source. This is particularly problematic for 
> downstream packagers who must build from Hadoop's source, or for CI/CD 
> situations that depend on Hadoop's source, and particularly problematic for 
> those end users behind a NAT gateway, because even if Hadoop's use of the 
> archive is modest, it adds up for multiple users.
> The build should be modified, so that it does not require access to fixed 
> versions in the archives (or should work with the upstream of those dependent 
> projects to publish their releases elsewhere, for routine consumption). In 
> the interim, the source could be updated to point to the current dependency 
> versions available on downloads.apache.org.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18786) Hadoop build depends on archives.apache.org

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846399#comment-17846399
 ] 

ASF GitHub Bot commented on HADOOP-18786:
-

steveloughran merged PR #5789:
URL: https://github.com/apache/hadoop/pull/5789




> Hadoop build depends on archives.apache.org
> ---
>
> Key: HADOOP-18786
> URL: https://issues.apache.org/jira/browse/HADOOP-18786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Christopher Tubbs
>Priority: Critical
>  Labels: pull-request-available
>
> Several times throughout Hadoop's source, the ASF archive is referenced, 
> including part of the build that downloads Yetus.
> Building a release from source should not require access to the ASF archives, 
> as that contributes to end users being subject to throttling and blocking by 
> INFRA, for "abuse" of the archives, even though they are merely building a 
> current ASF release from source. This is particularly problematic for 
> downstream packagers who must build from Hadoop's source, or for CI/CD 
> situations that depend on Hadoop's source, and particularly problematic for 
> those end users behind a NAT gateway, because even if Hadoop's use of the 
> archive is modest, it adds up for multiple users.
> The build should be modified, so that it does not require access to fixed 
> versions in the archives (or should work with the upstream of those dependent 
> projects to publish their releases elsewhere, for routine consumption). In 
> the interim, the source could be updated to point to the current dependency 
> versions available on downloads.apache.org.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18786) Hadoop build depends on archives.apache.org

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846398#comment-17846398
 ] 

ASF GitHub Bot commented on HADOOP-18786:
-

steveloughran commented on PR #5789:
URL: https://github.com/apache/hadoop/pull/5789#issuecomment-2110961454

   ok, let's merge and see what happens.




> Hadoop build depends on archives.apache.org
> ---
>
> Key: HADOOP-18786
> URL: https://issues.apache.org/jira/browse/HADOOP-18786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Christopher Tubbs
>Priority: Critical
>  Labels: pull-request-available
>
> Several times throughout Hadoop's source, the ASF archive is referenced, 
> including part of the build that downloads Yetus.
> Building a release from source should not require access to the ASF archives, 
> as that contributes to end users being subject to throttling and blocking by 
> INFRA, for "abuse" of the archives, even though they are merely building a 
> current ASF release from source. This is particularly problematic for 
> downstream packagers who must build from Hadoop's source, or for CI/CD 
> situations that depend on Hadoop's source, and particularly problematic for 
> those end users behind a NAT gateway, because even if Hadoop's use of the 
> archive is modest, it adds up for multiple users.
> The build should be modified, so that it does not require access to fixed 
> versions in the archives (or should work with the upstream of those dependent 
> projects to publish their releases elsewhere, for routine consumption). In 
> the interim, the source could be updated to point to the current dependency 
> versions available on downloads.apache.org.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846396#comment-17846396
 ] 

ASF GitHub Bot commented on HADOOP-19170:
-

steveloughran commented on PR #6827:
URL: https://github.com/apache/hadoop/pull/6827#issuecomment-2110955296

   thanks, merged to trunk. If you do a PR cherrypicking this against hadoop 
branch-3.4 I'll commit it there as soon as yetus is happy...no need for any 
code review.




> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: OS:  macOS Catalina 10.15.7
> compiler: clang 12.0.0
> cmake: 3.24.0
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> When I build the hadoop-common native code on macOS, I get this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that macOS does not provide glibc, and the C preprocessor 
> expands the whole conditional before evaluating it, so the undefined 
> function-like macro is an error even though defined(__GLIBC_PREREQ) is false.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18958) Improve UserGroupInformation debug log

2024-05-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-18958:
---

Assignee: wangzhihui

>  Improve UserGroupInformation debug log
> ---
>
> Key: HADOOP-18958
> URL: https://issues.apache.org/jira/browse/HADOOP-18958
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0, 3.3.5
>Reporter: wangzhihui
>Assignee: wangzhihui
>Priority: Minor
>  Labels: pull-request-available
> Attachments: 20231029-122825-1.jpeg, 20231029-122825.jpeg, 
> 20231030-143525.jpeg, image-2023-10-29-09-47-56-489.png, 
> image-2023-10-30-14-35-11-161.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Using “new Exception()” to print the call stack of the doAs method in the 
> UserGroupInformation class prints meaningless exception information and too 
> many call stacks, which is not conducive to troubleshooting.
> *example:*
> !20231029-122825.jpeg|width=991,height=548!
>  
> *improved result* :
>  
> !image-2023-10-29-09-47-56-489.png|width=1099,height=156!
> !20231030-143525.jpeg|width=572,height=674!
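
A minimal sketch of the kind of lighter-weight caller logging described above 
(assumes Java 9+ for StackWalker and an SLF4J logger; illustrative, not the 
actual patch):

{code:java}
import java.util.stream.Collectors;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Illustrative alternative to "new Exception()": capture only a few caller
// frames instead of a full, noisy stack trace.
public final class CallerLog {
  private static final Logger LOG = LoggerFactory.getLogger(CallerLog.class);

  public static void logCaller() {
    if (LOG.isDebugEnabled()) {
      String callers = StackWalker.getInstance().walk(frames ->
          frames.skip(1)            // drop this logging frame itself
                .limit(3)           // keep just the nearest callers
                .map(StackWalker.StackFrame::toString)
                .collect(Collectors.joining(" <- ")));
      LOG.debug("doAs called from: {}", callers);
    }
  }
}
{code}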



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18958) Improve UserGroupInformation debug log

2024-05-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18958.
-
Fix Version/s: 3.5.0
   Resolution: Fixed

>  Improve UserGroupInformation debug log
> ---
>
> Key: HADOOP-18958
> URL: https://issues.apache.org/jira/browse/HADOOP-18958
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0, 3.3.5
>Reporter: wangzhihui
>Assignee: wangzhihui
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.5.0
>
> Attachments: 20231029-122825-1.jpeg, 20231029-122825.jpeg, 
> 20231030-143525.jpeg, image-2023-10-29-09-47-56-489.png, 
> image-2023-10-30-14-35-11-161.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Using “new Exception()” to print the call stack of the doAs method in the 
> UserGroupInformation class prints meaningless exception information and too 
> many call stacks, which is not conducive to troubleshooting.
> *example:*
> !20231029-122825.jpeg|width=991,height=548!
>  
> *improved result* :
>  
> !image-2023-10-29-09-47-56-489.png|width=1099,height=156!
> !20231030-143525.jpeg|width=572,height=674!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18958) Improve UserGroupInformation debug log

2024-05-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18958:

Summary:  Improve UserGroupInformation debug log  (was: 
UserGroupInformation debug log improve)

>  Improve UserGroupInformation debug log
> ---
>
> Key: HADOOP-18958
> URL: https://issues.apache.org/jira/browse/HADOOP-18958
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0, 3.3.5
>Reporter: wangzhihui
>Priority: Minor
>  Labels: pull-request-available
> Attachments: 20231029-122825-1.jpeg, 20231029-122825.jpeg, 
> 20231030-143525.jpeg, image-2023-10-29-09-47-56-489.png, 
> image-2023-10-30-14-35-11-161.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Using “new Exception()” to print the call stack of the doAs method in the 
> UserGroupInformation class prints meaningless exception information and too 
> many call stacks, which is not conducive to troubleshooting.
> *example:*
> !20231029-122825.jpeg|width=991,height=548!
>  
> *improved result* :
>  
> !image-2023-10-29-09-47-56-489.png|width=1099,height=156!
> !20231030-143525.jpeg|width=572,height=674!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-18958) UserGroupInformation debug log improve

2024-05-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-18958:
-

> UserGroupInformation debug log improve
> --
>
> Key: HADOOP-18958
> URL: https://issues.apache.org/jira/browse/HADOOP-18958
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0, 3.3.5
>Reporter: wangzhihui
>Priority: Minor
>  Labels: pull-request-available
> Attachments: 20231029-122825-1.jpeg, 20231029-122825.jpeg, 
> 20231030-143525.jpeg, image-2023-10-29-09-47-56-489.png, 
> image-2023-10-30-14-35-11-161.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Using “new Exception()” to print the call stack of the doAs method in the 
> UserGroupInformation class prints meaningless exception information and too 
> many call stacks, which is not conducive to troubleshooting.
> *example:*
> !20231029-122825.jpeg|width=991,height=548!
>  
> *improved result* :
>  
> !image-2023-10-29-09-47-56-489.png|width=1099,height=156!
> !20231030-143525.jpeg|width=572,height=674!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18958) UserGroupInformation debug log improve

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846395#comment-17846395
 ] 

ASF GitHub Bot commented on HADOOP-18958:
-

steveloughran merged PR #6255:
URL: https://github.com/apache/hadoop/pull/6255




> UserGroupInformation debug log improve
> --
>
> Key: HADOOP-18958
> URL: https://issues.apache.org/jira/browse/HADOOP-18958
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0, 3.3.5
>Reporter: wangzhihui
>Priority: Minor
>  Labels: pull-request-available
> Attachments: 20231029-122825-1.jpeg, 20231029-122825.jpeg, 
> 20231030-143525.jpeg, image-2023-10-29-09-47-56-489.png, 
> image-2023-10-30-14-35-11-161.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Using “new Exception()” to print the call stack of the doAs method in the 
> UserGroupInformation class prints meaningless exception information and too 
> many call stacks, which is not conducive to troubleshooting.
> *example:*
> !20231029-122825.jpeg|width=991,height=548!
>  
> *improved result* :
>  
> !image-2023-10-29-09-47-56-489.png|width=1099,height=156!
> !20231030-143525.jpeg|width=572,height=674!



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846394#comment-17846394
 ] 

ASF GitHub Bot commented on HADOOP-19170:
-

steveloughran commented on PR #6827:
URL: https://github.com/apache/hadoop/pull/6827#issuecomment-2110942031

   not sure what is up here. lack of a test?




> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: OS:  macOS Catalina 10.15.7
> compiler: clang 12.0.0
> cmake: 3.24.0
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> When I build the hadoop-common native code on macOS, I get this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that macOS does not provide glibc, and the C preprocessor 
> expands the whole conditional before evaluating it, so the undefined 
> function-like macro is an error even though defined(__GLIBC_PREREQ) is false.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19172) Upgrade aws-java-sdk to 1.12.720

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846392#comment-17846392
 ] 

ASF GitHub Bot commented on HADOOP-19172:
-

steveloughran commented on PR #6823:
URL: https://github.com/apache/hadoop/pull/6823#issuecomment-2110926534

   @mukund-thakur @ahmarsuhail can I get some review of this? Now that we don't 
ship this, it's low risk.




> Upgrade aws-java-sdk to 1.12.720
> 
>
> Key: HADOOP-19172
> URL: https://issues.apache.org/jira/browse/HADOOP-19172
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.4.0, 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> Update to the latest AWS SDK, to stop anyone worrying about the ion library 
> CVE https://nvd.nist.gov/vuln/detail/CVE-2024-21634
> This isn't exposed in the s3a client, but may be used downstream. 
> on v2 sdk releases, the v1 sdk is only used during builds; on 3.3.x it is shipped



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-19152) Do not hard code security providers.

2024-05-14 Thread Tsz-wo Sze (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz-wo Sze resolved HADOOP-19152.
-
Fix Version/s: 3.5.0
 Hadoop Flags: Reviewed
 Release Note: Added a new conf 
"hadoop.security.crypto.jce.provider.auto-add" (default: true) to 
enable/disable auto-adding BouncyCastleProvider.  This change also avoids 
statically loading the BouncyCastleProvider class.
   Resolution: Fixed

The pull request is now merged.
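
Roughly, the behaviour the release note describes looks like the sketch below 
(the configuration key comes from the release note; the class, method, and 
error handling are illustrative):

{code:java}
import java.security.Provider;
import java.security.Security;
import org.apache.hadoop.conf.Configuration;

// Illustrative sketch of the release-note behaviour: register BouncyCastle
// only when the new conf allows it, and load the provider class reflectively
// so it is never statically linked.
public final class ProviderSetup {
  public static void maybeAddBouncyCastle(Configuration conf) throws Exception {
    if (conf.getBoolean("hadoop.security.crypto.jce.provider.auto-add", true)) {
      Class<?> bc = Class.forName("org.bouncycastle.jce.provider.BouncyCastleProvider");
      Security.addProvider((Provider) bc.getDeclaredConstructor().newInstance());
    }
  }
}
{code}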

> Do not hard code security providers.
> 
>
> Key: HADOOP-19152
> URL: https://issues.apache.org/jira/browse/HADOOP-19152
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> In order to support different security providers in different clusters, we 
> should not hard code a provider in our code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19171) S3A: handle alternative forms of connection failure

2024-05-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19171:

Description: 
We've had reports of network connection failures surfacing deeper in the stack 
where we don't convert to AWSApiCallTimeoutException so they aren't retried 
properly (retire connection and repeat)


{code}
Unable to execute HTTP request: Broken pipe (Write failed)
{code}


{code}
 Your socket connection to the server was not read from or written to within 
the timeout period. Idle connections will be closed. (Service: Amazon S3; 
Status Code: 400; Error Code: RequestTimeout
{code}

Note: this is the v1 SDK, but the 400 error is treated as fail-fast in all our 
versions, and I don't think we do the same for the broken pipe. That one is 
going to be trickier to handle: unless it is coming from the http/tls 
libraries, "broken pipe" may not appear in the newer builds. We'd have to look 
for the string in the SDKs to see what causes it and go from there.



  was:
We've had reports of network connection failures surfacing deeper in the stack 
where we don't convert to AWSApiCallTimeoutException so they aren't retried 
properly (retire connection and repeat)


{code}
Unable to execute HTTP request: Broken pipe (Write failed)
{code}


{code}
 Your socket connection to the server was not read from or written to within 
the timeout period. Idle connections will be closed. (Service: Amazon S3; 
Status Code: 400; Error Code: RequestTimeout
{code}

note, this is v1 sdk but the 400 error is treated as fail-fast in all our 
versions




> S3A: handle alternative forms of connection failure
> ---
>
> Key: HADOOP-19171
> URL: https://issues.apache.org/jira/browse/HADOOP-19171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0, 3.3.6
>Reporter: Steve Loughran
>Priority: Major
>
> We've had reports of network connection failures surfacing deeper in the 
> stack where we don't convert to AWSApiCallTimeoutException so they aren't 
> retried properly (retire connection and repeat)
> {code}
> Unable to execute HTTP request: Broken pipe (Write failed)
> {code}
> {code}
>  Your socket connection to the server was not read from or written to within 
> the timeout period. Idle connections will be closed. (Service: Amazon S3; 
> Status Code: 400; Error Code: RequestTimeout
> {code}
> Note: this is the v1 SDK, but the 400 error is treated as fail-fast in all 
> our versions, and I don't think we do the same for the broken pipe. That one 
> is going to be trickier to handle: unless it is coming from the http/tls 
> libraries, "broken pipe" may not appear in the newer builds. We'd have to 
> look for the string in the SDKs to see what causes it and go from there.
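
A rough sketch of the kind of message-based classification being discussed 
(the matched strings come from the reports above; the class and method are 
illustrative, not the S3A implementation):

{code:java}
// Illustrative sketch: treat these two failure modes as retryable connection
// failures based on the exception message. Not the actual S3A code.
public final class ConnectionFailureClassifier {
  public static boolean looksLikeConnectionFailure(Exception e) {
    String msg = e.getMessage();
    return msg != null
        && (msg.contains("Broken pipe")
            || msg.contains("not read from or written to within the timeout period"));
  }
}
{code}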



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19152) Do not hard code security providers.

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846386#comment-17846386
 ] 

ASF GitHub Bot commented on HADOOP-19152:
-

szetszwo commented on PR #6739:
URL: https://github.com/apache/hadoop/pull/6739#issuecomment-2110845650

   @steveloughran , thanks a lot for reviewing this!




> Do not hard code security providers.
> 
>
> Key: HADOOP-19152
> URL: https://issues.apache.org/jira/browse/HADOOP-19152
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
>
> In order to support different security providers in different clusters, we 
> should not hard code a provider in our code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19171) S3A: handle alternative forms of connection failure

2024-05-14 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19171:

Description: 
We've had reports of network connection failures surfacing deeper in the stack 
where we don't convert to AWSApiCallTimeoutException so they aren't retried 
properly (retire connection and repeat)


{code}
Unable to execute HTTP request: Broken pipe (Write failed)
{code}


{code}
 Your socket connection to the server was not read from or written to within 
the timeout period. Idle connections will be closed. (Service: Amazon S3; 
Status Code: 400; Error Code: RequestTimeout
{code}

note, this is v1 sdk but the 400 error is treated as fail-fast in all our 
versions



  was:
We've had reports of network connection failures surfacing deeper in the stack 
where we don't convert to AWSApiCallTimeoutException so they aren't retried 
properly (retire connection and repeat)


{code}
Unable to execute HTTP request: Broken pipe (Write failed)
{code}


{code}
 Your socket connection to the server was not read from or written to within 
the timeout period. Idle connections will be closed. (Service: Amazon S3; 
Status Code: 400; Error Code: RequestTimeout
{code}




> S3A: handle alternative forms of connection failure
> ---
>
> Key: HADOOP-19171
> URL: https://issues.apache.org/jira/browse/HADOOP-19171
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0, 3.3.6
>Reporter: Steve Loughran
>Priority: Major
>
> We've had reports of network connection failures surfacing deeper in the 
> stack where we don't convert to AWSApiCallTimeoutException so they aren't 
> retried properly (retire connection and repeat)
> {code}
> Unable to execute HTTP request: Broken pipe (Write failed)
> {code}
> {code}
>  Your socket connection to the server was not read from or written to within 
> the timeout period. Idle connections will be closed. (Service: Amazon S3; 
> Status Code: 400; Error Code: RequestTimeout
> {code}
> note, this is v1 sdk but the 400 error is treated as fail-fast in all our 
> versions
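
One possible shape for the extra classification, assuming plain string
matching on the SDK exception message (the method name and the matched
strings here are illustrative, not the actual S3A translation code):

{code:java}
/**
 * Heuristic check for connection failures that surface without a
 * dedicated exception type, so they can be mapped to a retryable
 * exception (retire the connection and repeat the request).
 */
static boolean looksLikeConnectionFailure(Exception e) {
  String msg = e.getMessage();
  if (msg == null) {
    return false;
  }
  // Fragile by nature: the text comes from the http/tls layer or the
  // service, and may differ across SDK versions.
  return msg.contains("Broken pipe")
      || msg.contains("Error Code: RequestTimeout");
}
{code}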



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19152) Do not hard code security providers.

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846385#comment-17846385
 ] 

ASF GitHub Bot commented on HADOOP-19152:
-

szetszwo merged PR #6739:
URL: https://github.com/apache/hadoop/pull/6739




> Do not hard code security providers.
> 
>
> Key: HADOOP-19152
> URL: https://issues.apache.org/jira/browse/HADOOP-19152
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
>
> In order to support different security providers in different clusters, we 
> should not hard code a provider in our code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19152) Do not hard code security providers.

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846384#comment-17846384
 ] 

ASF GitHub Bot commented on HADOOP-19152:
-

szetszwo commented on PR #6739:
URL: https://github.com/apache/hadoop/pull/6739#issuecomment-2110842717

   > continuous-integration/jenkins/pr-head Pending — This commit is being built
   
   This has been stuck on `Windows Batch Script` for more than 22 hours.  Since 
this previously passed all the GitHub Actions and got a +1 from Yetus, I will 
merge it without waiting any longer.




> Do not hard code security providers.
> 
>
> Key: HADOOP-19152
> URL: https://issues.apache.org/jira/browse/HADOOP-19152
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
>
> In order to support different security providers in different clusters, we 
> should not hard code a provider in our code.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19154) upgrade bouncy castle to 1.78.1 due to CVEs

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846383#comment-17846383
 ] 

ASF GitHub Bot commented on HADOOP-19154:
-

ayushtkn commented on PR #6755:
URL: https://github.com/apache/hadoop/pull/6755#issuecomment-2110839627

   @pjfanning the result is from the Windows build, which doesn't run tests & 
has some issues with mvn site; the actual build result for your PR is here:
   
https://ci-hadoop.apache.org/blue/organizations/jenkins/hadoop-multibranch/detail/PR-6755/5/pipeline
   
   It crashed or timed out before giving you the result.
   
   For the future (screenshot: 
https://github.com/apache/hadoop/assets/25608848/ebb64295-36c8-4ac5-9940-e1b097bc0257):
   there are two links, one the Windows one & the other the normal one; you can 
check the normal one.




> upgrade bouncy castle to 1.78.1 due to CVEs
> ---
>
> Key: HADOOP-19154
> URL: https://issues.apache.org/jira/browse/HADOOP-19154
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0, 3.3.6
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> [https://www.bouncycastle.org/releasenotes.html#r1rv78]
> There is a v1.78.1 release but no notes for it yet.
> For v1.78
> h3. 2.1.5 Security Advisories.
> Release 1.78 deals with the following CVEs:
>  * CVE-2024-29857 - Importing an EC certificate with specially crafted F2m 
> parameters can cause high CPU usage during parameter evaluation.
>  * CVE-2024-30171 - Possible timing based leakage in RSA based handshakes due 
> to exception processing eliminated.
>  * CVE-2024-30172 - Crafted signature and public key can be used to trigger 
> an infinite loop in the Ed25519 verification code.
>  * CVE-2024-301XX - When endpoint identification is enabled and an SSL socket 
> is not created with an explicit hostname (as happens with 
> HttpsURLConnection), hostname verification could be performed against a 
> DNS-resolved IP address. This has been fixed.
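
As a quick post-upgrade sanity check, one can print which Bouncy Castle build
is actually registered at runtime; a small sketch using only the standard JCA
API (it assumes the provider has been registered under its usual "BC" name):

{code:java}
import java.security.Provider;
import java.security.Security;

public class CheckBcVersion {
  public static void main(String[] args) {
    Provider bc = Security.getProvider("BC");
    // Prints e.g. "BouncyCastle Security Provider v1.78" once registered.
    System.out.println(bc == null
        ? "Bouncy Castle provider not registered"
        : bc.getInfo());
  }
}
{code}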



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18958) UserGroupInformation debug log improve

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846352#comment-17846352
 ] 

ASF GitHub Bot commented on HADOOP-18958:
-

hadoop-yetus commented on PR #6255:
URL: https://github.com/apache/hadoop/pull/6255#issuecomment-2110542796

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 00s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  spotbugs  |   0m 01s |  |  spotbugs executables are not 
available.  |
   | +0 :ok: |  codespell  |   0m 01s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 01s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m 00s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m 00s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  93m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  41m 35s |  |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   4m 44s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |   4m 29s | 
[/branch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6255/4/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in trunk failed.  |
   | +1 :green_heart: |  javadoc  |   4m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 148m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   5m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  41m 08s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  41m 08s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 01s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 56s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   4m 41s | 
[/patch-mvnsite-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6255/4/artifact/out/patch-mvnsite-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   5m 01s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 163m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   6m 01s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 507m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6255 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | MINGW64_NT-10.0-17763 a93058574dc4 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / add361c645e22513a75a6401a4328afdeca77aba |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6255/4/testReport/
 |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6255/4/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> UserGroupInformation debug log improve
> --
>
> Key: HADOOP-18958
> URL: https://issues.apache.org/jira/browse/HADOOP-18958
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0, 3.3.5
>Reporter: wangzhihui
>Priority: Minor
>  Labels: pull-request-available
> Attachments: 20231029-122825-1.jpeg, 20231029-122825.jpeg, 
> 20231030-143525.jpeg, image-2023-10-29-09-47-56-489.png, 
> image-2023-10-30-14-35-11-161.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The UserGroupInformation class uses “new Exception()” to print the call 
> stack of the doAs method. This prints meaningless Exception information and 
> too many call-stack frames, which is not conducive to troubleshooting
> *example:*
> !20231029-122825.jpeg|width=991,heig

[jira] [Comment Edited] (HADOOP-19174) Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath

2024-05-14 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846307#comment-17846307
 ] 

Ayush Saxena edited comment on HADOOP-19174 at 5/14/24 1:02 PM:


Additional questions:
 * Which Hive version? Hive 4.x only supports hadoop-3.3.6
 * Does this repro on "Apache" Hive 4.x, if yes, can you share which query. We 
have tests in place in Hive, where things are working with hadoop-3.3.6 & tez 
0.10.3, if it doesn't repro there, then "not our mess", if any other apache 
version -> "we don't support that", if any internal repo -> "their mess"
 * From your exception

{noformat}
java.lang.NoClassDefFoundError: com/google/protobuf/ServiceException at 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.convert(PBHelperClient.java:807)
 at 
{noformat}
It says {{com/google/protobuf/ServiceException}} was not found, which means 
{{protobuf-2.5.0}} isn't on the classpath; if it were there, that class would 
have been found.

{{ProtobufHelper}} is in {{Hadoop-Common}} and it does reference 
"ServiceException" ({{org.apache.hadoop.thirdparty.protobuf.ServiceException}}), 
but that is the {{shaded}} one, while your application is looking for the 
{{non-shaded}} one.

Most probably there are *multiple hadoop jars in the classpath, one from Tez 
which is the Hadoop 3.3.x line and the other from Hive*. I am pretty sure your 
Hive is either some patched version on top of 3.x or some Apache 3.x line which 
is putting hadoop-3.1.0 jars in the classpath, while Tez is pulling in 
Hadoop-3.3.6 jars, & the combination is messing things up


was (Author: ayushtkn):
Additional questions:
 * Which Hive version? Hive 4.x only supports hadoop-3.3.6
 * Does this repro on "Apache" Hive 4.x, if yes, can you share which query. We 
have tests in place in Hive, where things are working with hadoop-3.3.6 & tez 
0.10.3, if it doesn't repro there, then "not our mess", if any other apache 
version -> we don't support that, if any internal repo -> "there mess"
 * From your exception

{noformat}
java.lang.NoClassDefFoundError: com/google/protobuf/ServiceException at 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.convert(PBHelperClient.java:807)
 at 
{noformat}
It say {{com/google/protobuf/ServiceException}} not found that means 
{{protobuf-2.5.0}} isn't there in classpath, if it was there that class would 
have been found.

{{ProtobufHelper}} is in {{Hadoop-Common}} and it does have "ServiceException(" 
org.apache.hadoop.thirdparty.protobuf.ServiceException")" but it is the 
{{shaded}} one, but your application is looking for the {{non-shaded}} one.

Most probably there are *multiple hadoop jars in the classpath, One from Tez 
which is Hadoop 3.3.x line and the other is from Hive*, I am pretty sure your 
Hive is either some patched version on top of 3.x or some Apache 3.x line which 
is using putting hadoop-3.1.0 jars in the classpath & tez is pulling in 
Hadoop-3.3.6 jars & the combination is messing up things

> Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath
> --
>
> Key: HADOOP-19174
> URL: https://issues.apache.org/jira/browse/HADOOP-19174
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> There are two issues here:
> *1. We are running tez 0.10.3 which uses hadoop 3.3.6 version. Tez has 
> protobuf version 3.21.1*
> Below is the exception we get. This is due to protobuf-2.5.0 in our hadoop 
> classpath
> {code:java}
> java.lang.IllegalAccessError: class 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto tried to access 
> private field com.google.protobuf.AbstractMessage.memoizedSize 
> (org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto and 
> com.google.protobuf.AbstractMessage are in unnamed module of loader 'app')
> at 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto.getSerializedSize(DAGProtos.java:21636)
> at 
> com.google.protobuf.AbstractMessageLite.writeTo(AbstractMessageLite.java:75)
> at org.apache.tez.common.TezUtils.writeConfInPB(TezUtils.java:170)
> at org.apache.tez.common.TezUtils.createByteStringFromConf(TezUtils.java:83)
> at org.apache.tez.common.TezUtils.createUserPayloadFromConf(TezUtils.java:101)
> at org.apache.tez.dag.app.DAGAppMaster.serviceInit(DAGAppMaster.java:436)
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2600)
> at 
> java.base/java.security.AccessController.doPrivileged(AccessController.java:712)

[jira] [Commented] (HADOOP-19174) Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath

2024-05-14 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846307#comment-17846307
 ] 

Ayush Saxena commented on HADOOP-19174:
---

Additional questions:
 * Which Hive version? Hive 4.x only supports hadoop-3.3.6
 * Does this repro on "Apache" Hive 4.x, if yes, can you share which query. We 
have tests in place in Hive, where things are working with hadoop-3.3.6 & tez 
0.10.3, if it doesn't repro there, then "not our mess", if any other apache 
version -> we don't support that, if any internal repo -> "their mess"
 * From your exception

{noformat}
java.lang.NoClassDefFoundError: com/google/protobuf/ServiceException at 
org.apache.hadoop.hdfs.protocolPB.PBHelperClient.convert(PBHelperClient.java:807)
 at 
{noformat}
It says {{com/google/protobuf/ServiceException}} was not found, which means 
{{protobuf-2.5.0}} isn't on the classpath; if it were there, that class would 
have been found.

{{ProtobufHelper}} is in {{Hadoop-Common}} and it does reference 
"ServiceException" ({{org.apache.hadoop.thirdparty.protobuf.ServiceException}}), 
but that is the {{shaded}} one, while your application is looking for the 
{{non-shaded}} one.

Most probably there are *multiple hadoop jars in the classpath, one from Tez 
which is the Hadoop 3.3.x line and the other from Hive*. I am pretty sure your 
Hive is either some patched version on top of 3.x or some Apache 3.x line which 
is putting hadoop-3.1.0 jars in the classpath, while Tez is pulling in 
Hadoop-3.3.6 jars, & the combination is messing things up
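
A quick way to confirm that diagnosis is to print which jar actually supplies
each class on the job's classpath; an illustrative sketch (run it with the
exact classpath the failing job uses):

{code:java}
import java.security.CodeSource;

public class WhichJar {
  public static void main(String[] args) {
    String[] names = {
        "com.google.protobuf.ServiceException",
        "org.apache.hadoop.hdfs.protocolPB.PBHelperClient"
    };
    for (String name : names) {
      try {
        Class<?> c = Class.forName(name);
        // CodeSource can be null for bootstrap classes; classes loaded
        // from jars normally report their jar's location.
        CodeSource src = c.getProtectionDomain().getCodeSource();
        System.out.println(name + " -> "
            + (src == null ? "bootstrap/unknown" : src.getLocation()));
      } catch (ClassNotFoundException e) {
        System.out.println(name + " -> not on classpath");
      }
    }
  }
}
{code}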

> Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath
> --
>
> Key: HADOOP-19174
> URL: https://issues.apache.org/jira/browse/HADOOP-19174
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> There are two issues here:
> *1. We are running tez 0.10.3 which uses hadoop 3.3.6 version. Tez has 
> protobuf version 3.21.1*
> Below is the exception we get. This is due to protobuf-2.5.0 in our hadoop 
> classpath
> {code:java}
> java.lang.IllegalAccessError: class 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto tried to access 
> private field com.google.protobuf.AbstractMessage.memoizedSize 
> (org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto and 
> com.google.protobuf.AbstractMessage are in unnamed module of loader 'app')
> at 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto.getSerializedSize(DAGProtos.java:21636)
> at 
> com.google.protobuf.AbstractMessageLite.writeTo(AbstractMessageLite.java:75)
> at org.apache.tez.common.TezUtils.writeConfInPB(TezUtils.java:170)
> at org.apache.tez.common.TezUtils.createByteStringFromConf(TezUtils.java:83)
> at org.apache.tez.common.TezUtils.createUserPayloadFromConf(TezUtils.java:101)
> at org.apache.tez.dag.app.DAGAppMaster.serviceInit(DAGAppMaster.java:436)
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2600)
> at 
> java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
> at java.base/javax.security.auth.Subject.doAs(Subject.java:439)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
> at 
> org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2597)
> at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2384)
> 2024-04-18 16:27:54,741 [INFO] [shutdown-hook-0] |app.DAGAppMaster|: 
> DAGAppMasterShutdownHook invoked
> 2024-04-18 16:27:54,743 [INFO] [shutdown-hook-0] |service.AbstractService|: 
> Service org.apache.tez.dag.app.DAGAppMaster failed in state STOPPED
> java.lang.NullPointerException: Cannot invoke 
> "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
> "this.taskSchedulerManager" is null
> at org.apache.tez.dag.app.DAGAppMaster.initiateStop(DAGAppMaster.java:2111)
> at org.apache.tez.dag.app.DAGAppMaster.serviceStop(DAGAppMaster.java:2126)
> at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:220)
> at 
> org.apache.tez.dag.app.DAGAppMaster$DAGAppMasterShutdownHook.run(DAGAppMaster.java:2432)
> at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExec

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846296#comment-17846296
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

ChenSammi commented on PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#issuecomment-2110134515

   @Hexiaoqiao , thanks for the info about mvnsite. Currently, hadoop-yetus has 
passed. As for this "Apache Yetus" check, I have never seen it all green. Does 
this GitHub CI need to be all green before the PR can be committed?  
   




> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and our findings are as follows:
>  # Around 96% (196 out of 200) KMS container threads were in BLOCKED state at 
> following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then process crashed.
>  
> {code:java}
> http-nio-9292-exec-200PRIORITY : 5THREAD ID : 0X7F075C157800NATIVE ID : 
> 0X2C87FNATIVE ID (DECIMAL) : 182399STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> 199 out of the 200 threads were blocked at the above point.
> The lock they are waiting for is held by a thread that was trying to 
> createPassword and publish the result on ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrSharedCount(ZKDelegationTokenSecretManager.java:586)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrementDelegationTokenSeqNum(ZKDelegationTokenSecretManager.java:601)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:402)
> - locked <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:48)
> at org.apache.hadoop.security.token.Token.<init>(Token.java:67)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.createToken(DelegationTokenManager.java:1
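
The contention pattern in those dumps suggests the usual remedy: do the slow
ZooKeeper work outside the shared monitor, and hold a lock only for the brief
in-memory update. A minimal sketch of that shape (hypothetical class and
method names, not the actual patch):

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class TokenStoreSketch {
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private final Map<String, byte[]> passwords = new HashMap<>();

  /** Verification only reads, so many callers can proceed concurrently. */
  byte[] verify(String tokenId) {
    lock.readLock().lock();
    try {
      return passwords.get(tokenId);
    } finally {
      lock.readLock().unlock();
    }
  }

  /** The ZK round trips happen before the lock, not under it. */
  byte[] create(String tokenId) {
    byte[] pw = createAndPublishToZk(tokenId);  // slow, unlocked
    lock.writeLock().lock();
    try {
      passwords.put(tokenId, pw);               // brief, locked
    } finally {
      lock.writeLock().unlock();
    }
    return pw;
  }

  private byte[] createAndPublishToZk(String tokenId) {
    return new byte[0];  // placeholder for the Curator/ZK calls
  }
}
{code}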

[jira] [Commented] (HADOOP-13147) Constructors must not call overrideable methods

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846295#comment-17846295
 ] 

ASF GitHub Bot commented on HADOOP-13147:
-

hadoop-yetus commented on PR #6408:
URL: https://github.com/apache/hadoop/pull/6408#issuecomment-2110134328

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  35m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  16m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 39s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 237m 12s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6408/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6408 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 00af0b45d858 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 
09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / beffc86f9e86e135629be3276f28f2489b87f210 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6408/4/testReport/ |
   | Max. process+thread count | 1249 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6408/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Constructors m

[jira] [Commented] (HADOOP-18958) UserGroupInformation debug log improve

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846292#comment-17846292
 ] 

ASF GitHub Bot commented on HADOOP-18958:
-

hiwangzhihui commented on PR #6255:
URL: https://github.com/apache/hadoop/pull/6255#issuecomment-2110091932

   hi, @steveloughran If you have time, please review it again.  Thanks!




> UserGroupInformation debug log improve
> --
>
> Key: HADOOP-18958
> URL: https://issues.apache.org/jira/browse/HADOOP-18958
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.3.0, 3.3.5
>Reporter: wangzhihui
>Priority: Minor
>  Labels: pull-request-available
> Attachments: 20231029-122825-1.jpeg, 20231029-122825.jpeg, 
> 20231030-143525.jpeg, image-2023-10-29-09-47-56-489.png, 
> image-2023-10-30-14-35-11-161.png
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The UserGroupInformation class uses “new Exception()” to print the call 
> stack of the doAs method. This prints meaningless Exception information and 
> too many call-stack frames, which is not conducive to troubleshooting
> *example:*
> !20231029-122825.jpeg|width=991,height=548!
>  
> *improved result* :
>  
> !image-2023-10-29-09-47-56-489.png|width=1099,height=156!
> !20231030-143525.jpeg|width=572,height=674!
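
One alternative shape, capturing a trimmed caller stack at DEBUG level instead
of printing a full synthetic Exception (the helper name and frame depth are
illustrative, not necessarily what the patch does):

{code:java}
/** Render only the top few caller frames, skipping this helper itself. */
static String callerFrames(int maxFrames) {
  StackTraceElement[] stack = Thread.currentThread().getStackTrace();
  StringBuilder sb = new StringBuilder();
  // Frame 0 is getStackTrace(), frame 1 is this helper; start at 2.
  for (int i = 2; i < stack.length && i < 2 + maxFrames; i++) {
    sb.append("\n  at ").append(stack[i]);
  }
  return sb.toString();
}

// Hypothetical use inside UserGroupInformation.doAs():
//   if (LOG.isDebugEnabled()) {
//     LOG.debug("PrivilegedAction as: {}{}", this, callerFrames(5));
//   }
{code}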



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846285#comment-17846285
 ] 

ASF GitHub Bot commented on HADOOP-19170:
-

hadoop-yetus commented on PR #6827:
URL: https://github.com/apache/hadoop/pull/6827#issuecomment-2110040679

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   7m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.4 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 33s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |  10m 50s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   9m 10s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  shadedclient  |  73m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  cc  |   8m 18s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   8m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   8m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  cc  |   7m 54s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   7m 54s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   7m 54s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 36s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 139m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6827/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6827 |
   | Optional Tests | dupname asflicense compile cc mvnsite javac unit 
codespell detsecrets golang |
   | uname | Linux d584f947619d 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / 382212da670f3a534ed0303b0b82c780a9d60a99 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6827/1/testReport/ |
   | Max. process+thread count | 3152 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6827/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: OS:  macOS Catalina 10.15.7
> compiler: clang 12.0.0
> cmake: 3.24.0
>Reporter: Chenyu Zheng
>Assignee: Cheny

[jira] [Commented] (HADOOP-18958) UserGroupInformation debug log improve

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846273#comment-17846273
 ] 

ASF GitHub Bot commented on HADOOP-18958:
-

hadoop-yetus commented on PR #6255:
URL: https://github.com/apache/hadoop/pull/6255#issuecomment-2109938623

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  19m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  42m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  19m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |  18m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   2m 47s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  42m 21s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 18s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 256m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6255/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6255 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux e07dfb195c08 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / add361c645e22513a75a6401a4328afdeca77aba |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6255/6/testReport/ |
   | Max. process+thread count | 1455 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6255/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> UserGroupInformat

[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846268#comment-17846268
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

hadoop-yetus commented on PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#issuecomment-2109897935

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   9m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 23 unchanged - 
1 fixed = 23 total (was 24)  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 28s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 140m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6803/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6803 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux fa3112f0de77 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b506761282fef3aac9d96d04a68cb1a3afe8def1 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6803/5/testReport/ |
   | Max. process+thread count | 1775 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6803/5

[jira] [Created] (HADOOP-19175) update s3a committer docs

2024-05-14 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-19175:
---

 Summary: update s3a committer docs
 Key: HADOOP-19175
 URL: https://issues.apache.org/jira/browse/HADOOP-19175
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, fs/s3
Affects Versions: 3.4.0
Reporter: Steve Loughran


Update s3a committer docs

* declare that the magic committer is stable and make it the recommended one
* show how to use the new "mapred successfile" command to print the success 
file.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846251#comment-17846251
 ] 

ASF GitHub Bot commented on HADOOP-19170:
-

zhengchenyu commented on PR #6822:
URL: https://github.com/apache/hadoop/pull/6822#issuecomment-2109767782

   @steveloughran OK, I opened https://github.com/apache/hadoop/pull/6827 to 
backport this to branch-3.4.




> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: OS:  macOS Catalina 10.15.7
> compiler: clang 12.0.0
> cmake: 3.24.0
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> When I build the hadoop-common native code on macOS, I hit this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that macOS does not provide glibc, and the C preprocessor 
> expands and checks the whole #if expression before evaluating it, so the 
> defined(__GLIBC_PREREQ) guard does not protect the __GLIBC_PREREQ(2, 32) 
> call.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846250#comment-17846250
 ] 

ASF GitHub Bot commented on HADOOP-19170:
-

zhengchenyu opened a new pull request, #6827:
URL: https://github.com/apache/hadoop/pull/6827

   Backport https://github.com/apache/hadoop/pull/6822 to branch-3.4.




> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: OS:  macOS Catalina 10.15.7
> compiler: clang 12.0.0
> cmake: 3.24.0
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> When I build the hadoop-common native code on macOS, I hit this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that macOS does not provide glibc, and the C preprocessor 
> expands and checks the whole #if expression before evaluating it, so the 
> defined(__GLIBC_PREREQ) guard does not protect the __GLIBC_PREREQ(2, 32) 
> call.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846232#comment-17846232
 ] 

ASF GitHub Bot commented on HADOOP-19170:
-

steveloughran commented on PR #6822:
URL: https://github.com/apache/hadoop/pull/6822#issuecomment-2109677169

   can we pull this into branch-3.4?




> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: OS:  macOS Catalina 10.15.7
> compiler: clang 12.0.0
> cmake: 3.24.0
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> When I build the hadoop-common native code on macOS, I hit this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that macOS does not provide glibc, and the C preprocessor 
> expands and checks the whole #if expression before evaluating it, so the 
> defined(__GLIBC_PREREQ) guard does not protect the __GLIBC_PREREQ(2, 32) 
> call.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19174) Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath

2024-05-14 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846230#comment-17846230
 ] 

Steve Loughran commented on HADOOP-19174:
-

* which hadoop version
 * what happens if you remove the hadoop protobuf-2.5 jar.

it should be cuttable from 3.4.0 unless you need the hbase 1 timeline server. 
If you are using an older release, upgrade first

> Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath
> --
>
> Key: HADOOP-19174
> URL: https://issues.apache.org/jira/browse/HADOOP-19174
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> There are two issues here:
> *1. We are running tez 0.10.3 which uses hadoop 3.3.6 version. Tez has 
> protobuf version 3.21.1*
> Below is the exception we get. This is due to protobuf-2.5.0 in our hadoop 
> classpath
> {code:java}
> java.lang.IllegalAccessError: class 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto tried to access 
> private field com.google.protobuf.AbstractMessage.memoizedSize 
> (org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto and 
> com.google.protobuf.AbstractMessage are in unnamed module of loader 'app')
> at 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto.getSerializedSize(DAGProtos.java:21636)
> at 
> com.google.protobuf.AbstractMessageLite.writeTo(AbstractMessageLite.java:75)
> at org.apache.tez.common.TezUtils.writeConfInPB(TezUtils.java:170)
> at org.apache.tez.common.TezUtils.createByteStringFromConf(TezUtils.java:83)
> at org.apache.tez.common.TezUtils.createUserPayloadFromConf(TezUtils.java:101)
> at org.apache.tez.dag.app.DAGAppMaster.serviceInit(DAGAppMaster.java:436)
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2600)
> at 
> java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
> at java.base/javax.security.auth.Subject.doAs(Subject.java:439)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
> at 
> org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2597)
> at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2384)
> 2024-04-18 16:27:54,741 [INFO] [shutdown-hook-0] |app.DAGAppMaster|: 
> DAGAppMasterShutdownHook invoked
> 2024-04-18 16:27:54,743 [INFO] [shutdown-hook-0] |service.AbstractService|: 
> Service org.apache.tez.dag.app.DAGAppMaster failed in state STOPPED
> java.lang.NullPointerException: Cannot invoke 
> "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
> "this.taskSchedulerManager" is null
> at org.apache.tez.dag.app.DAGAppMaster.initiateStop(DAGAppMaster.java:2111)
> at org.apache.tez.dag.app.DAGAppMaster.serviceStop(DAGAppMaster.java:2126)
> at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:220)
> at 
> org.apache.tez.dag.app.DAGAppMaster$DAGAppMasterShutdownHook.run(DAGAppMaster.java:2432)
> at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
> at java.base/java.lang.Thread.run(Thread.java:840)
> 2024-04-18 16:27:54,744 [WARN] [Thread-2] |util.ShutdownHookManager|: 
> ShutdownHook 'DAGAppMasterShutdownHook' failed, 
> java.util.concurrent.ExecutionException: java.lang.NullPointerException: 
> Cannot invoke "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" 
> because "this.taskSchedulerManager" is null
> java.util.concurrent.ExecutionException: java.lang.NullPointerException: 
> Cannot invoke "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" 
> because "this.taskSchedulerManager" is null
> at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
> at 
> org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
> at 
> org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)
> Caused by: java.lang.NullPointerException: Cannot invoke 
> "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
> "this.taskSchedulerManager" is

[jira] [Commented] (HADOOP-19163) Upgrade protobuf version to 3.24.4

2024-05-14 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846226#comment-17846226
 ] 

Bilwa S T commented on HADOOP-19163:


[~ste...@apache.org] sure, I will update my PR

> Upgrade protobuf version to 3.24.4
> --
>
> Key: HADOOP-19163
> URL: https://issues.apache.org/jira/browse/HADOOP-19163
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-thirdparty
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19174) Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath

2024-05-14 Thread Bilwa S T (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilwa S T updated HADOOP-19174:
---
Description: 
There are two issues here:

*1. We are running tez 0.10.3 which uses hadoop 3.3.6 version. Tez has protobuf 
version 3.21.1*

Below is the exception we get. This is due to protobuf-2.5.0 in our hadoop 
classpath
{code:java}
java.lang.IllegalAccessError: class 
org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto tried to access 
private field com.google.protobuf.AbstractMessage.memoizedSize 
(org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto and 
com.google.protobuf.AbstractMessage are in unnamed module of loader 'app')
at 
org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto.getSerializedSize(DAGProtos.java:21636)
at com.google.protobuf.AbstractMessageLite.writeTo(AbstractMessageLite.java:75)
at org.apache.tez.common.TezUtils.writeConfInPB(TezUtils.java:170)
at org.apache.tez.common.TezUtils.createByteStringFromConf(TezUtils.java:83)
at org.apache.tez.common.TezUtils.createUserPayloadFromConf(TezUtils.java:101)
at org.apache.tez.dag.app.DAGAppMaster.serviceInit(DAGAppMaster.java:436)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2600)
at 
java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
at java.base/javax.security.auth.Subject.doAs(Subject.java:439)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
at 
org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2597)
at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2384)
2024-04-18 16:27:54,741 [INFO] [shutdown-hook-0] |app.DAGAppMaster|: 
DAGAppMasterShutdownHook invoked
2024-04-18 16:27:54,743 [INFO] [shutdown-hook-0] |service.AbstractService|: 
Service org.apache.tez.dag.app.DAGAppMaster failed in state STOPPED
java.lang.NullPointerException: Cannot invoke 
"org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
"this.taskSchedulerManager" is null
at org.apache.tez.dag.app.DAGAppMaster.initiateStop(DAGAppMaster.java:2111)
at org.apache.tez.dag.app.DAGAppMaster.serviceStop(DAGAppMaster.java:2126)
at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:220)
at 
org.apache.tez.dag.app.DAGAppMaster$DAGAppMasterShutdownHook.run(DAGAppMaster.java:2432)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
2024-04-18 16:27:54,744 [WARN] [Thread-2] |util.ShutdownHookManager|: 
ShutdownHook 'DAGAppMasterShutdownHook' failed, 
java.util.concurrent.ExecutionException: java.lang.NullPointerException: Cannot 
invoke "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
"this.taskSchedulerManager" is null
java.util.concurrent.ExecutionException: java.lang.NullPointerException: Cannot 
invoke "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
"this.taskSchedulerManager" is null
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
at 
org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)
Caused by: java.lang.NullPointerException: Cannot invoke 
"org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
"this.taskSchedulerManager" is null
at org.apache.tez.dag.app.DAGAppMaster.initiateStop(DAGAppMaster.java:2111)
at org.apache.tez.dag.app.DAGAppMaster.serviceStop(DAGAppMaster.java:2126)
at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:220)
at 
org.apache.tez.dag.app.DAGAppMaster$DAGAppMasterShutdownHook.run(DAGAppMaster.java:2432)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840){code}
*2. Running Hive with protobuf 3.24.4 on Hadoop 3.3.6*

Containers fail with the exception below:
{code:java}
2024-04-20 13:23:28,008 [INFO] [Dispatcher thread {Central}] 
|container.AMContainerImpl|: Container container_e02_1713455139547_0111_01_04 
exited with diagnostics set to Container failed

[jira] [Commented] (HADOOP-19174) Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath

2024-05-14 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846200#comment-17846200
 ] 

Bilwa S T commented on HADOOP-19174:


cc [~ayushsaxena] [~ste...@apache.org] 

> Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath
> --
>
> Key: HADOOP-19174
> URL: https://issues.apache.org/jira/browse/HADOOP-19174
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> There are two issues here:
> *1. We are running tez 0.10.3 which uses hadoop 3.3.6 version. Tez has 
> protobuf version 3.21.1*
> Below is the exception we get. This is due to protobuf-2.5.0 in our hadoop 
> classpath
> {code:java}
> java.lang.IllegalAccessError: class 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto tried to access 
> private field com.google.protobuf.AbstractMessage.memoizedSize 
> (org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto and 
> com.google.protobuf.AbstractMessage are in unnamed module of loader 'app')
> at 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto.getSerializedSize(DAGProtos.java:21636)
> at 
> com.google.protobuf.AbstractMessageLite.writeTo(AbstractMessageLite.java:75)
> at org.apache.tez.common.TezUtils.writeConfInPB(TezUtils.java:170)
> at org.apache.tez.common.TezUtils.createByteStringFromConf(TezUtils.java:83)
> at org.apache.tez.common.TezUtils.createUserPayloadFromConf(TezUtils.java:101)
> at org.apache.tez.dag.app.DAGAppMaster.serviceInit(DAGAppMaster.java:436)
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2600)
> at 
> java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
> at java.base/javax.security.auth.Subject.doAs(Subject.java:439)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
> at 
> org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2597)
> at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2384)
> 2024-04-18 16:27:54,741 [INFO] [shutdown-hook-0] |app.DAGAppMaster|: 
> DAGAppMasterShutdownHook invoked
> 2024-04-18 16:27:54,743 [INFO] [shutdown-hook-0] |service.AbstractService|: 
> Service org.apache.tez.dag.app.DAGAppMaster failed in state STOPPED
> java.lang.NullPointerException: Cannot invoke 
> "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
> "this.taskSchedulerManager" is null
> at org.apache.tez.dag.app.DAGAppMaster.initiateStop(DAGAppMaster.java:2111)
> at org.apache.tez.dag.app.DAGAppMaster.serviceStop(DAGAppMaster.java:2126)
> at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:220)
> at 
> org.apache.tez.dag.app.DAGAppMaster$DAGAppMasterShutdownHook.run(DAGAppMaster.java:2432)
> at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
> at java.base/java.lang.Thread.run(Thread.java:840)
> 2024-04-18 16:27:54,744 [WARN] [Thread-2] |util.ShutdownHookManager|: 
> ShutdownHook 'DAGAppMasterShutdownHook' failed, 
> java.util.concurrent.ExecutionException: java.lang.NullPointerException: 
> Cannot invoke "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" 
> because "this.taskSchedulerManager" is null
> java.util.concurrent.ExecutionException: java.lang.NullPointerException: 
> Cannot invoke "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" 
> because "this.taskSchedulerManager" is null
> at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
> at 
> org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
> at 
> org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)
> Caused by: java.lang.NullPointerException: Cannot invoke 
> "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
> "this.taskSchedulerManager" is null
> at org.apache.tez.dag.app.DAGAppMaster.initiateStop(DAGAppMaster.java:2111)
> at org.apache.tez.dag.app.DAGAppMaster.serviceStop(DAGAppMaster.java:2126)
> at org.apache

[jira] [Commented] (HADOOP-19174) Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath

2024-05-14 Thread Bilwa S T (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846198#comment-17846198
 ] 

Bilwa S T commented on HADOOP-19174:


To resolve this, either we need to upgrade Google's protobuf version on the 
Hadoop side, or downstream projects have to start using the shaded protobuf 
version.
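
For the shaded option, a minimal sketch of what a downstream change could look 
like, assuming the project depends on hadoop-shaded-protobuf from 
hadoop-thirdparty (which relocates com.google.protobuf to 
org.apache.hadoop.thirdparty.protobuf); the example class itself is 
hypothetical:

{code:java}
// Before: code compiled against unshaded protobuf can resolve against the
// protobuf-2.5.0 jar sitting on the Hadoop classpath at runtime.
// import com.google.protobuf.AbstractMessage;

// After: the relocated classes never collide with protobuf-2.5.0.
import org.apache.hadoop.thirdparty.protobuf.AbstractMessage;
import org.apache.hadoop.thirdparty.protobuf.ByteString;

public class ShadedProtobufExample {
  // Illustrative only: message classes would also need to be generated (or
  // relocated at build time) against the shaded AbstractMessage.
  public static ByteString serialize(AbstractMessage msg) {
    return msg.toByteString();
  }
}
{code}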

> Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath
> --
>
> Key: HADOOP-19174
> URL: https://issues.apache.org/jira/browse/HADOOP-19174
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Bilwa S T
>Assignee: Bilwa S T
>Priority: Major
>
> There are two issues here:
> *1. We are running tez 0.10.3 which uses hadoop 3.3.6 version. Tez has 
> protobuf version 3.21.1*
> Below is the exception we get. This is due to protobuf-2.5.0 in our hadoop 
> classpath
> {code:java}
> java.lang.IllegalAccessError: class 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto tried to access 
> private field com.google.protobuf.AbstractMessage.memoizedSize 
> (org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto and 
> com.google.protobuf.AbstractMessage are in unnamed module of loader 'app')
> at 
> org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto.getSerializedSize(DAGProtos.java:21636)
> at 
> com.google.protobuf.AbstractMessageLite.writeTo(AbstractMessageLite.java:75)
> at org.apache.tez.common.TezUtils.writeConfInPB(TezUtils.java:170)
> at org.apache.tez.common.TezUtils.createByteStringFromConf(TezUtils.java:83)
> at org.apache.tez.common.TezUtils.createUserPayloadFromConf(TezUtils.java:101)
> at org.apache.tez.dag.app.DAGAppMaster.serviceInit(DAGAppMaster.java:436)
> at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
> at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2600)
> at 
> java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
> at java.base/javax.security.auth.Subject.doAs(Subject.java:439)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
> at 
> org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2597)
> at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2384)
> 2024-04-18 16:27:54,741 [INFO] [shutdown-hook-0] |app.DAGAppMaster|: 
> DAGAppMasterShutdownHook invoked
> 2024-04-18 16:27:54,743 [INFO] [shutdown-hook-0] |service.AbstractService|: 
> Service org.apache.tez.dag.app.DAGAppMaster failed in state STOPPED
> java.lang.NullPointerException: Cannot invoke 
> "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
> "this.taskSchedulerManager" is null
> at org.apache.tez.dag.app.DAGAppMaster.initiateStop(DAGAppMaster.java:2111)
> at org.apache.tez.dag.app.DAGAppMaster.serviceStop(DAGAppMaster.java:2126)
> at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:220)
> at 
> org.apache.tez.dag.app.DAGAppMaster$DAGAppMasterShutdownHook.run(DAGAppMaster.java:2432)
> at 
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
> at 
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
> at java.base/java.lang.Thread.run(Thread.java:840)
> 2024-04-18 16:27:54,744 [WARN] [Thread-2] |util.ShutdownHookManager|: 
> ShutdownHook 'DAGAppMasterShutdownHook' failed, 
> java.util.concurrent.ExecutionException: java.lang.NullPointerException: 
> Cannot invoke "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" 
> because "this.taskSchedulerManager" is null
> java.util.concurrent.ExecutionException: java.lang.NullPointerException: 
> Cannot invoke "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" 
> because "this.taskSchedulerManager" is null
> at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
> at 
> org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
> at 
> org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)
> Caused by: java.lang.NullPointerException: Cannot invoke 
> "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
> "this.taskSchedulerManager" is null
> at org.apache.tez.dag.app.DAGAppMaster.initiateStop(DAGAppMast

[jira] [Created] (HADOOP-19174) Tez and hive jobs fail due to google's protobuf 2.5.0 in classpath

2024-05-14 Thread Bilwa S T (Jira)
Bilwa S T created HADOOP-19174:
--

 Summary: Tez and hive jobs fail due to google's protobuf 2.5.0 in 
classpath
 Key: HADOOP-19174
 URL: https://issues.apache.org/jira/browse/HADOOP-19174
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Bilwa S T
Assignee: Bilwa S T


There are two issues here:

1. We are running Tez 0.10.3, which uses Hadoop 3.3.6. Tez uses protobuf 
version 3.21.1.

Below is the exception we get. It is caused by protobuf-2.5.0 on our Hadoop 
classpath:

java.lang.IllegalAccessError: class 
org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto tried to access 
private field com.google.protobuf.AbstractMessage.memoizedSize 
(org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto and 
com.google.protobuf.AbstractMessage are in unnamed module of loader 'app')
at 
org.apache.tez.dag.api.records.DAGProtos$ConfigurationProto.getSerializedSize(DAGProtos.java:21636)
at 
com.google.protobuf.AbstractMessageLite.writeTo(AbstractMessageLite.java:75)
at org.apache.tez.common.TezUtils.writeConfInPB(TezUtils.java:170)
at org.apache.tez.common.TezUtils.createByteStringFromConf(TezUtils.java:83)
at 
org.apache.tez.common.TezUtils.createUserPayloadFromConf(TezUtils.java:101)
at org.apache.tez.dag.app.DAGAppMaster.serviceInit(DAGAppMaster.java:436)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at org.apache.tez.dag.app.DAGAppMaster$9.run(DAGAppMaster.java:2600)
at 
java.base/java.security.AccessController.doPrivileged(AccessController.java:712)
at java.base/javax.security.auth.Subject.doAs(Subject.java:439)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1899)
at 
org.apache.tez.dag.app.DAGAppMaster.initAndStartAppMaster(DAGAppMaster.java:2597)
at org.apache.tez.dag.app.DAGAppMaster.main(DAGAppMaster.java:2384)
2024-04-18 16:27:54,741 [INFO] [shutdown-hook-0] |app.DAGAppMaster|: 
DAGAppMasterShutdownHook invoked
2024-04-18 16:27:54,743 [INFO] [shutdown-hook-0] |service.AbstractService|: 
Service org.apache.tez.dag.app.DAGAppMaster failed in state STOPPED
java.lang.NullPointerException: Cannot invoke 
"org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
"this.taskSchedulerManager" is null
at org.apache.tez.dag.app.DAGAppMaster.initiateStop(DAGAppMaster.java:2111)
at org.apache.tez.dag.app.DAGAppMaster.serviceStop(DAGAppMaster.java:2126)
at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:220)
at 
org.apache.tez.dag.app.DAGAppMaster$DAGAppMasterShutdownHook.run(DAGAppMaster.java:2432)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
2024-04-18 16:27:54,744 [WARN] [Thread-2] |util.ShutdownHookManager|: 
ShutdownHook 'DAGAppMasterShutdownHook' failed, 
java.util.concurrent.ExecutionException: java.lang.NullPointerException: Cannot 
invoke "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
"this.taskSchedulerManager" is null
java.util.concurrent.ExecutionException: java.lang.NullPointerException: Cannot 
invoke "org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
"this.taskSchedulerManager" is null
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
at 
org.apache.hadoop.util.ShutdownHookManager.executeShutdown(ShutdownHookManager.java:124)
at 
org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:95)
Caused by: java.lang.NullPointerException: Cannot invoke 
"org.apache.tez.dag.app.rm.TaskSchedulerManager.initiateStop()" because 
"this.taskSchedulerManager" is null
at org.apache.tez.dag.app.DAGAppMaster.initiateStop(DAGAppMaster.java:2111)
at org.apache.tez.dag.app.DAGAppMaster.serviceStop(DAGAppMaster.java:2126)
at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:220)
at 
org.apache.tez.dag.app.DAGAppMaster$DAGAppMasterShutdownHook.run(DAGAppMaster.java:2432)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.ja

[jira] [Commented] (HADOOP-19172) Upgrade aws-java-sdk to 1.12.720

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846173#comment-17846173
 ] 

ASF GitHub Bot commented on HADOOP-19172:
-

hadoop-yetus commented on PR #6823:
URL: https://github.com/apache/hadoop/pull/6823#issuecomment-2109293902

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m 00s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m 00s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m 00s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  shellcheck  |   0m 01s |  |  Shellcheck was not available.  |
   | +0 :ok: |  shelldocs  |   0m 01s |  |  Shelldocs was not available.  |
   | +0 :ok: |  xmllint  |   0m 00s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m 01s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m 00s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 15s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  | 121m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  59m 46s |  |  trunk passed  |
   | -1 :x: |  mvnsite  |  36m 04s | 
[/branch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6823/1/artifact/out/branch-mvnsite-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  javadoc  |  24m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  | 435m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 06s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  | 109m 07s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  51m 05s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  51m 05s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m 01s |  |  The patch has no blanks 
issues.  |
   | -1 :x: |  mvnsite  |  30m 19s | 
[/patch-mvnsite-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6823/1/artifact/out/patch-mvnsite-root.txt)
 |  root in the patch failed.  |
   | +1 :green_heart: |  javadoc  |  22m 15s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  | 258m 04s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |  10m 27s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 882m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6823 |
   | Optional Tests | dupname asflicense codespell detsecrets shellcheck 
shelldocs compile javac javadoc mvninstall mvnsite unit shadedclient xmllint |
   | uname | MINGW64_NT-10.0-17763 4becb5fa56c6 3.4.10-87d57229.x86_64 
2024-02-14 20:17 UTC x86_64 Msys |
   | Build tool | maven |
   | Personality | /c/hadoop/dev-support/bin/hadoop.sh |
   | git revision | trunk / 0bf5068a0c6e0b9b73699e738caf0cc1a1656e6a |
   | Default Java | Azul Systems, Inc.-1.8.0_332-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6823/1/testReport/
 |
   | modules | C: hadoop-project . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch-windows-10/job/PR-6823/1/console
 |
   | versions | git=2.44.0.windows.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Upgrade aws-java-sdk to 1.12.720
> 
>
> Key: HADOOP-19172
> URL: https://issues.apache.org/jira/browse/HADOOP-19172
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, fs/s3
>Affects Versions: 3.4.0, 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> Update to the latest AWS SDK, to stop anyone worrying about the ion library 
> CVE https://nvd.nist.gov/vuln/detail/CVE-2024-21634
> This isn't exposed in the s3a client, but may be used downstream. 
> On v2 SDK releases, the v1 SDK is only used during builds; on 3.3.x it is 
> shipped.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

--

[jira] [Commented] (HADOOP-19156) ZooKeeper based state stores use different ZK address configs

2024-05-13 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846162#comment-17846162
 ] 

Xiaoqiao He commented on HADOOP-19156:
--

Add [~inVisible] to the contributor list and assign this ticket to them.

> ZooKeeper based state stores use different ZK address configs
> -
>
> Key: HADOOP-19156
> URL: https://issues.apache.org/jira/browse/HADOOP-19156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: liu bin
>Assignee: liu bin
>Priority: Major
>  Labels: pull-request-available
>
> Currently, the ZooKeeper-based state stores of RM, YARN Federation, and HDFS 
> Federation use the same ZK address config {{hadoop.zk.address}}. But in our 
> production environment, we hope that different services can use different ZK 
> clusters to avoid mutual interference.
> This jira adds separate ZK address configs for each service, as sketched below.
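
A hedged sketch of how per-service resolution could behave; the 
service-specific key name below is hypothetical, and only {{hadoop.zk.address}} 
is the existing shared config:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class ZkAddressResolver {
  // Prefer a service-specific key when it is set, otherwise fall back to
  // the shared hadoop.zk.address, preserving today's behaviour.
  public static String zkAddressFor(Configuration conf, String serviceKey) {
    return conf.get(serviceKey, conf.get("hadoop.zk.address"));
  }
}

// Usage, with a hypothetical key name:
//   String rmZk = ZkAddressResolver.zkAddressFor(conf, "yarn.resourcemanager.zk-address");
{code}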



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-19156) ZooKeeper based state stores use different ZK address configs

2024-05-13 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He reassigned HADOOP-19156:


Assignee: liu bin

> ZooKeeper based state stores use different ZK address configs
> -
>
> Key: HADOOP-19156
> URL: https://issues.apache.org/jira/browse/HADOOP-19156
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: liu bin
>Assignee: liu bin
>Priority: Major
>  Labels: pull-request-available
>
> Currently, the ZooKeeper-based state stores of RM, YARN Federation, and HDFS 
> Federation use the same ZK address config {{hadoop.zk.address}}. But in our 
> production environment, we hope that different services can use different ZK 
> clusters to avoid mutual interference.
> This jira adds separate ZK address configs for each service.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18851) Performance improvement for DelegationTokenSecretManager.

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18851?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846160#comment-17846160
 ] 

ASF GitHub Bot commented on HADOOP-18851:
-

Hexiaoqiao commented on PR #6803:
URL: https://github.com/apache/hadoop/pull/6803#issuecomment-2109216164

   > the mvnsite failure is, looks like not relevant, but other merged MR seems 
doesn't have this failure.
   
   Hi @ChenSammi, @vikaskr22, the `mvnsite` failure is not related to this PR; 
it has been followed up in another thread.




> Performance improvement for DelegationTokenSecretManager.
> -
>
> Key: HADOOP-18851
> URL: https://issues.apache.org/jira/browse/HADOOP-18851
> Project: Hadoop Common
>  Issue Type: Task
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Vikas Kumar
>Assignee: Vikas Kumar
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: 
> 0001-HADOOP-18851-Perfm-improvement-for-ZKDT-management.patch, Screenshot 
> 2023-08-16 at 5.36.57 PM.png
>
>
> *Context:*
> KMS depends on hadoop-common for DT management. Recently we were analysing 
> a performance issue, and the following are our findings:
>  # Around 96% (196 out of 200) KMS container threads were in BLOCKED state at 
> following:
>  ## *AbstractDelegationTokenSecretManager.verifyToken()*
>  ## *AbstractDelegationTokenSecretManager.createPassword()* 
>  # And then process crashed.
>  
> {code:java}
> http-nio-9292-exec-200PRIORITY : 5THREAD ID : 0X7F075C157800NATIVE ID : 
> 0X2C87FNATIVE ID (DECIMAL) : 182399STATE : BLOCKED
> stackTrace:
> java.lang.Thread.State: BLOCKED (on object monitor)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.verifyToken(AbstractDelegationTokenSecretManager.java:474)
> - waiting to lock <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.verifyToken(DelegationTokenManager.java:213)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:396)
> at  {code}
> All the 199 out of 200 were blocked at above point.
> And the lock they are waiting for is acquired by a thread that was trying to 
> createPassword and publishing the same on ZK.
>  
> {code:java}
> stackTrace:
> java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> at java.lang.Object.wait(Object.java:502)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1598)
> - locked <0x000749263ec0> (a org.apache.zookeeper.ClientCnxn$Packet)
> at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1570)
> at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:2235)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:398)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl$7.call(SetDataBuilderImpl.java:385)
> at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:93)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:382)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:358)
> at 
> org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:36)
> at 
> org.apache.curator.framework.recipes.shared.SharedValue.trySetValue(SharedValue.java:201)
> at 
> org.apache.curator.framework.recipes.shared.SharedCount.trySetCount(SharedCount.java:116)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrSharedCount(ZKDelegationTokenSecretManager.java:586)
> at 
> org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.incrementDelegationTokenSeqNum(ZKDelegationTokenSecretManager.java:601)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:402)
> - locked <0x0005f2f545e8> (a 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager$ZKSecretManager)
> at 
> org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.createPassword(AbstractDelegationTokenSecretManager.java:48)
> at org.apache.hadoop.security.token.Token.(Token.java:67)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenManager.createToken(DelegationT

[jira] [Resolved] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-13 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-19170.
--
Fix Version/s: 3.5.0
   Resolution: Fixed

> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: OS:  macOS Catalina 10.15.7
> compiler: clang 12.0.0
> cmake: 3.24.0
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> When I build hadoop-common native in Mac OS, I found this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that Mac OS does not support glibc. And C conditional 
> compilation requires validation of all expressions.
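
A portable pattern, sketched here as an assumption about the shape of the fix 
rather than the committed patch, is to test that the macro exists before 
expanding it, since the preprocessor expands every macro in an #if expression 
regardless of short-circuit operators:

{code:c}
/* Hypothetical guard macro name: __GLIBC_PREREQ is only expanded on glibc
 * systems, so clang on macOS never sees the undefined function-like macro. */
#if defined(__sun)
  #define USE_THREADSAFE_STRERROR 1
#elif defined(__GLIBC_PREREQ)
  #if __GLIBC_PREREQ(2, 32)
    #define USE_THREADSAFE_STRERROR 1
  #endif
#endif
{code}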



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846145#comment-17846145
 ] 

ASF GitHub Bot commented on HADOOP-19170:
-

jojochuang merged PR #6822:
URL: https://github.com/apache/hadoop/pull/6822




> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: OS:  macOS Catalina 10.15.7
> compiler: clang 12.0.0
> cmake: 3.24.0
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
>
> When I build hadoop-common native in Mac OS, I found this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that Mac OS does not support glibc. And C conditional 
> compilation requires validation of all expressions.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-13 Thread Chenyu Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846134#comment-17846134
 ] 

Chenyu Zheng commented on HADOOP-19170:
---

[~weichiu] I have added the environment details. Any OS that does not ship 
glibc may fail to compile. In fact, I only use a Mac for development; the OS of 
our production systems is CentOS 7. To me, compiling on Mac just makes 
development and testing easier. I discovered this problem while testing the 
issue described in HDFS-17521.
 

> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: OS:  macOS Catalina 10.15.7
> compiler: clang 12.0.0
> cmake: 3.24.0
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
>
> When I build hadoop-common native in Mac OS, I found this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that Mac OS does not support glibc. And C conditional 
> compilation requires validation of all expressions.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-13 Thread Chenyu Zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chenyu Zheng updated HADOOP-19170:
--
Environment: 
OS:  macOS Catalina 10.15.7

compiler: clang 12.0.0

cmake: 3.24.0

> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
> Environment: OS:  macOS Catalina 10.15.7
> compiler: clang 12.0.0
> cmake: 3.24.0
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
>
> When I build hadoop-common native in Mac OS, I found this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that Mac OS does not support glibc. And C conditional 
> compilation requires validation of all expressions.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19167) Change of Codec configuration does not work

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846126#comment-17846126
 ] 

ASF GitHub Bot commented on HADOOP-19167:
-

skyskyhu commented on PR #6807:
URL: https://github.com/apache/hadoop/pull/6807#issuecomment-2109118290

   @ferhui @adoroszlai @goiri Can you help review and merge the commit when you 
have free time? Thanks a lot.




> Change of Codec configuration does not work
> ---
>
> Key: HADOOP-19167
> URL: https://issues.apache.org/jira/browse/HADOOP-19167
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: compress
>Reporter: Zhikai Hu
>Priority: Minor
>  Labels: pull-request-available
>
> In one of my projects, I need to dynamically adjust the compression level for 
> different files. 
> However, I found that in most cases the new compression level does not take 
> effect as expected; the old compression level continues to be used.
> Here is the relevant code snippet:
> ZStandardCodec zStandardCodec = new ZStandardCodec();
> zStandardCodec.setConf(conf);
> conf.set("io.compression.codec.zstd.level", "5"); // level may change dynamically
> conf.set("io.compression.codec.zstd", zStandardCodec.getClass().getName());
> writer = SequenceFile.createWriter(conf,
>     SequenceFile.Writer.file(sequenceFilePath),
>     SequenceFile.Writer.keyClass(LongWritable.class),
>     SequenceFile.Writer.valueClass(BytesWritable.class),
>     SequenceFile.Writer.compression(CompressionType.BLOCK));
> The reason is that SequenceFile.Writer.init() calls 
> CodecPool.getCompressor(codec, null) to get a compressor. 
> If the compressor is a reused instance, the conf is not applied because it is 
> passed as null:
> public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
>   Compressor compressor = borrow(compressorPool, codec.getCompressorType());
>   if (compressor == null) {
>     compressor = codec.createCompressor();
>     LOG.info("Got brand-new compressor [" + codec.getDefaultExtension() + "]");
>   } else {
>     compressor.reinit(conf);   // conf is null here
>     ..
>  
> Please also refer to my unit test to reproduce the bug. 
> To address this bug, I modified the code to ensure that the configuration is 
> read back from the codec when a compressor is reused.
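
A minimal sketch of the fix the reporter describes, assuming the reused 
compressor should fall back to the codec's own configuration when the caller 
passes none; borrow(), compressorPool and LOG are as quoted above, and the 
Configurable check is illustrative rather than the committed patch:

{code:java}
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.Compressor;

public static Compressor getCompressor(CompressionCodec codec, Configuration conf) {
  Compressor compressor = borrow(compressorPool, codec.getCompressorType());
  if (compressor == null) {
    compressor = codec.createCompressor();
    LOG.info("Got brand-new compressor [" + codec.getDefaultExtension() + "]");
  } else {
    if (conf == null && codec instanceof Configurable) {
      // Re-read the configuration from the codec so a reused compressor
      // picks up the codec's current settings (e.g. a changed zstd level).
      conf = ((Configurable) codec).getConf();
    }
    compressor.reinit(conf);
  }
  return compressor;
}
{code}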



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19073) WASB: Fix connection leak in FolderRenamePending

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846110#comment-17846110
 ] 

ASF GitHub Bot commented on HADOOP-19073:
-

xuzifu666 commented on PR #6534:
URL: https://github.com/apache/hadoop/pull/6534#issuecomment-2109009003

   > @xuzifu666 before I merge, what name do you want to be credited with in 
the patch...your github account doesn't have one.
   
   Thanks for your review, do you mean my real name? My name can be credited as 
xuyu. @steveloughran 




> WASB: Fix connection leak in FolderRenamePending
> 
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> Fix connection leak in FolderRenamePending in getting bytes  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19013) fs.getXattrs(path) for S3FS doesn't have x-amz-server-side-encryption-aws-kms-key-id header.

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846102#comment-17846102
 ] 

ASF GitHub Bot commented on HADOOP-19013:
-

hadoop-yetus commented on PR #6646:
URL: https://github.com/apache/hadoop/pull/6646#issuecomment-2108903980

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6646/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 5 new + 2 unchanged - 0 fixed 
= 7 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 56s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 129m  2s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6646/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6646 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5cd162906b33 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 
09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b195cd477638494ab85e6925c02eea628d0b18b0 |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6646/3/testReport/ |
   | Max. process+thread count | 699 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6646

[jira] [Commented] (HADOOP-19152) Do not hard code security providers.

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846074#comment-17846074
 ] 

ASF GitHub Bot commented on HADOOP-19152:
-

hadoop-yetus commented on PR #6739:
URL: https://github.com/apache/hadoop/pull/6739#issuecomment-2108804028

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   8m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   8m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  javac  |   8m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 35s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6739/9/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 1 new + 131 
unchanged - 0 fixed = 132 total (was 131)  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06  |
   | +1 :green_heart: |  spotbugs  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 29s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 137m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6739/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6739 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 0da00eeb249a 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 
09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 20b511efc294d7d76b8ec6f3ae533a65e200d0ab |
   | Default Java | Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6739/9/testReport/ |
   | Max. process+thread count | 1276 (vs. ulimit of 5500) |
   | modules | C: hadoop-common

[jira] [Commented] (HADOOP-19154) upgrade bouncy castle to 1.78.1 due to CVEs

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846055#comment-17846055
 ] 

ASF GitHub Bot commented on HADOOP-19154:
-

vinayakumarb commented on PR #6755:
URL: https://github.com/apache/hadoop/pull/6755#issuecomment-2108689940

   > This seems to have gone in as #6811.
   > 
   > I'm going to propose rolling back #6811 and merging this one instead as it 
has a jira ID, goes to a later version and updates the LICENSE file
   
   Apologies for pushing it early. Thanks @pjfanning for addressing the 
LICENSE-binary issue.




> upgrade bouncy castle to 1.78.1 due to CVEs
> ---
>
> Key: HADOOP-19154
> URL: https://issues.apache.org/jira/browse/HADOOP-19154
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0, 3.3.6
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> [https://www.bouncycastle.org/releasenotes.html#r1rv78]
> There is a v1.78.1 release but no notes for it yet.
> For v1.78
> h3. 2.1.5 Security Advisories.
> Release 1.78 deals with the following CVEs:
>  * CVE-2024-29857 - Importing an EC certificate with specially crafted F2m 
> parameters can cause high CPU usage during parameter evaluation.
>  * CVE-2024-30171 - Possible timing-based leakage in RSA-based handshakes due 
> to exception processing has been eliminated.
>  * CVE-2024-30172 - Crafted signature and public key can be used to trigger 
> an infinite loop in the Ed25519 verification code.
>  * CVE-2024-301XX - When endpoint identification is enabled and an SSL socket 
> is not created with an explicit hostname (as happens with 
> HttpsURLConnection), hostname verification could be performed against a 
> DNS-resolved IP address. This has been fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18679) Add API for bulk/paged object deletion

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846052#comment-17846052
 ] 

ASF GitHub Bot commented on HADOOP-18679:
-

steveloughran commented on PR #6726:
URL: https://github.com/apache/hadoop/pull/6726#issuecomment-2108677588

   Mukund, if you can do those naming changes, then I'm +1




> Add API for bulk/paged object deletion
> --
>
> Key: HADOOP-18679
> URL: https://issues.apache.org/jira/browse/HADOOP-18679
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> Iceberg and HBase could benefit from being able to give a list of individual 
> files to delete (files which may be scattered around the bucket) for better 
> read performance. 
> Add a new optional interface for an object store which allows a caller to 
> submit a list of paths to files to delete (sketched below), where the 
> expectation is:
> * if a path is a file: delete
> * if a path is a dir, outcome undefined
> For S3, that would let us build these into DeleteRequest objects and submit 
> them without any probes first.
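
As a concrete illustration of the description above, here is a minimal sketch 
of such an optional interface. The name BulkDeleteSketch and its single method 
are hypothetical, chosen only to mirror the Jira text, not the API that was 
eventually proposed in the PR.

{code:java}
import java.io.IOException;
import java.util.Collection;

import org.apache.hadoop.fs.Path;

/**
 * Hypothetical sketch of an optional bulk-delete capability for an object
 * store. Every entry is expected to be a file; passing a directory has an
 * undefined outcome, mirroring the expectations listed above.
 */
public interface BulkDeleteSketch {

  /** Delete each file in the collection, without any existence probes. */
  void bulkDelete(Collection<Path> files) throws IOException;
}
{code}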



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18786) Hadoop build depends on archives.apache.org

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846044#comment-17846044
 ] 

ASF GitHub Bot commented on HADOOP-18786:
-

ctubbsii commented on PR #5789:
URL: https://github.com/apache/hadoop/pull/5789#issuecomment-2108580048

   > The main thing I want to be sure of is: from this build, what gets into 
the distro? Only stuff from the Maven repo, right? That is: this PR MUST NOT 
force updates in the binaries we ship.
   
   I don't quite understand the question. The premise seems to be that the 
current build only grabs artifacts from the Maven repo. That's not true, and 
it's part of the problem: the build also grabs artifacts from the archives, 
and those are the URLs this PR changes... to use the ASF CDN instead of the 
archives. The only change that might affect the distro is that a couple of 
tools no longer have the pinned version available in the CDN, so a version 
bump was necessary to grab them from the CDN instead of the archives. I don't 
know whether those affect the binaries in the distro either, or whether they 
are only used as unshipped build tools. But even if this does change the 
binaries in some way, the current situation of automatically downloading from 
the ASF archives cannot continue: it makes offline builds very hard, and 
routine downloads from the archives trigger automated bans by ASF services, 
because the archives aren't meant to be used this way (for routine builds).




> Hadoop build depends on archives.apache.org
> ---
>
> Key: HADOOP-18786
> URL: https://issues.apache.org/jira/browse/HADOOP-18786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Christopher Tubbs
>Priority: Critical
>  Labels: pull-request-available
>
> Several times throughout Hadoop's source, the ASF archive is referenced, 
> including part of the build that downloads Yetus.
> Building a release from source should not require access to the ASF archives, 
> as that contributes to end users being subject to throttling and blocking by 
> INFRA, for "abuse" of the archives, even though they are merely building a 
> current ASF release from source. This is particularly problematic for 
> downstream packagers who must build from Hadoop's source, or for CI/CD 
> situations that depend on Hadoop's source, and particularly problematic for 
> those end users behind a NAT gateway, because even if Hadoop's use of the 
> archive is modest, it adds up for multiple users.
> The build should be modified, so that it does not require access to fixed 
> versions in the archives (or should work with the upstream of those dependent 
> projects to publish their releases elsewhere, for routine consumptions). In 
> the interim, the source could be updated to point to the current dependency 
> versions available on downloads.apache.org.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19152) Do not hard code security providers.

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846039#comment-17846039
 ] 

ASF GitHub Bot commented on HADOOP-19152:
-

szetszwo commented on code in PR #6739:
URL: https://github.com/apache/hadoop/pull/6739#discussion_r1598897001


##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/TestCryptoUtils.java:
##
@@ -0,0 +1,86 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.crypto;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.GenericTestUtils;
+import org.bouncycastle.jce.provider.BouncyCastleProvider;
+import org.junit.Assert;
+import org.junit.Test;
+import org.slf4j.event.Level;
+
+import java.security.Provider;
+import java.security.Security;
+
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_CRYPTO_JCE_PROVIDER_AUTO_ADD_DEFAULT;
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_CRYPTO_JCE_PROVIDER_AUTO_ADD_KEY;
+import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.HADOOP_SECURITY_CRYPTO_JCE_PROVIDER_KEY;
+
+/** Test {@link CryptoUtils}. */
+public class TestCryptoUtils {

Review Comment:
   The default timeout of 100s is too long for this. The other settings may 
not be useful here, since this is just a very simple test.
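
   For readers following along, here is a minimal sketch of how a test class 
can opt into a much shorter per-test timeout with JUnit 4, which is what this 
review comment is asking for. The class name and the 10-second budget are 
illustrative assumptions, not the values adopted in the PR.

{code:java}
import org.junit.Assert;
import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.Timeout;

public class TestTimeoutSketch {

  /** Illustrative: fail any test in this class that runs longer than 10s. */
  @Rule
  public Timeout perTestTimeout = Timeout.seconds(10);

  @Test
  public void testTrivial() {
    // A very simple test, which is exactly why a 100s timeout is overkill.
    Assert.assertEquals(4, 2 + 2);
  }
}
{code}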





> Do not hard code security providers.
> 
>
> Key: HADOOP-19152
> URL: https://issues.apache.org/jira/browse/HADOOP-19152
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Tsz-wo Sze
>Assignee: Tsz-wo Sze
>Priority: Major
>  Labels: pull-request-available
>
> In order to support different security providers in different clusters, we 
> should not hard code a provider in our code.
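
A minimal sketch of the idea, assuming the goal is simply to look a provider 
up by a configured name rather than instantiating a hard-coded class; the 
class and method names here are illustrative, not the patch's actual code.

{code:java}
import java.security.Provider;
import java.security.Security;

public final class ProviderLookupSketch {

  private ProviderLookupSketch() {
  }

  /**
   * Resolve a JCE provider by its configured name instead of hard coding
   * one (e.g. BouncyCastleProvider) in the source.
   */
  public static Provider resolve(String configuredName) {
    final Provider provider = Security.getProvider(configuredName);
    if (provider == null) {
      throw new IllegalArgumentException(
          "No such JCE provider registered: " + configuredName);
    }
    return provider;
  }

  public static void main(String[] args) {
    // "SunJCE" ships with every JDK, so this lookup should succeed.
    System.out.println(resolve("SunJCE").getInfo());
  }
}
{code}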



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19073) WASB: Fix connection leak in FolderRenamePending

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846026#comment-17846026
 ] 

ASF GitHub Bot commented on HADOOP-19073:
-

steveloughran commented on PR #6534:
URL: https://github.com/apache/hadoop/pull/6534#issuecomment-2108366767

   @xuzifu666 before I merge, what name do you want to be credited with in the 
patch? Your GitHub account doesn't have one.




> WASB: Fix connection leak in FolderRenamePending
> 
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> Fix a connection leak in FolderRenamePending when getting bytes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19073) WASB: Fix connection leak in FolderRenamePending

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846025#comment-17846025
 ] 

ASF GitHub Bot commented on HADOOP-19073:
-

steveloughran commented on PR #6534:
URL: https://github.com/apache/hadoop/pull/6534#issuecomment-2108355675

   I've been away for a week and am catching up, so not tested yet, sorry...




> WASB: Fix connection leak in FolderRenamePending
> 
>
> Key: HADOOP-19073
> URL: https://issues.apache.org/jira/browse/HADOOP-19073
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.6
>Reporter: xy
>Priority: Major
>  Labels: pull-request-available
>
> Fix a connection leak in FolderRenamePending when getting bytes.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18786) Hadoop build depends on archives.apache.org

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846024#comment-17846024
 ] 

ASF GitHub Bot commented on HADOOP-18786:
-

steveloughran commented on PR #5789:
URL: https://github.com/apache/hadoop/pull/5789#issuecomment-2108352558

   The main thing I want to be sure of is: from this build, what gets into the 
distro? Only stuff from the Maven repo, right? That is: this PR MUST NOT force 
updates in the binaries we ship.




> Hadoop build depends on archives.apache.org
> ---
>
> Key: HADOOP-18786
> URL: https://issues.apache.org/jira/browse/HADOOP-18786
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Christopher Tubbs
>Priority: Critical
>  Labels: pull-request-available
>
> Several times throughout Hadoop's source, the ASF archive is referenced, 
> including part of the build that downloads Yetus.
> Building a release from source should not require access to the ASF archives, 
> as that contributes to end users being subject to throttling and blocking by 
> INFRA, for "abuse" of the archives, even though they are merely building a 
> current ASF release from source. This is particularly problematic for 
> downstream packagers who must build from Hadoop's source, or for CI/CD 
> situations that depend on Hadoop's source, and particularly problematic for 
> those end users behind a NAT gateway, because even if Hadoop's use of the 
> archive is modest, it adds up for multiple users.
> The build should be modified, so that it does not require access to fixed 
> versions in the archives (or should work with the upstream of those dependent 
> projects to publish their releases elsewhere, for routine consumptions). In 
> the interim, the source could be updated to point to the current dependency 
> versions available on downloads.apache.org.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19161) S3A: option "fs.s3a.performance.flags" to take list of performance flags

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846023#comment-17846023
 ] 

ASF GitHub Bot commented on HADOOP-19161:
-

steveloughran commented on PR #6789:
URL: https://github.com/apache/hadoop/pull/6789#issuecomment-2108338924

   I have a better design for this; changing this to draft.
   
   Proposed: we have a `Configuration.getEnumOptions(Enum x, boolean 
failIfUnknown)` which returns an EnumSet of all values of the enum class whose 
valueOf() matches an entry in the CSV list (with some mapping, such as case 
conversion and mapping `-` and `.` to `_`).
   
   This makes it trivial to reuse/process. The implementation would live 
outside the actual Configuration class, to make it easy for AbfsConfiguration 
to use too.
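
   A minimal sketch of what such a helper could look like; the exact 
signature, normalization rules, and error handling below are assumptions based 
only on the comment above, not the code that was eventually written.

{code:java}
import java.util.EnumSet;
import java.util.Locale;

public final class EnumOptionsSketch {

  private EnumOptionsSketch() {
  }

  /**
   * Hypothetical sketch: map each entry of a CSV list onto an enum
   * constant, normalizing case and mapping '-' and '.' to '_'.
   */
  public static <E extends Enum<E>> EnumSet<E> getEnumOptions(
      Class<E> enumClass, String csv, boolean failIfUnknown) {
    final EnumSet<E> result = EnumSet.noneOf(enumClass);
    for (String entry : csv.split(",")) {
      final String name = entry.trim()
          .toUpperCase(Locale.ROOT)
          .replace('-', '_')
          .replace('.', '_');
      if (name.isEmpty()) {
        continue;
      }
      try {
        result.add(Enum.valueOf(enumClass, name));
      } catch (IllegalArgumentException e) {
        if (failIfUnknown) {
          throw new IllegalArgumentException("Unknown option: " + entry, e);
        }
        // otherwise: silently skip unrecognized entries
      }
    }
    return result;
  }
}
{code}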




> S3A: option "fs.s3a.performance.flags" to take list of performance flags
> 
>
> Key: HADOOP-19161
> URL: https://issues.apache.org/jira/browse/HADOOP-19161
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> HADOOP-19072 shows we want to add more optimisations than that of 
> HADOOP-18930.
> * Extending the new optimisations to the existing option is brittle
> * Adding explicit options for each feature gets complex fast.
> Proposed (see the sketch below):
> * A new class S3APerformanceFlags keeps all the flags
> * it builds this from a string[] of values, which can be extracted from 
> getConf(),
> * and it can also support a "*" option to mean "everything"
> * this class can also be handed off to hasPathCapability() and do the right 
> thing.
> Proposed optimisations:
> * create file (we will hook up HADOOP-18930)
> * mkdir (HADOOP-19072)
> * delete (probe for parent path)
> * rename (probe for source path)
> We could think of more, with different names, later.
> The goal is to make it possible to strip out every HTTP request we do for 
> safety/posix compliance, so applications have the option of turning off what 
> they don't need.
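
A minimal sketch of the flag-holder idea described above, under the assumption 
that it is essentially an EnumSet wrapper with a "*" wildcard; the class and 
enum names are illustrative only, not the committed S3APerformanceFlags code.

{code:java}
import java.util.EnumSet;
import java.util.Locale;

/** Illustrative flag holder; not the class that was actually committed. */
public final class PerformanceFlagsSketch {

  /** One flag per proposed optimisation. */
  public enum Flag { CREATE, MKDIR, DELETE, RENAME }

  private final EnumSet<Flag> flags;

  private PerformanceFlagsSketch(EnumSet<Flag> flags) {
    this.flags = flags;
  }

  /** Build from configured values; a "*" entry enables everything. */
  public static PerformanceFlagsSketch parse(String... values) {
    final EnumSet<Flag> set = EnumSet.noneOf(Flag.class);
    for (String value : values) {
      final String v = value.trim();
      if ("*".equals(v)) {
        return new PerformanceFlagsSketch(EnumSet.allOf(Flag.class));
      }
      set.add(Flag.valueOf(v.toUpperCase(Locale.ROOT)));
    }
    return new PerformanceFlagsSketch(set);
  }

  /** Would back hasPathCapability() style probes. */
  public boolean enabled(Flag flag) {
    return flags.contains(flag);
  }
}
{code}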



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-13 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846022#comment-17846022
 ] 

Wei-Chiu Chuang edited comment on HADOOP-19170 at 5/13/24 5:25 PM:
---

[~zhengchenyu] thanks for reporting the issue. Can you add your system details 
(e.g. macOS version)?
We don't test macOS as a first-class supported OS, but IIRC it was working a 
few years ago.

Looks like HADOOP-17569 may have broken it.


was (Author: jojochuang):
[~zhengchenyu] thanks for reporting the issue. Can you add your system details? 
e.g. Mac OS version.
We don't test Mac as a first class supported OS, but IIRC it was working a few 
years ago.

> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
>
> When I build the hadoop-common native code on macOS, I get this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that macOS does not provide glibc, and the C preprocessor must 
> be able to parse the entire #if expression, so the guarded 
> __GLIBC_PREREQ(2, 32) call is an error when that macro is undefined, even 
> though defined(__GLIBC_PREREQ) evaluates to false.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-13 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846022#comment-17846022
 ] 

Wei-Chiu Chuang edited comment on HADOOP-19170 at 5/13/24 5:23 PM:
---

[~zhengchenyu] thanks for reporting the issue. Can you add your system details 
(e.g. macOS version)?
We don't test macOS as a first-class supported OS, but IIRC it was working a 
few years ago.


was (Author: jojochuang):
[~zhengchenyu] thanks for reporting the issue. Can you add your system details? 
e.g. Mac OS version

> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
>
> When I build the hadoop-common native code on macOS, I get this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that macOS does not provide glibc, and the C preprocessor must 
> be able to parse the entire #if expression, so the guarded 
> __GLIBC_PREREQ(2, 32) call is an error when that macro is undefined, even 
> though defined(__GLIBC_PREREQ) evaluates to false.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-13 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846022#comment-17846022
 ] 

Wei-Chiu Chuang commented on HADOOP-19170:
--

[~zhengchenyu] thanks for reporting the issue. Can you add your system details 
(e.g. macOS version)?

> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
>
> When I build the hadoop-common native code on macOS, I get this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that macOS does not provide glibc, and the C preprocessor must 
> be able to parse the entire #if expression, so the guarded 
> __GLIBC_PREREQ(2, 32) call is an error when that macro is undefined, even 
> though defined(__GLIBC_PREREQ) evaluates to false.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19170) Fixes compilation issues on Mac

2024-05-13 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-19170:
-
Summary: Fixes compilation issues on Mac  (was: Fixes compilation issues on 
non-Linux systems)

> Fixes compilation issues on Mac
> ---
>
> Key: HADOOP-19170
> URL: https://issues.apache.org/jira/browse/HADOOP-19170
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chenyu Zheng
>Assignee: Chenyu Zheng
>Priority: Major
>  Labels: pull-request-available
>
> When I build the hadoop-common native code on macOS, I get this error:
> {code:java}
> /x/hadoop/hadoop-common-project/hadoop-common/src/main/native/src/exception.c:114:50:
>  error: function-like macro '__GLIBC_PREREQ' is not defined
> #if defined(__sun) || defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 32) {code}
> The reason is that macOS does not provide glibc, and the C preprocessor must 
> be able to parse the entire #if expression, so the guarded 
> __GLIBC_PREREQ(2, 32) call is an error when that macro is undefined, even 
> though defined(__GLIBC_PREREQ) evaluates to false.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19165) Explore dropping protobuf 2.5.0 from the distro

2024-05-13 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846021#comment-17846021
 ] 

Steve Loughran commented on HADOOP-19165:
-

Relates to HADOOP-18487, where I tried to do most of this, but still couldn't 
stop it cropping up in YARN. 

> Explore dropping protobuf 2.5.0 from the distro
> ---
>
> Key: HADOOP-19165
> URL: https://issues.apache.org/jira/browse/HADOOP-19165
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Priority: Major
>
> Explore whether protobuf-2.5.0 can be dropped from the distro. It is a 
> transitive dependency from HBase, but HBase doesn't use it in the code.
> Check whether it is the only thing pulling it into the distro and whether 
> anything breaks if we exclude it; if nothing does, let's get rid of it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19013) fs.getXattrs(path) for S3FS doesn't have x-amz-server-side-encryption-aws-kms-key-id header.

2024-05-13 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846016#comment-17846016
 ] 

ASF GitHub Bot commented on HADOOP-19013:
-

steveloughran commented on code in PR #6646:
URL: https://github.com/apache/hadoop/pull/6646#discussion_r1598785291


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/EncryptionTestUtils.java:
##
@@ -19,7 +19,13 @@
 package org.apache.hadoop.fs.s3a;
 
 import java.io.IOException;
+import java.nio.charset.StandardCharsets;
+import java.util.Map;
+import java.util.Optional;
 
+import org.apache.hadoop.fs.s3a.impl.HeaderProcessing;
+import org.apache.hadoop.io.IOUtils;

Review Comment:
   Move these to the org.apache import block.
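
   For context, a sketch of the import grouping being referenced, as I 
understand the convention (java/javax imports first, other third-party imports 
next, org.apache imports last); the file skeleton below is purely illustrative.

{code:java}
package org.apache.hadoop.fs.s3a;

// java.* imports come first
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.Optional;

// (any non-ASF third-party imports would sit in a block here)

// org.apache imports last, which is where the two new imports belong
import org.apache.hadoop.fs.s3a.impl.HeaderProcessing;
import org.apache.hadoop.io.IOUtils;

/** Empty skeleton; only the import ordering above matters. */
final class ImportOrderSketch {
  private ImportOrderSketch() {
  }
}
{code}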





> fs.getXattrs(path) for S3FS doesn't have 
> x-amz-server-side-encryption-aws-kms-key-id header.
> 
>
> Key: HADOOP-19013
> URL: https://issues.apache.org/jira/browse/HADOOP-19013
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>
> Once a file has been uploaded encrypted with SSE-KMS using a key id, and we 
> later read the attributes of the same file, the result doesn't contain the 
> key id information as an attribute. Should we add it?
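
A small sketch of the probe in question, using the standard 
FileSystem.getXAttrs() call; the exact XAttr key that S3A would use for the 
KMS key id header is an assumption here, not something confirmed by this 
thread.

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class XAttrKmsKeyProbe {

  private XAttrKmsKeyProbe() {
  }

  public static void main(String[] args) throws Exception {
    final Path path = new Path(args[0]);
    final FileSystem fs = path.getFileSystem(new Configuration());

    // S3A surfaces object headers as XAttrs; the key name below is an
    // assumption about how the KMS key id header would be exposed.
    final Map<String, byte[]> attrs = fs.getXAttrs(path);
    final byte[] keyId =
        attrs.get("header.x-amz-server-side-encryption-aws-kms-key-id");
    System.out.println(keyId == null
        ? "KMS key id header not present in the XAttrs"
        : new String(keyId, StandardCharsets.UTF_8));
  }
}
{code}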



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


