[jira] [Commented] (HADOOP-19098) Vector IO: consistent specified rejection of overlapping ranges

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824907#comment-17824907
 ] 

ASF GitHub Bot commented on HADOOP-19098:
-

hadoop-yetus commented on PR #6604:
URL: https://github.com/apache/hadoop/pull/6604#issuecomment-1986741848

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 43s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 10s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 39s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/4/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  33m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 53s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  15m 55s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/4/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   4m 18s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/4/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 10 new + 58 unchanged - 0 fixed = 68 total (was 
58)  |
   | +1 :green_heart: |  mvnsite  |   5m  5s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 14s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/4/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | +1 :green_heart: |  javadoc  |   4m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   9m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  19m 35s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/4/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  | 272m  1s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 30s |  |  

Re: [PR] HADOOP-19098 Vector IO: consistent specified rejection of overlapping ranges [hadoop]

2024-03-08 Thread via GitHub


hadoop-yetus commented on PR #6604:
URL: https://github.com/apache/hadoop/pull/6604#issuecomment-1986741848

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 43s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m 47s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 10s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   4m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   2m 39s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/4/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  33m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 53s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 32s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |  16m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  15m 55s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/4/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   4m 18s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/4/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 10 new + 58 unchanged - 0 fixed = 68 total (was 
58)  |
   | +1 :green_heart: |  mvnsite  |   5m  5s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m 14s | 
[/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/4/artifact/out/patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-common in the patch failed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.  |
   | +1 :green_heart: |  javadoc  |   4m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   9m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  33m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  19m 35s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6604/4/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  | 272m  1s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 30s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 40s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 12s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 545m  4s 

Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]

2024-03-08 Thread via GitHub


haiyang1987 commented on code in PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#discussion_r1518480694


##########
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/StripeReader.java:
##########
@@ -233,41 +235,62 @@ private ByteBufferStrategy[] getReadStrategies(StripingChunk chunk) {
 
   private int readToBuffer(BlockReader blockReader,
       DatanodeInfo currentNode, ByteBufferStrategy strategy,
-      ExtendedBlock currentBlock) throws IOException {
+      LocatedBlock currentBlock, int chunkIndex) throws IOException {
     final int targetLength = strategy.getTargetLength();
-    int length = 0;
-    try {
-      while (length < targetLength) {
-        int ret = strategy.readFromBlock(blockReader);
-        if (ret < 0) {
-          throw new IOException("Unexpected EOS from the reader");
+    int curAttempts = 0;
+    while (curAttempts < readDNMaxAttempts) {
+      curAttempts++;
+      int length = 0;
+      try {
+        while (length < targetLength) {
+          int ret = strategy.readFromBlock(blockReader);
+          if (ret < 0) {
+            throw new IOException("Unexpected EOS from the reader");
+          }
+          length += ret;
+        }
+        return length;
+      } catch (ChecksumException ce) {
+        DFSClient.LOG.warn("Found Checksum error for "
+            + currentBlock + " from " + currentNode
+            + " at " + ce.getPos());
+        //Clear buffer to make next decode success
+        strategy.getReadBuffer().clear();
+        // we want to remember which block replicas we have tried
+        corruptedBlocks.addCorruptedBlock(currentBlock.getBlock(), currentNode);
+        throw ce;
+      } catch (IOException e) {
+        //Clear buffer to make next decode success
+        strategy.getReadBuffer().clear();
+        if (curAttempts < readDNMaxAttempts) {
+          if (readerInfos[chunkIndex].reader != null) {
+            readerInfos[chunkIndex].reader.close();
+          }
+          if (dfsStripedInputStream.createBlockReader(currentBlock,
+              alignedStripe.getOffsetInBlock(), targetBlocks,
+              readerInfos, chunkIndex, readTo)) {
+            blockReader = readerInfos[chunkIndex].reader;
+            String msg = "Reconnect to " + currentNode.getInfoAddr()
+                + " for block " + currentBlock.getBlock();
+            DFSClient.LOG.warn(msg);
+            continue;
+          }
+          DFSClient.LOG.warn("Exception while reading from "

Review Comment:
   Lines 278-281 will move `if (curAttempts < readDNMaxAttempts) {` outside
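
   A minimal, self-contained sketch of one reading of this suggestion (generic names, not StripeReader's actual fields): restructure the handler so the attempt-count check appears once, giving a single retry branch and a single give-up branch.

   ```java
   // Illustrative only -- a generic retry loop with the attempt check hoisted,
   // not the PR's final StripeReader code.
   import java.io.IOException;

   public final class RetryLoopSketch {

     /** One read attempt; may fail with a transient IOException. */
     interface Attempt {
       int read() throws IOException;
     }

     static int readWithRetries(Attempt attempt, int maxAttempts) throws IOException {
       int curAttempts = 0;
       while (true) {
         curAttempts++;
         try {
           return attempt.read();
         } catch (IOException e) {
           if (curAttempts >= maxAttempts) {
             // Out of attempts: surface the failure instead of retrying.
             throw e;
           }
           // Otherwise fall through and retry; reconnect logic would go here.
         }
       }
     }
   }
   ```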



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19088) upgrade to jersey-json 1.22.0

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824888#comment-17824888
 ] 

ASF GitHub Bot commented on HADOOP-19088:
-

pjfanning commented on PR #6585:
URL: https://github.com/apache/hadoop/pull/6585#issuecomment-1986599644

   @steveloughran the most recent run only failed with the tests that failed 
due to protobuf 3.21 (and these are now fixed by a different PR). Is it ok to 
get this merged to trunk?




> upgrade to jersey-json 1.22.0
> -
>
> Key: HADOOP-19088
> URL: https://issues.apache.org/jira/browse/HADOOP-19088
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> Tidies up support for Jettison and Jackson versions used by Hadoop



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19088. Use jersey-json 1.22.0 [hadoop]

2024-03-08 Thread via GitHub


pjfanning commented on PR #6585:
URL: https://github.com/apache/hadoop/pull/6585#issuecomment-1986599644

   @steveloughran the most recent run only failed with the tests that failed 
due to protobuf 3.21 (and these are now fixed by a different PR). Is it ok to 
get this merged to trunk?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11656 RMStateStore event queue blocked [hadoop]

2024-03-08 Thread via GitHub


hadoop-yetus commented on PR #6569:
URL: https://github.com/apache/hadoop/pull/6569#issuecomment-1986567474

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  17m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 47s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   8m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   7m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 58s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  1s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  43m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   9m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   9m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 55s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6569/9/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 14 new + 95 unchanged 
- 3 fixed = 109 total (was 98)  |
   | +1 :green_heart: |  mvnsite  |   1m 52s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | -1 :x: |  spotbugs  |   1m 56s | 
[/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6569/9/artifact/out/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.html)
 |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  40m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   5m 37s |  |  hadoop-yarn-common in the patch 
passed.  |
   | -1 :x: |  unit  | 104m 43s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6569/9/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 328m 36s |  |  |
   
   
   | Reason | Tests |
   |-------:|:-------|
   | SpotBugs | module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common |
   |  |  Bad attempt to compute absolute value of signed 32-bit hashcode in 
org.apache.hadoop.yarn.event.multidispatcher.MultiDispatcherExecutor.execute(Event,
 Runnable)  At MultiDispatcherExecutor.java:value of signed 32-bit hashcode in 
org.apache.hadoop.yarn.event.multidispatcher.MultiDispatcherExecutor.execute(Event,
 Runnable)  At MultiDispatcherExecutor.java:[line 60] |
   |  |  new 
org.apache.hadoop.yarn.event.multidispatcher.MultiDispatcherExecutor(Logger, 
MultiDispatcherConfig, String) invokes 
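
   For context on the "absolute value of signed 32-bit hashcode" SpotBugs finding above: `Math.abs(hashCode())` is still negative when the hash is `Integer.MIN_VALUE`, so using it as an executor/bucket index can fail. A common illustrative remedy (not necessarily what this PR adopts) is `Math.floorMod`, which is non-negative for a positive divisor:

   ```java
   // Illustrative sketch, not the PR's code: derive a non-negative bucket index
   // from a hash code, safe even when hashCode() returns Integer.MIN_VALUE.
   final class HashBucketSketch {
     static int bucketFor(Object key, int buckets) {
       return Math.floorMod(key.hashCode(), buckets);
     }
   }
   ```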

[jira] [Commented] (HADOOP-19090) Update Protocol Buffers installation to 3.23.4

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824865#comment-17824865
 ] 

ASF GitHub Bot commented on HADOOP-19090:
-

pjfanning commented on PR #6593:
URL: https://github.com/apache/hadoop/pull/6593#issuecomment-1986438599

   > are we up to date with other dependencies for a new thirdparty release? we 
should really bump them all up and then rebuild it. this isn't targeting 3.4.0 
so not a blocker there
   
   Guava has a slightly newer release 33.0.0-jre. Is this upgrade needed though?




> Update Protocol Buffers installation to 3.23.4
> --
>
> Key: HADOOP-19090
> URL: https://issues.apache.org/jira/browse/HADOOP-19090
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We are seeing issues with Java 8 usage of protobuf-java
> See https://issues.apache.org/jira/browse/HADOOP-18197 and comments about
> java.lang.NoSuchMethodError: 
> java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19090. Use protobuf-java 3.23.4. [hadoop]

2024-03-08 Thread via GitHub


pjfanning commented on PR #6593:
URL: https://github.com/apache/hadoop/pull/6593#issuecomment-1986438599

   > are we up to date with other dependencies for a new thirdparty release? we 
should really bump them all up and then rebuild it. this isn't targeting 3.4.0 
so not a blocker there
   
   Guava has a slightly newer release 33.0.0-jre. Is this upgrade needed though?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19090) Update Protocol Buffers installation to 3.23.4

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824864#comment-17824864
 ] 

ASF GitHub Bot commented on HADOOP-19090:
-

steveloughran commented on PR #6593:
URL: https://github.com/apache/hadoop/pull/6593#issuecomment-1986430725

   are we up to date with other dependencies for a new thirdparty release? we 
should really bump them all up and then rebuild it. 
   this isn't targeting 3.4.0 so not a blocker there




> Update Protocol Buffers installation to 3.23.4
> --
>
> Key: HADOOP-19090
> URL: https://issues.apache.org/jira/browse/HADOOP-19090
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We are seeing issues with Java 8 usage of protobuf-java
> See https://issues.apache.org/jira/browse/HADOOP-18197 and comments about
> java.lang.NoSuchMethodError: 
> java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19090. Use protobuf-java 3.23.4. [hadoop]

2024-03-08 Thread via GitHub


steveloughran commented on PR #6593:
URL: https://github.com/apache/hadoop/pull/6593#issuecomment-1986430725

   are we up to date with other dependencies for a new thirdparty release? we 
should really bump them all up and then rebuild it. 
   this isn't targeting 3.4.0 so not a blocker there


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19091) Add support for Tez to MagicS3GuardCommitter

2024-03-08 Thread Venkatasubrahmanian Narayanan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824851#comment-17824851
 ] 

Venkatasubrahmanian Narayanan commented on HADOOP-19091:


[~srahman] Yes, by setting fs.s3a.committer.uuid and having the magic committer 
pick that up, I was able to run my Hive test case without needing to modify 
Hadoop (3.3.3). However, I still intend to put up a Hadoop PR with my MRv1 
wrapper of MagicS3GuardCommitter (it's implemented similarly to the MRv1 
FileOutputCommitter where it just delegates the calls to the existing MRv2 
version), and I will need to make one minor change to the existing MRv2 
MagicS3GuardCommitter - I need to add a constructor that takes a JobContext 
since MRv1 types require it.
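
A rough sketch of what such a delegating MRv1 wrapper could look like (illustrative only, not the actual patch; in particular, constructing the MRv2 MagicS3GuardCommitter from a JobContext is left as an abstract placeholder because that is exactly the constructor described above as still missing):

```java
// Illustrative sketch only -- an old-API (mapred) committer that delegates to
// the new-API (mapreduce) committer, in the style of the MRv1
// FileOutputCommitter. createMagicCommitter() is a placeholder for the
// JobContext-based construction discussed above.
import java.io.IOException;
import org.apache.hadoop.mapred.JobContext;
import org.apache.hadoop.mapred.OutputCommitter;
import org.apache.hadoop.mapred.TaskAttemptContext;

public abstract class MagicCommitterMRv1WrapperSketch extends OutputCommitter {

  private org.apache.hadoop.mapreduce.OutputCommitter delegate;

  /** Placeholder: build the MRv2 MagicS3GuardCommitter for this job. */
  protected abstract org.apache.hadoop.mapreduce.OutputCommitter createMagicCommitter(
      org.apache.hadoop.mapreduce.JobContext context) throws IOException;

  private org.apache.hadoop.mapreduce.OutputCommitter getDelegate(
      org.apache.hadoop.mapreduce.JobContext context) throws IOException {
    if (delegate == null) {
      delegate = createMagicCommitter(context);
    }
    return delegate;
  }

  @Override
  public void setupJob(JobContext context) throws IOException {
    getDelegate(context).setupJob(context);
  }

  @Override
  public void setupTask(TaskAttemptContext context) throws IOException {
    getDelegate(context).setupTask(context);
  }

  @Override
  public boolean needsTaskCommit(TaskAttemptContext context) throws IOException {
    return getDelegate(context).needsTaskCommit(context);
  }

  @Override
  public void commitTask(TaskAttemptContext context) throws IOException {
    getDelegate(context).commitTask(context);
  }

  @Override
  public void abortTask(TaskAttemptContext context) throws IOException {
    getDelegate(context).abortTask(context);
  }
}
```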

> Add support for Tez to MagicS3GuardCommitter
> 
>
> Key: HADOOP-19091
> URL: https://issues.apache.org/jira/browse/HADOOP-19091
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.6
> Environment: Pig 17/Hive 3.1.3 with Hadoop 3.3.3 on AWS EMR 6-12.0
>Reporter: Venkatasubrahmanian Narayanan
>Assignee: Venkatasubrahmanian Narayanan
>Priority: Major
> Attachments: 0001-AWS-Hive-Changes.patch, 
> 0002-HIVE-27698-Backport-of-HIVE-22398-Remove-legacy-code.patch, 
> HADOOP-19091-HIVE-WIP.patch
>
>
> The MagicS3GuardCommitter assumes that the JobID of the task is the same as 
> that of the job's application master when writing/reading the .pendingset 
> file. This assumption is not valid when running with Tez, which creates 
> slightly different JobIDs for tasks and the application master.
>  
> While the MagicS3GuardCommitter is intended only for MRv2, it mostly works 
> fine with an MRv1 wrapper with Hive/Pig (with some minor changes to Hive) run 
> in MR mode. This issue only crops up when running queries with the Tez 
> execution engine. I can upload a patch to Hive 3.1 to reproduce this error on 
> EMR if needed.
>  
> Fixing this will probably require work from both Tez and Hadoop, wanted to 
> start a discussion here so we can figure out how exactly we go about this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18088) Replace log4j 1.x with reload4j

2024-03-08 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-18088:
--
Fix Version/s: (was: 3.5.0)

> Replace log4j 1.x with reload4j
> ---
>
> Key: HADOOP-18088
> URL: https://issues.apache.org/jira/browse/HADOOP-18088
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.4, 3.3.3
>
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> As proposed in the dev mailing list 
> (https://lists.apache.org/thread/fdzkv80mzkf3w74z9120l0k0rc3v7kqk) let's 
> replace log4j 1 with reload4j in the maintenance releases (i.e. 3.3.x, 3.2.x 
> and 2.10.x)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18088) Replace log4j 1.x with reload4j

2024-03-08 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-18088:
--
Fix Version/s: (was: 3.4.1)

> Replace log4j 1.x with reload4j
> ---
>
> Key: HADOOP-18088
> URL: https://issues.apache.org/jira/browse/HADOOP-18088
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 2.10.2, 3.2.4, 3.3.3, 3.5.0
>
>  Time Spent: 8h
>  Remaining Estimate: 0h
>
> As proposed in the dev mailing list 
> (https://lists.apache.org/thread/fdzkv80mzkf3w74z9120l0k0rc3v7kqk) let's 
> replace log4j 1 with reload4j in the maintenance releases (i.e. 3.3.x, 3.2.x 
> and 2.10.x)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19105) S3A: Recover from Vector IO read failures

2024-03-08 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-19105:
---

 Summary: S3A: Recover from Vector IO read failures
 Key: HADOOP-19105
 URL: https://issues.apache.org/jira/browse/HADOOP-19105
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.6, 3.4.0
 Environment: s3a vector IO doesn't try to recover from read failures 
the way read() does.

Need to
* abort the HTTP stream if considered necessary
* retry the active read which failed
* but not those which had already succeeded
(a sketch of this retry pattern follows below)
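
A minimal, generic sketch of that retry pattern (illustrative only; these names are not from the S3A code): each range is tracked independently, so only ranges whose read actually failed get retried, and ranges that already completed are left untouched.

```java
// Illustrative sketch, not S3A code: retry only the ranges whose read failed;
// never re-read ranges whose future has already completed.
import java.util.List;
import java.util.concurrent.CompletableFuture;

final class VectorReadRetrySketch {

  /** A requested range plus the future the caller is waiting on. */
  static final class Range {
    final long offset;
    final int length;
    final CompletableFuture<byte[]> data = new CompletableFuture<>();
    Range(long offset, int length) {
      this.offset = offset;
      this.length = length;
    }
  }

  /** Hypothetical single-range reader; may throw on transient failures. */
  interface RangeReader {
    byte[] read(long offset, int length) throws Exception;
  }

  static void readAll(List<Range> ranges, RangeReader reader, int maxAttempts) {
    for (Range r : ranges) {
      if (r.data.isDone()) {
        continue;        // already succeeded: do not retry or overwrite
      }
      Exception last = null;
      for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          r.data.complete(reader.read(r.offset, r.length));
          last = null;
          break;
        } catch (Exception e) {
          last = e;      // aborting/reopening the HTTP stream would happen here
        }
      }
      if (last != null) {
        r.data.completeExceptionally(last);
      }
    }
  }
}
```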


Reporter: Steve Loughran






--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19090) Update Protocol Buffers installation to 3.23.4

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824792#comment-17824792
 ] 

ASF GitHub Bot commented on HADOOP-19090:
-

pjfanning commented on PR #6593:
URL: https://github.com/apache/hadoop/pull/6593#issuecomment-1985710340

   @steveloughran this PR depends on a snapshot version of hadoop-thirdparty. 
Can we do a release of hadoop-thirdparty and then backport the uptake of the 
future hadoop-thirdparty release to the 3.4 branch?




> Update Protocol Buffers installation to 3.23.4
> --
>
> Key: HADOOP-19090
> URL: https://issues.apache.org/jira/browse/HADOOP-19090
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We are seeing issues with Java 8 usage of protobuf-java
> See https://issues.apache.org/jira/browse/HADOOP-18197 and comments about
> java.lang.NoSuchMethodError: 
> java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19090. Use protobuf-java 3.23.4. [hadoop]

2024-03-08 Thread via GitHub


pjfanning commented on PR #6593:
URL: https://github.com/apache/hadoop/pull/6593#issuecomment-1985710340

   @steveloughran this PR depends on a snapshot version of hadoop-thirdparty. 
Can we do a release of hadoop-thirdparty and then backport the uptake of the 
future hadoop-thirdparty release to the 3.4 branch?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]

2024-03-08 Thread via GitHub


hadoop-yetus commented on PR #6314:
URL: https://github.com/apache/hadoop/pull/6314#issuecomment-1985706560

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  38m 19s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 142m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/12/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6314 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 4921032baf30 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7c933fdd96daa4396d1af32f68dc1330d40f6d88 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/12/testReport/ |
   | Max. process+thread count | 590 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/12/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git 

[jira] [Commented] (HADOOP-19101) Vectored Read into off-heap buffer broken in fallback implementation

2024-03-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824782#comment-17824782
 ] 

Steve Loughran commented on HADOOP-19101:
-

* Hive Tez is also safe
* Hive LLAP is exposed as it reads into off-heap buffers

> Vectored Read into off-heap buffer broken in fallback implementation
> 
>
> Key: HADOOP-19101
> URL: https://issues.apache.org/jira/browse/HADOOP-19101
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/azure
>Affects Versions: 3.4.0, 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>
> {{VectoredReadUtils.readInDirectBuffer()}} always starts off reading at 
> position zero even when the range is at a different offset. As a result: you 
> can get incorrect information.
> The fix for this is straightforward: we pass in a FileRange and use its offset 
> as the starting position.
> However, this does mean that all shipping releases 3.3.5-3.4.0 cannot safely 
> read vectorIO into direct buffers through HDFS, ABFS or GCS. Note that we 
> have never seen this in production because the parquet and ORC libraries both 
> read into on-heap storage.
> Those libraries need to be audited to make sure that they never attempt to 
> read into off-heap DirectBuffers. This is a bit trickier than you would think 
> because an allocator is passed in. For PARQUET-2171 we will 
> * only invoke the API on streams which explicitly declare their support for 
> the API (so fallback in parquet itself)
> * not invoke when direct buffer allocation is in use.
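
A hedged sketch of the corrected fallback behaviour described above (not the actual VectoredReadUtils code): the temporary on-heap read has to start at the range's own offset before the bytes are copied into the caller's direct buffer.

```java
// Illustrative sketch only: read one range into a direct ByteBuffer via a
// positioned read that starts at the range offset (the missing piece in the
// broken fallback, which always read from position zero).
import java.io.IOException;
import java.nio.ByteBuffer;
import org.apache.hadoop.fs.FSDataInputStream;

final class DirectBufferRangeReadSketch {
  static void readRangeIntoDirectBuffer(FSDataInputStream in, long rangeOffset,
      int rangeLength, ByteBuffer directBuffer) throws IOException {
    byte[] tmp = new byte[rangeLength];
    // Positioned read: starts at rangeOffset, not at 0.
    in.readFully(rangeOffset, tmp, 0, rangeLength);
    directBuffer.put(tmp);
    directBuffer.flip();
  }
}
```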



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-19043) S3A: Regression: ITestS3AOpenCost fails on prefetch test runs

2024-03-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-19043.
-
Fix Version/s: 3.5.0
   Resolution: Fixed

> S3A: Regression: ITestS3AOpenCost fails on prefetch test runs
> -
>
> Key: HADOOP-19043
> URL: https://issues.apache.org/jira/browse/HADOOP-19043
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Getting test failures in the new ITestS3AOpenCost tests when run with 
> {{-Dprefetch}}
> Thought I'd tested this, but clearly not
> * class cast failures on asserts (fix: skip)
> * bytes read different in one test: (fix: identify and address)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19090) Update Protocol Buffers installation to 3.23.4

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824778#comment-17824778
 ] 

ASF GitHub Bot commented on HADOOP-19090:
-

steveloughran commented on PR #6593:
URL: https://github.com/apache/hadoop/pull/6593#issuecomment-1985647257

   we ready to get this into branch-3.4?




> Update Protocol Buffers installation to 3.23.4
> --
>
> Key: HADOOP-19090
> URL: https://issues.apache.org/jira/browse/HADOOP-19090
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We are seeing issues with Java 8 usage of protobuf-java
> See https://issues.apache.org/jira/browse/HADOOP-18197 and comments about
> java.lang.NoSuchMethodError: 
> java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19090. Use protobuf-java 3.23.4. [hadoop]

2024-03-08 Thread via GitHub


steveloughran commented on PR #6593:
URL: https://github.com/apache/hadoop/pull/6593#issuecomment-1985647257

   we ready to get this into branch-3.4?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19043) S3A: Regression: ITestS3AOpenCost fails on prefetch test runs

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824776#comment-17824776
 ] 

ASF GitHub Bot commented on HADOOP-19043:
-

steveloughran merged PR #6465:
URL: https://github.com/apache/hadoop/pull/6465




> S3A: Regression: ITestS3AOpenCost fails on prefetch test runs
> -
>
> Key: HADOOP-19043
> URL: https://issues.apache.org/jira/browse/HADOOP-19043
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> Getting test failures in the new ITestS3AOpenCost tests when run with 
> {{-Dprefetch}}
> Thought I'd tested this, but clearly not
> * class cast failures on asserts (fix: skip)
> * bytes read different in one test: (fix: identify and address)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19043. S3A: Regression: ITestS3AOpenCost fails on prefetch test runs [hadoop]

2024-03-08 Thread via GitHub


steveloughran merged PR #6465:
URL: https://github.com/apache/hadoop/pull/6465


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824775#comment-17824775
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

hadoop-yetus commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1985634498

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 46s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 140m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6544 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 799115379a47 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4247a5e4b9e6923b9b98922decfd5c7904a2aa28 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/3/testReport/ |
   | Max. process+thread count | 615 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]

2024-03-08 Thread via GitHub


hadoop-yetus commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1985634498

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 46s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 140m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6544 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 799115379a47 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4247a5e4b9e6923b9b98922decfd5c7904a2aa28 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/3/testReport/ |
   | Max. process+thread count | 615 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6544/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries 

[jira] [Commented] (HADOOP-19102) [ABFS]: FooterReadBufferSize should not be greater than readBufferSize

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824761#comment-17824761
 ] 

ASF GitHub Bot commented on HADOOP-19102:
-

hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1985565656

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 3 unchanged - 8 
fixed = 4 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 25s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 143m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 62b9990fbb8e 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0a1491a7b1b61778c9433abbaa891d871d6b2f78 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/2/testReport/ |
   | Max. process+thread count | 527 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/2/console |
   | versions | 

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-08 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1985565656

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 3 unchanged - 8 
fixed = 4 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 25s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 143m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 62b9990fbb8e 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0a1491a7b1b61778c9433abbaa891d871d6b2f78 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/2/testReport/ |
   | Max. process+thread count | 527 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]

2024-03-08 Thread via GitHub


Neilxzn commented on PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#issuecomment-1985490862

   This is an automated reply. Your message has been received; thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]

2024-03-08 Thread via GitHub


haiyang1987 commented on PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#issuecomment-1985490125

   Hi @Neilxzn, any progress here? Thanks.
   
   This PR is still needed; we are seeing similar problems in our environment.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19102) [ABFS]: FooterReadBufferSize should not be greater than readBufferSize

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824722#comment-17824722
 ] 

ASF GitHub Bot commented on HADOOP-19102:
-

saxenapranav commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1985460270

   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 574, Failures: 0, Errors: 1, Skipped: 79
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=54).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 581, Failures: 1, Errors: 2, Skipped: 32
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 9
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=260).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 579, Failures: 1, Errors: 1, Skipped: 267
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 584, Failures: 0, Errors: 1, Skipped: 79
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   Time taken: 46 mins 15 secs.
   azureuser@Hadoop-VM-EAST2:~/hadoop/hadoop-tools/hadoop-azure$ git log
   commit 0a1491a7b1b61778c9433abbaa891d871d6b2f78 (HEAD -> 
saxenapranav/footerBufferSizeFix, origin/saxenapranav/footerBufferSizeFix)
   Author: Pranav Saxena <>
   Date:   Fri Mar 8 01:30:38 2024 -0800
   
   set and unset executorservice; magic num for 256 KB




> [ABFS]: FooterReadBufferSize should not be greater than readBufferSize
> --
>
> Key: HADOOP-19102
> URL: https://issues.apache.org/jira/browse/HADOOP-19102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.5.0
>
>
> The method `optimisedRead` creates a buffer array of size `readBufferSize`. 
> If footerReadBufferSize is greater than readBufferSize, abfs will attempt to 
> read more data than the buffer array can hold, which causes an exception.
> Change: To avoid this, we will keep footerBufferSize = 
> min(readBufferSizeConfig, footerBufferSizeConfig)
>  
>  
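
A minimal sketch of the clamping described in the issue text above; the helper
and parameter names are hypothetical illustrations, not the actual ABFS code:

```java
/**
 * Hypothetical helper illustrating the fix: the effective footer read buffer
 * size is clamped so it never exceeds the read buffer that optimisedRead()
 * allocates, which is what previously overflowed.
 */
public final class FooterBufferSizeSketch {

  private FooterBufferSizeSketch() {
  }

  static int effectiveFooterReadBufferSize(int readBufferSize,
      int footerReadBufferSizeConfig) {
    // Taking the minimum guarantees the footer read always fits into the
    // byte[readBufferSize] array allocated by the read path.
    return Math.min(readBufferSize, footerReadBufferSizeConfig);
  }

  public static void main(String[] args) {
    // e.g. a 4 MB read buffer with an 8 MB footer buffer configured -> 4 MB.
    System.out.println(
        effectiveFooterReadBufferSize(4 * 1024 * 1024, 8 * 1024 * 1024));
  }
}
```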



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824719#comment-17824719
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

adnanhemani commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1985438934

   @steveloughran - I ran the unit tests and integration tests again after 
rebasing (ITests against `us-west-2`).
   
   No unit test failures, 2 ITest failures:
   * ITestS3ACommitterFactory.testEverything:115->testInvalidFileBinding:165 
Expected a org.apache.hadoop.fs.s3a.commit.PathCommitException to be thrown, 
but got the result: : 
FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_202403080217_0001};
 taskId=attempt_202403080217_0001_m_00_0, status=''}; 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@3cfe1d73}; 
outputPath=s3a://ahemani-s3a-testing/fork-0001/test/testEverything, 
workPath=s3a://ahemani-s3a-testing/fork-0001/test/testEverything/_temporary/1/_temporary/attempt_202403080217_0001_m_00_0,
 algorithmVersion=1, skipCleanup=false, ignoreCleanupFailures=false}
   * 
ITestS3ACannedACLs>AbstractS3ATestBase.setup:111->AbstractFSContractTestBase.setup:205->AbstractFSContractTestBase.mkdirs:363
 » AWSBadRequest
   
   I should probably figure out how to skip the ACL test in the future, so this 
should really just be one failure. Please advise if this test is something I 
should look into.
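   
   A minimal sketch of one way to do that skip with a plain JUnit 4 assumption; 
the property name is hypothetical, and the real S3A test base may already 
provide a helper for this:
   
   ```java
   import org.junit.Assume;
   import org.junit.Test;
   
   public class CannedAclSkipSketch {
   
     // Hypothetical switch; in practice the flag would come from the S3A test
     // configuration rather than a JVM system property.
     private static final String ACLS_ENABLED_PROP = "test.s3a.acls.enabled";
   
     @Test
     public void testCannedAcls() {
       // Skip (rather than fail) when the test bucket does not permit ACLs.
       Assume.assumeTrue("ACL support not enabled for this test bucket",
           Boolean.getBoolean(ACLS_ENABLED_PROP));
       // ... the actual canned-ACL assertions would go here ...
     }
   }
   ```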
   
   Code is pushed to this PR and ready for another review - thanks!




> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824693#comment-17824693
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

adnanhemani commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1985378218

   Hi Steve, thanks for reviewing this as you finished up the Hadoop release 
work; I really appreciate you taking the time to look at it. I'm re-running 
the integration and unit tests after making the changes you suggested above, 
and I will ping this thread when the code is pushed and ready for re-review. 
Responding to your earlier questions:
   
   > I'm curious about whether it is possible to test this with an ITest 
everywhere -and what happens? would it be possible to write a test for this?
   
   While it should be technically possible, I'm not sure there's an easy (or 
acceptable) way to do so. To test this functionality with an ITest, we would 
need to set up an Access Grants instance, locations, and the access grants 
themselves before running the client-side code we are integrating. Creating 
and tearing down those resources would take a good deal of effort, in my 
opinion, and they are not free cost-wise either. Going back to the note we've 
made explicitly in the documentation: if there is a problem with 
functionality, it would be a general problem with the S3 Access Grants plugin 
and not with S3A, as our unit test covers all the code used to enable the 
plugin. I believe the ITests on the S3 Access Grants plugin itself should 
therefore provide sufficient coverage.
   
   > now, how does this integrate with the usual auth mechanism? the standard 
credentials passed in are used to auth with (something? what?) to get the 
session credentials.
   
   Good question - when using S3 Access Grants, the flow is to use your 
standard credentials to authenticate yourself to the S3 Access Grants server, 
which then hands you back a new set of credentials once you are authorized. 
That set of credentials has access to the S3 objects you are looking to 
access. The access grants on the S3 Access Grants server are set by your 
account/organization's data admin, who determines what sets of permissions 
each user should be able to receive. (I'm not sure if I'm answering the right 
question here - please feel free to elaborate if I didn't answer it properly.)
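   
   For concreteness, a minimal sketch of how such a plugin is typically 
attached to an AWS SDK v2 S3 client; the builder/option names follow the 
public S3 Access Grants plugin and the region is an illustrative assumption, 
not the exact S3A integration code:
   
   ```java
   import software.amazon.awssdk.regions.Region;
   import software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsPlugin;
   import software.amazon.awssdk.services.s3.S3Client;
   
   public class AccessGrantsClientSketch {
   
     public static S3Client build() {
       // The plugin exchanges the caller's IAM credentials for credentials
       // vended by S3 Access Grants before each supported S3 request.
       S3AccessGrantsPlugin accessGrantsPlugin = S3AccessGrantsPlugin.builder()
           .enableFallback(true)   // fall back to the caller's own credentials
                                   // for operations the plugin does not cover
           .build();
       return S3Client.builder()
           .region(Region.US_WEST_2)
           .addPlugin(accessGrantsPlugin)
           .build();
     }
   }
   ```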
   
   > How are complex ops like rename() coped with?
   
   By `rename`, I'm assuming you mean the mechanism where we copy the object to 
the new name and then delete the original object? If so, CopyObject is not 
supported at this moment, but the S3 Access Grants team is working on a new 
revision of the plugin that can support it in more constrained situations, 
such as when the object source and destination fall under the same S3 Access 
Grants prefix. The situation is similar for DeleteObjects calls.
   
   I don't believe there's an S3 API specifically for `rename` - please correct 
me if I'm wrong.
   
   For any other complex or less common S3 actions the user attempts, the S3 
Access Grants plugin will not attempt credential vending and will instead 
instruct the S3 client to fall back to the user's provided authentication 
credentials.
   
   > When do things fail and how? specifically, are new errors likely to be 
raised when S3 requests are made, and if so is their translation valid?
   
   Given that, functionally, the only thing we are changing is the credentials 
the user uses to access the S3 data, there should not be any change to the set 
of errors that can be returned from S3 while accessing the data itself. 
However, an error could be thrown while the user is authorizing against the 
S3 Access Grants server. In that case, I believe the only exception the S3 
Access Grants plugin is allowed to throw is a `SdkServiceException`. Per my 
understanding, this is simply passed on to the S3 client, which is fully in 
control of how to handle the error and will likely return it to the user. I'm 
not completely sure exactly which exception the user would receive, but per my 
testing it (correctly) does not cause the user to retry the credential vending 
process. I can research which specific error this may be.




> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824664#comment-17824664
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

adnanhemani commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1517420760


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import software.amazon.awssdk.awscore.AwsClient;
+import 
software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsIdentityProvider;

Review Comment:
   Not really sure, honestly, since none of the code we are looking to test 
actually resides in that module. Or am I misunderstanding what you meant by 
"this" - did you mean the test class?





> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org




[jira] [Commented] (HADOOP-19102) [ABFS]: FooterReadBufferSize should not be greater than readBufferSize

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824660#comment-17824660
 ] 

ASF GitHub Bot commented on HADOOP-19102:
-

saxenapranav commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1985271965

    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=59).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemE2E.testHttpReadTimeout »  Unexpected 
exception, expec...
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 585, Failures: 1, Errors: 3, Skipped: 71
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testSkipBounds:218->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=20).
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=40).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 580, Failures: 2, Errors: 1, Skipped: 26
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 9
   [INFO] Results:
   [INFO]
   [ERROR] Failures:
   [ERROR]   
ITestAzureBlobFileSystemRandomRead.testValidateSeekBounds:269->Assert.assertTrue:42->Assert.fail:89
 There should not be any network I/O (elapsedTimeMs=339).
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:329 » TestTimedOut 
test timed o...
   [INFO]
   [ERROR] Tests run: 543, Failures: 1, Errors: 1, Skipped: 266
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 340, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 2
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAzureBlobFileSystemLease.testAcquireRetry:336 » TestTimedOut 
test timed o...
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendWithInfiniteLeaseEnabled:186->twoWriters:154
 » TestTimedOut
   [INFO]
   [ERROR] Tests run: 559, Failures: 0, Errors: 2, Skipped: 71
   [INFO] Results:
   [INFO]
   [ERROR] Errors:
   [ERROR]   ITestAbfsTerasort.test_120_terasort:262->executeStage:206 » IO The 
ownership o...
   [INFO]
   [ERROR] Tests run: 340, Failures: 0, Errors: 1, Skipped: 55
   
   Time taken: 43 mins 38 secs.




> [ABFS]: FooterReadBufferSize should not be greater than readBufferSize
> --
>
> Key: HADOOP-19102
> URL: https://issues.apache.org/jira/browse/HADOOP-19102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.5.0
>
>
> The method `optimisedRead` creates a buffer array of size `readBufferSize`. 
> If footerReadBufferSize is greater than readBufferSize, abfs will attempt to 
> read more data than the buffer array can hold, which causes an exception.
> Change: To avoid this, we will keep footerBufferSize = 
> min(readBufferSizeConfig, footerBufferSizeConfig)
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org





Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-08 Thread via GitHub


hadoop-yetus commented on PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#issuecomment-1985254282

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 5 new + 6 unchanged - 5 
fixed = 11 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 24s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 140m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6617 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f55f9b337083 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / dbca78b72f6f24c3db5ac5553bcab229d86439db |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/1/testReport/ |
   | Max. process+thread count | 609 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6617/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-03-08 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824650#comment-17824650
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

adnanhemani commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1517361600


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import software.amazon.awssdk.awscore.AwsClient;
+import 
software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsIdentityProvider;

Review Comment:
   I'm not too sure, given that none of the code we are testing belongs to the 
`fs.s3a.auth` module. Or did I misunderstand - by "this", were you not 
referring to the test class itself?





> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


