[GitHub] [hadoop] susheel-gupta commented on a diff in pull request #5278: YARN-11408. Add a check of autoQueueCreation is disabled for emitDefaultUserLimitFactor method

2023-01-19 Thread GitBox


susheel-gupta commented on code in PR #5278:
URL: https://github.com/apache/hadoop/pull/5278#discussion_r1071075406


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/fair/converter/TestFSConfigToCSConfigConverter.java:
##
@@ -194,6 +199,10 @@ public void testDefaultUserLimitFactor() throws Exception {
 
 assertNull("root.users user-limit-factor should be null",
 conf.get(PREFIX + "root.users." + USER_LIMIT_FACTOR));
+assertEquals("root.users auto-queue-creation-v2.enabled", "true",
+conf.get(PREFIX + "root.users.auto-queue-creation-v2.enabled"));
+assertNull( "root.users auto-create-child-queue.enabled should be null",
+conf.get(PREFIX + "root.users.auto-create-child-queue.enabled"));

Review Comment:
   In class TestFSQueueConverter, there is a method 
testQueueWithNoAutoCreateChildQueue which asserts that 
.auto-create-child-queue.enabled is null, but according to the above comment I 
need to add a property setting auto-create-child-queue.enabled to true.
   So do I need to add another set of queues where 
auto-create-child-queue.enabled is true? (See the sketch after the quoted test 
below.)
   ```
   testQueueWithNoAutoCreateChildQueue() {
     converter = builder
         .withCapacitySchedulerConfig(csConfig)
         .build();

     converter.convertQueueHierarchy(rootQueue);

     assertNoValueForQueues(ALL_QUEUES, ".auto-create-child-queue.enabled",
         csConfig);
   }
   ```
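
   For illustration, here is a minimal sketch (not repository code) of the kind 
of second test that could cover the enabled case. It assumes a hypothetical 
fixture `rootQueueWithAutoCreate` whose queues keep FS-style auto-create 
enabled, and it reuses `PREFIX` (the capacity-scheduler property prefix) and 
the example queue path `root.users` from the diff above:
   ```
   public void testQueueWithAutoCreateChildQueueEnabled() {
     converter = builder
         .withCapacitySchedulerConfig(csConfig)
         .build();

     // Hypothetical hierarchy in which dynamic child-queue creation stays on.
     converter.convertQueueHierarchy(rootQueueWithAutoCreate);

     // The legacy flag should then be emitted as "true" for the parent queue.
     assertEquals("root.users auto-create-child-queue.enabled", "true",
         csConfig.get(PREFIX + "root.users.auto-create-child-queue.enabled"));
   }
   ```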






[jira] [Commented] (HADOOP-18598) maven site generation doesn't include javadocs

2023-01-19 Thread Masatake Iwasaki (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17679013#comment-17679013
 ] 

Masatake Iwasaki commented on HADOOP-18598:
---

[~ste...@apache.org] [Downgrading maven-site-plugin to 
3.9.1|https://github.com/apache/hadoop/pull/5319] (assuming 
maven-javadoc-plugin-3.0.1) could be a quick fix.

maven-site-plugin-3.11.0 assumes a recent maven-javadoc-plugin (3.3.2):
https://github.com/apache/maven-site-plugin/blob/maven-site-plugin-3.11.0/pom.xml#L210

Bumping the version of maven-javadoc-plugin to 3.3.2 in Hadoop breaks the build 
due to newly surfaced javadoc warnings.


> maven site generation doesn't include javadocs
> --
>
> Key: HADOOP-18598
> URL: https://issues.apache.org/jira/browse/HADOOP-18598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: site
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
>
> the rc0 excluded all the site docs. running mvn site on trunk throws up site 
> plugin issues, which may be related, so start by updating that.
> rc validation scripts to include checks for the api/index.html






[jira] [Updated] (HADOOP-18598) maven site generation doesn't include javadocs

2023-01-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18598:

Labels: pull-request-available  (was: )

> maven site generation doesn't include javadocs
> --
>
> Key: HADOOP-18598
> URL: https://issues.apache.org/jira/browse/HADOOP-18598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: site
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
>
> the rc0 excluded all the site docs. running mvn site on trunk throws up site 
> plugin issues, which may be related, so start by updating that.
> rc validation scripts to include checks for the api/index.html






[jira] [Commented] (HADOOP-18598) maven site generation doesn't include javadocs

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17679012#comment-17679012
 ] 

ASF GitHub Bot commented on HADOOP-18598:
-

iwasakims opened a new pull request, #5319:
URL: https://github.com/apache/hadoop/pull/5319

   https://issues.apache.org/jira/browse/HADOOP-18598
   
   maven-site-plugin-3.11.0 is incompatible with maven-javadoc-plugin-3.0.1. 
Downgrading maven-site-plugin to 3.9.1 fixes missing javadocs in aggregated 
site documentation.
   




> maven site generation doesn't include javadocs
> --
>
> Key: HADOOP-18598
> URL: https://issues.apache.org/jira/browse/HADOOP-18598
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: site
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>
> the rc0 excluded all the site docs. running mvn site on trunk throws up site 
> plugin issues, which may be related, so start by updating that.
> rc validation scripts to include checks for the api/index.html






[GitHub] [hadoop] iwasakims opened a new pull request, #5319: HADOOP-18598. maven site generation doesn't include javadocs.

2023-01-19 Thread GitBox


iwasakims opened a new pull request, #5319:
URL: https://github.com/apache/hadoop/pull/5319

   https://issues.apache.org/jira/browse/HADOOP-18598
   
   maven-site-plugin-3.11.0 is incompatible with maven-javadoc-plugin-3.0.1. 
Downgrading maven-site-plugin to 3.9.1 fixes missing javadocs in aggregated 
site documentation.
   





[GitHub] [hadoop] hadoop-yetus commented on pull request #5302: YARN-11221. [Federation] Add replaceLabelsOnNodes, replaceLabelsOnNode REST APIs for Router.

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5302:
URL: https://github.com/apache/hadoop/pull/5302#issuecomment-1397977798

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  33m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   3m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 53s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5302/9/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 53s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  25m 52s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   3m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 40s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5302/9/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 106m 35s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 38s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 243m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5302/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5302 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 25b2b5ad37e0 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5284: YARN-11218. [Federation] Add getActivities, getBulkActivities REST APIs for Router.

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5284:
URL: https://github.com/apache/hadoop/pull/5284#issuecomment-1397969145

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 30s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   3m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 55s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5284/9/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 56s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  23m 23s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   3m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   3m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 40s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5284/9/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 51s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  98m 25s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 33s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 227m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5284/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5284 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f06e90bcad82 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 

[GitHub] [hadoop] susheel-gupta commented on a diff in pull request #5295: YARN-11404. Add junit5 dependency to hadoop-mapreduce-client-app to fix few unit test failure

2023-01-19 Thread GitBox


susheel-gupta commented on code in PR #5295:
URL: https://github.com/apache/hadoop/pull/5295#discussion_r1082128599


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java:
##
@@ -257,12 +260,6 @@ public void testGetMapCompletionEvents() throws 
IOException {
 createTce(3, false, TaskAttemptCompletionEventStatus.FAILED) };
 TaskAttemptCompletionEvent[] mapEvents = { taskEvents[0], taskEvents[2] };
 Job mockJob = mock(Job.class);
-when(mockJob.getTaskAttemptCompletionEvents(0, 100))
-  .thenReturn(taskEvents);
-when(mockJob.getTaskAttemptCompletionEvents(0, 2))
-  .thenReturn(Arrays.copyOfRange(taskEvents, 0, 2));
-when(mockJob.getTaskAttemptCompletionEvents(2, 100))
-  .thenReturn(Arrays.copyOfRange(taskEvents, 2, 4));

Review Comment:
   @szilard-nemeth Yes, this will work, as there is no assertion for the method 
getTaskAttemptCompletionEvents.
   If assertions for this method turn out to be required, then for now I can 
use lenient strictness and later, in another Jira, add a few assertions for it 
(see the sketch below).
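
   For reference, a minimal sketch (not part of this patch) of how the removed 
stubbings could be kept under Mockito's strict-stubs settings by marking them 
lenient; `taskEvents` and the `Job` mock come from the test above, everything 
else is standard Mockito:
   ```
   import static org.mockito.Mockito.lenient;
   import static org.mockito.Mockito.mock;

   import java.util.Arrays;

   // Lenient stubs are not reported as unnecessary by strict-stubs mode,
   // so they can stay even though no assertion consumes them yet.
   Job mockJob = mock(Job.class);
   lenient().when(mockJob.getTaskAttemptCompletionEvents(0, 100))
       .thenReturn(taskEvents);
   lenient().when(mockJob.getTaskAttemptCompletionEvents(0, 2))
       .thenReturn(Arrays.copyOfRange(taskEvents, 0, 2));
   lenient().when(mockJob.getTaskAttemptCompletionEvents(2, 100))
       .thenReturn(Arrays.copyOfRange(taskEvents, 2, 4));
   ```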






[GitHub] [hadoop] hadoop-yetus commented on pull request #5272: YARN-11217. [Federation] Add dumpSchedulerLogs REST APIs for Router.

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5272:
URL: https://github.com/apache/hadoop/pull/5272#issuecomment-1397907701

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  26m 10s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 28s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 109m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5272/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5272 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2f96eb96627d 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a618f27c4759ca92c3c51d50917b60acdf1a9b0c |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5272/11/testReport/ |
   | Max. process+thread count | 585 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5272/11/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] slfan1989 commented on pull request #5260: YARN-8900. [Follow Up] Fix FederationInterceptorREST#invokeConcurrent Inaccurate Order of Subclusters.

2023-01-19 Thread GitBox


slfan1989 commented on PR #5260:
URL: https://github.com/apache/hadoop/pull/5260#issuecomment-1397817449

   @goiri Thank you very much for helping to review the code!





[GitHub] [hadoop] goiri merged pull request #5260: YARN-8900. [Follow Up] Fix FederationInterceptorREST#invokeConcurrent Inaccurate Order of Subclusters.

2023-01-19 Thread GitBox


goiri merged PR #5260:
URL: https://github.com/apache/hadoop/pull/5260





[GitHub] [hadoop] slfan1989 commented on pull request #5260: YARN-8900. [Follow Up] Fix FederationInterceptorREST#invokeConcurrent Inaccurate Order of Subclusters.

2023-01-19 Thread GitBox


slfan1989 commented on PR #5260:
URL: https://github.com/apache/hadoop/pull/5260#issuecomment-1397727689

   @goiri Can you help merge this PR into the trunk branch? Thank you very much!





[jira] [Commented] (HADOOP-18206) Cleanup the commons-logging references in the code base

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678878#comment-17678878
 ] 

ASF GitHub Bot commented on HADOOP-18206:
-

virajjasani commented on code in PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#discussion_r1081716267


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java:
##
@@ -17,61 +17,40 @@
  */
 package org.apache.hadoop.util;
 
-import org.apache.commons.logging.Log;
 import org.slf4j.Logger;
 
 class LogAdapter {

Review Comment:
   Done, removed all usages. Maybe we can remove the class itself in another 
sub-task.





> Cleanup the commons-logging references in the code base
> ---
>
> Key: HADOOP-18206
> URL: https://issues.apache.org/jira/browse/HADOOP-18206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Should always use slf4j






[jira] [Commented] (HADOOP-18206) Cleanup the commons-logging references in the code base

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678877#comment-17678877
 ] 

ASF GitHub Bot commented on HADOOP-18206:
-

virajjasani commented on code in PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#discussion_r1081715049


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java:
##
@@ -340,22 +337,14 @@ public void doGet(HttpServletRequest request, 
HttpServletResponse response
 out.println(MARKER
 + "Submitted Class Name: " + logName + "");
 
-Log log = LogFactory.getLog(logName);
+Logger log = Logger.getLogger(logName);
 out.println(MARKER
 + "Log Class: " + log.getClass().getName() +"");
 if (level != null) {
   out.println(MARKER + "Submitted Level: " + level + "");
 }
 
-if (log instanceof Log4JLogger) {
-  process(((Log4JLogger)log).getLogger(), level, out);
-}
-else if (log instanceof Jdk14Logger) {
-  process(((Jdk14Logger)log).getLogger(), level, out);
-}
-else {
-  out.println("Sorry, " + log.getClass() + " not supported.");
-}
+process(log, level, out);

Review Comment:
   Since we are directly instantiating an `org.apache.log4j.Logger` instance, I 
believe we don't need to check for an unknown logger implementation.
   
   If we can't set an appropriate log level on the logger, we would anyway 
print this in `process()` as part of the servlet output:
   ```
   out.println(MARKER + "Bad Level : " + level + "")
   ```
   
   Hence, I think we should be good here.
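
   For context, a standalone sketch (not the actual `process()` implementation) 
of the log4j 1.x level-setting pattern being discussed; the method name 
`applyLevel` is made up, the log4j calls themselves are real:
   ```
   import org.apache.log4j.Level;
   import org.apache.log4j.Logger;

   static String applyLevel(String logName, String levelName) {
     // Resolve the logger directly; no instanceof checks are needed.
     Logger log = Logger.getLogger(logName);
     // The one-arg Level.toLevel() silently falls back to DEBUG, so pass a
     // null default to detect a bad level name explicitly.
     Level level = Level.toLevel(levelName, null);
     if (level == null) {
       return "Bad Level : " + levelName;
     }
     log.setLevel(level);
     return "Effective Level: " + log.getEffectiveLevel();
   }
   ```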





> Cleanup the commons-logging references in the code base
> ---
>
> Key: HADOOP-18206
> URL: https://issues.apache.org/jira/browse/HADOOP-18206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Should always use slf4j






[GitHub] [hadoop] virajjasani commented on a diff in pull request #5315: HADOOP-18206 Cleanup the commons-logging references and restrict its usage in future

2023-01-19 Thread GitBox


virajjasani commented on code in PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#discussion_r1081716267


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java:
##
@@ -17,61 +17,40 @@
  */
 package org.apache.hadoop.util;
 
-import org.apache.commons.logging.Log;
 import org.slf4j.Logger;
 
 class LogAdapter {

Review Comment:
   Done, removed all usages. Maybe we can remove the class itself in another 
sub-task.






[GitHub] [hadoop] virajjasani commented on a diff in pull request #5315: HADOOP-18206 Cleanup the commons-logging references and restrict its usage in future

2023-01-19 Thread GitBox


virajjasani commented on code in PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#discussion_r1081715049


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java:
##
@@ -340,22 +337,14 @@ public void doGet(HttpServletRequest request, 
HttpServletResponse response
 out.println(MARKER
 + "Submitted Class Name: " + logName + "");
 
-Log log = LogFactory.getLog(logName);
+Logger log = Logger.getLogger(logName);
 out.println(MARKER
 + "Log Class: " + log.getClass().getName() +"");
 if (level != null) {
   out.println(MARKER + "Submitted Level: " + level + "");
 }
 
-if (log instanceof Log4JLogger) {
-  process(((Log4JLogger)log).getLogger(), level, out);
-}
-else if (log instanceof Jdk14Logger) {
-  process(((Jdk14Logger)log).getLogger(), level, out);
-}
-else {
-  out.println("Sorry, " + log.getClass() + " not supported.");
-}
+process(log, level, out);

Review Comment:
   Since we are directly instantiating an `org.apache.log4j.Logger` instance, I 
believe we don't need to check for an unknown logger implementation.
   
   If we can't set an appropriate log level on the logger, we would anyway 
print this in `process()` as part of the servlet output:
   ```
   out.println(MARKER + "Bad Level : " + level + "")
   ```
   
   Hence, I think we should be good here.






[jira] [Commented] (HADOOP-18399) SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678876#comment-17678876
 ] 

ASF GitHub Bot commented on HADOOP-18399:
-

virajjasani commented on PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1397617517

   synced -- pulled latest trunk commits to the branch

> SingleFilePerBlockCache to use LocalDirAllocator for file allocation
> 
>
> Key: HADOOP-18399
> URL: https://issues.apache.org/jira/browse/HADOOP-18399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> prefetching stream's SingleFilePerBlockCache uses Files.tempFile() to 
> allocate a temp file.
> it should be using LocalDirAllocator to allocate space from a list of dirs, 
> taking a config key to use. for s3a we will use the Constants.BUFFER_DIR 
> option, which on yarn deployments is fixed under the env.LOCAL_DIR path, so 
> automatically cleaned up on container exit






[GitHub] [hadoop] virajjasani commented on pull request #5054: HADOOP-18399 Prefetch - SingleFilePerBlockCache to use LocalDirAllocator for file allocation

2023-01-19 Thread GitBox


virajjasani commented on PR #5054:
URL: https://github.com/apache/hadoop/pull/5054#issuecomment-1397617517

   synced -- pulled latest trunk commits to the branch





[jira] [Commented] (HADOOP-18599) Expose `listStatus(Path path, String startFrom)` on `AzureBlobFileSystem`

2023-01-19 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678873#comment-17678873
 ] 

Steve Loughran commented on HADOOP-18599:
-

1. AzureBlobFileSystemStore shouldn't be public at all; in fact, we should make 
sure the @ scope annotations say so. We are free to change those methods 
whenever we feel like, without worrying about breaking anything.
2. going through the 

If it makes you feel better, the specification language we use for the FS API 
is sort of Python, or more subtly a specification language similar to Z but 
written in Python so people can read and write it: 
https://github.com/apache/hadoop/tree/trunk/hadoop-common-project/hadoop-common/src/site/markdown/filesystem
However, you do need to spend time using the API, writing tests, etc. before 
defining new bits of the API.

The problem here is one of long-term commitment. You can use the 
AzureBlobFileSystemStore if you want to, just don't be surprised if it breaks 
for no obvious reason, and know that if you complain we will try to say "you 
shouldn't use that".

But if the use case is relevant, well, it is something which a "cloud first" 
list API call could offer, as it would also benefit Amazon S3 and be usable 
more broadly by other apps.

> Expose `listStatus(Path path, String startFrom)` on `AzureBlobFileSystem`
> -
>
> Key: HADOOP-18599
> URL: https://issues.apache.org/jira/browse/HADOOP-18599
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.3.2, 3.3.4
>Reporter: Thomas Newton
>Priority: Major
>
> When working with Azure blob storage listing operations can often be quite 
> slow even on storage accounts with the hierarchical namespace. 
> This can be mitigated by listing only a specific subset of directories using 
> a function like 
> [https://hadoop.apache.org/docs/r3.3.4/api/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.html#listStatus-org.apache.hadoop.fs.Path-java.lang.String-org.apache.hadoop.fs.azurebfs.utils.TracingContext-]
> Which accepts a `startFrom` argument and lists all files in order starting 
> from there.
> I'm wondering if we could add a method to the `AzureBlobFileSystem`
> Something like:
> ```
> public FileStatus[] listStatus(final Path f, final String startFrom) throws 
> IOException
> ```
> This exposes the functionality that already exists on the underlying 
> `AzureBlobFileSystemStore`. My understanding from reading a bit of the code 
> is that users should mainly be dealing with `AzureBlobFileSystem`s and 
> `AzureBlobFileSystem` seem easier to use to me hence the benefit of exposing 
> it on the `AzureBlobFileSystem`.
>  
> I'm very un-familiar with java but I'm told that keeping strictly to 
> interfaces is strongly preferred. However I can see some examples already on 
> `AzureBlobFileSystem` that do not belong to any interface (e.g. `breakLease`) 
> so I'm hoping its acceptable to add a method like I described only for the 
> one `FileSystem` implementation.
>  
> The specific motivation for this is to unblock 
> [https://github.com/delta-io/delta/issues/1568]
> I would be willing to contribute this if maintainers think the plan is 
> reasonable. 






[jira] [Assigned] (HADOOP-18208) Remove all the log4j reference in modules other than hadoop-logging

2023-01-19 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-18208:
-

Assignee: Viraj Jasani

> Remove all the log4j reference in modules other than hadoop-logging
> ---
>
> Key: HADOOP-18208
> URL: https://issues.apache.org/jira/browse/HADOOP-18208
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Viraj Jasani
>Priority: Major
>







[jira] [Assigned] (HADOOP-18207) Introduce a hadoop-logging module

2023-01-19 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-18207:
-

Assignee: Viraj Jasani

> Introduce a hadoop-logging module
> -
>
> Key: HADOOP-18207
> URL: https://issues.apache.org/jira/browse/HADOOP-18207
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Viraj Jasani
>Priority: Major
>
> There are several goals here:
> 1. Provide the ability to change log level, get log level, etc.
> 2. Place all the appender implementation(?)
> 3. Hide the real logging implementation.
> 4. Later we could remove all the log4j references in other hadoop module.






[jira] [Commented] (HADOOP-18206) Cleanup the commons-logging references in the code base

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678858#comment-17678858
 ] 

ASF GitHub Bot commented on HADOOP-18206:
-

virajjasani commented on PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#issuecomment-1397489869

   Javadoc warnings for the common module and spotbugs warnings for the 
mapreduce modules are not relevant to this change.




> Cleanup the commons-logging references in the code base
> ---
>
> Key: HADOOP-18206
> URL: https://issues.apache.org/jira/browse/HADOOP-18206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Should always use slf4j






[GitHub] [hadoop] virajjasani commented on pull request #5315: HADOOP-18206 Cleanup the commons-logging references and restrict its usage in future

2023-01-19 Thread GitBox


virajjasani commented on PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#issuecomment-1397489869

   Javadoc warnings for the common module and spotbugs warnings for the 
mapreduce modules are not relevant to this change.





[jira] [Commented] (HADOOP-18206) Cleanup the commons-logging references in the code base

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678856#comment-17678856
 ] 

ASF GitHub Bot commented on HADOOP-18206:
-

virajjasani commented on code in PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#discussion_r1081718137


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java:
##
@@ -17,61 +17,40 @@
  */
 package org.apache.hadoop.util;
 
-import org.apache.commons.logging.Log;
 import org.slf4j.Logger;
 
 class LogAdapter {
-  private Log LOG;
-  private Logger LOGGER;
 
-  private LogAdapter(Log LOG) {
-this.LOG = LOG;
-  }
+  private final Logger LOGGER;

Review Comment:
   Done





> Cleanup the commons-logging references in the code base
> ---
>
> Key: HADOOP-18206
> URL: https://issues.apache.org/jira/browse/HADOOP-18206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Should always use slf4j






[jira] [Commented] (HADOOP-18206) Cleanup the commons-logging references in the code base

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678857#comment-17678857
 ] 

ASF GitHub Bot commented on HADOOP-18206:
-

virajjasani commented on code in PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#discussion_r1081718563


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:
##
@@ -318,9 +316,9 @@ public class DataNode extends ReconfigurableBase
 ", srvID: %s" +  // DatanodeRegistration
 ", blockid: %s" + // block id
 ", duration(ns): %s";  // duration time
-
-  static final Log ClientTraceLog =
-LogFactory.getLog(DataNode.class.getName() + ".clienttrace");
+
+  static final Logger ClientTraceLog =

Review Comment:
   Yes, taken care of all checkstyles so far.





> Cleanup the commons-logging references in the code base
> ---
>
> Key: HADOOP-18206
> URL: https://issues.apache.org/jira/browse/HADOOP-18206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Should always use slf4j






[GitHub] [hadoop] virajjasani commented on a diff in pull request #5315: HADOOP-18206 Cleanup the commons-logging references and restrict its usage in future

2023-01-19 Thread GitBox


virajjasani commented on code in PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#discussion_r1081718563


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:
##
@@ -318,9 +316,9 @@ public class DataNode extends ReconfigurableBase
 ", srvID: %s" +  // DatanodeRegistration
 ", blockid: %s" + // block id
 ", duration(ns): %s";  // duration time
-
-  static final Log ClientTraceLog =
-LogFactory.getLog(DataNode.class.getName() + ".clienttrace");
+
+  static final Logger ClientTraceLog =

Review Comment:
   Yes, taken care of all checkstyles so far.






[GitHub] [hadoop] virajjasani commented on a diff in pull request #5315: HADOOP-18206 Cleanup the commons-logging references and restrict its usage in future

2023-01-19 Thread GitBox


virajjasani commented on code in PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#discussion_r1081718137


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java:
##
@@ -17,61 +17,40 @@
  */
 package org.apache.hadoop.util;
 
-import org.apache.commons.logging.Log;
 import org.slf4j.Logger;
 
 class LogAdapter {
-  private Log LOG;
-  private Logger LOGGER;
 
-  private LogAdapter(Log LOG) {
-this.LOG = LOG;
-  }
+  private final Logger LOGGER;

Review Comment:
   Done






[jira] [Commented] (HADOOP-18206) Cleanup the commons-logging references in the code base

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678855#comment-17678855
 ] 

ASF GitHub Bot commented on HADOOP-18206:
-

virajjasani commented on code in PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#discussion_r1081715049


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java:
##
@@ -340,22 +337,14 @@ public void doGet(HttpServletRequest request, 
HttpServletResponse response
 out.println(MARKER
 + "Submitted Class Name: " + logName + "");
 
-Log log = LogFactory.getLog(logName);
+Logger log = Logger.getLogger(logName);
 out.println(MARKER
 + "Log Class: " + log.getClass().getName() +"");
 if (level != null) {
   out.println(MARKER + "Submitted Level: " + level + "");
 }
 
-if (log instanceof Log4JLogger) {
-  process(((Log4JLogger)log).getLogger(), level, out);
-}
-else if (log instanceof Jdk14Logger) {
-  process(((Jdk14Logger)log).getLogger(), level, out);
-}
-else {
-  out.println("Sorry, " + log.getClass() + " not supported.");
-}
+process(log, level, out);

Review Comment:
   Since we are directly instantiating an `org.apache.log4j.Logger` instance, I 
believe we don't need to check for an unknown logger implementation.
   
   If we can't set an appropriate log level on the logger, we would anyway 
print this in `process()`:
   ```
   out.println(MARKER + "Bad Level : " + level + "")
   ```
   
   Hence, I think we should be good here.



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java:
##
@@ -256,7 +255,7 @@ public static void skipFully(InputStream in, long len) 
throws IOException {
* instead
*/
   @Deprecated
-  public static void cleanup(Log log, java.io.Closeable... closeables) {
+  public static void cleanup(Logger log, java.io.Closeable... closeables) {

Review Comment:
   Updated it to use `cleanupWithLogger(log, closeables)` directly. Since the 
class is IA.Public, maybe we could wait a little longer before removing it 
completely?
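
   For reference, a minimal usage sketch of the slf4j-based replacement 
mentioned here (standard Hadoop `IOUtils` and slf4j APIs; the logger name and 
the stream path are made up):
   ```
   import java.io.FileInputStream;
   import org.apache.hadoop.io.IOUtils;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   Logger log = LoggerFactory.getLogger("example");
   FileInputStream in = new FileInputStream("/tmp/example");
   try {
     // ... read from the stream ...
   } finally {
     // Closes the stream, logging (rather than throwing) any failure.
     IOUtils.cleanupWithLogger(log, in);
   }
   ```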



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java:
##
@@ -17,61 +17,40 @@
  */
 package org.apache.hadoop.util;
 
-import org.apache.commons.logging.Log;
 import org.slf4j.Logger;
 
 class LogAdapter {

Review Comment:
   Done, removed all usages. Maybe we can remove it in another sub-task.





> Cleanup the commons-logging references in the code base
> ---
>
> Key: HADOOP-18206
> URL: https://issues.apache.org/jira/browse/HADOOP-18206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Should always use slf4j






[GitHub] [hadoop] virajjasani commented on a diff in pull request #5315: HADOOP-18206 Cleanup the commons-logging references and restrict its usage in future

2023-01-19 Thread GitBox


virajjasani commented on code in PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#discussion_r1081715049


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java:
##
@@ -340,22 +337,14 @@ public void doGet(HttpServletRequest request, 
HttpServletResponse response
 out.println(MARKER
 + "Submitted Class Name: " + logName + "");
 
-Log log = LogFactory.getLog(logName);
+Logger log = Logger.getLogger(logName);
 out.println(MARKER
 + "Log Class: " + log.getClass().getName() +"");
 if (level != null) {
   out.println(MARKER + "Submitted Level: " + level + "");
 }
 
-if (log instanceof Log4JLogger) {
-  process(((Log4JLogger)log).getLogger(), level, out);
-}
-else if (log instanceof Jdk14Logger) {
-  process(((Jdk14Logger)log).getLogger(), level, out);
-}
-else {
-  out.println("Sorry, " + log.getClass() + " not supported.");
-}
+process(log, level, out);

Review Comment:
   Since we are directly instantiating an `org.apache.log4j.Logger` instance, I 
believe we don't need to check for an unknown logger implementation.
   
   If we can't set an appropriate log level on the logger, we would anyway 
print this in `process()`:
   ```
   out.println(MARKER + "Bad Level : " + level + "")
   ```
   
   Hence, I think we should be good here.



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java:
##
@@ -256,7 +255,7 @@ public static void skipFully(InputStream in, long len) 
throws IOException {
* instead
*/
   @Deprecated
-  public static void cleanup(Log log, java.io.Closeable... closeables) {
+  public static void cleanup(Logger log, java.io.Closeable... closeables) {

Review Comment:
   Updated it to use `cleanupWithLogger(log, closeables)` directly. Since the 
class is IA.Public, maybe we could wait a little longer before removing it 
completely?



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java:
##
@@ -17,61 +17,40 @@
  */
 package org.apache.hadoop.util;
 
-import org.apache.commons.logging.Log;
 import org.slf4j.Logger;
 
 class LogAdapter {

Review Comment:
   Done, removed all usages. Maybe we can remove it in another sub-task.






[jira] [Commented] (HADOOP-18599) Expose `listStatus(Path path, String startFrom)` on `AzureBlobFileSystem`

2023-01-19 Thread Thomas Newton (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678845#comment-17678845
 ] 

Thomas Newton commented on HADOOP-18599:


Admittedly, the functionality I'm interested in is already available on the 
`AzureBlobFileSystemStore`, but as I said, I got the distinct impression that 
it is intended to always be used through a `FileSystem`. I particularly got 
that impression from code like 
[https://github.com/apache/hadoop/blob/cf7b7b961035d433b2c89f8dcd53016830d4d1a5/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/TracingContext.java#L54|https://github.com/apache/hadoop/blob/72b760130aee907de12db09d1123880b9935523f/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/TracingContext.java#L54]

 

> Expose `listStatus(Path path, String startFrom)` on `AzureBlobFileSystem`
> -
>
> Key: HADOOP-18599
> URL: https://issues.apache.org/jira/browse/HADOOP-18599
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.3.2, 3.3.4
>Reporter: Thomas Newton
>Priority: Major
>
> When working with Azure blob storage listing operations can often be quite 
> slow even on storage accounts with the hierarchical namespace. 
> This can be mitigated by listing only a specific subset of directories using 
> a function like 
> [https://hadoop.apache.org/docs/r3.3.4/api/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.html#listStatus-org.apache.hadoop.fs.Path-java.lang.String-org.apache.hadoop.fs.azurebfs.utils.TracingContext-]
> Which accepts a `startFrom` argument and lists all files in order starting 
> from there.
> I'm wondering if we could add a method to the `AzureBlobFileSystem`
> Something like:
> ```
> public FileStatus[] listStatus(final Path f, final String startFrom) throws 
> IOException
> ```
> This exposes the functionality that already exists on the underlying 
> `AzureBlobFileSystemStore`. My understanding from reading a bit of the code 
> is that users should mainly be dealing with `AzureBlobFileSystem`s and 
> `AzureBlobFileSystem` seem easier to use to me hence the benefit of exposing 
> it on the `AzureBlobFileSystem`.
>  
> I'm very un-familiar with java but I'm told that keeping strictly to 
> interfaces is strongly preferred. However I can see some examples already on 
> `AzureBlobFileSystem` that do not belong to any interface (e.g. `breakLease`) 
> so I'm hoping its acceptable to add a method like I described only for the 
> one `FileSystem` implementation.
>  
> The specific motivation for this is to unblock 
> [https://github.com/delta-io/delta/issues/1568]
> I would be willing to contribute this if maintainers think the plan is 
> reasonable. 






[jira] [Commented] (HADOOP-18206) Cleanup the commons-logging references in the code base

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678832#comment-17678832
 ] 

ASF GitHub Bot commented on HADOOP-18206:
-

virajjasani commented on PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#issuecomment-1397385744

   bummer, we can't completely get rid of it as commons-configuration needs it, 
so we will have to keep the version in our classpath but we won't use it in the 
codebase.




> Cleanup the commons-logging references in the code base
> ---
>
> Key: HADOOP-18206
> URL: https://issues.apache.org/jira/browse/HADOOP-18206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Should always use slf4j



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18599) Expose `listStatus(Path path, String startFrom)` on `AzureBlobFileSystem`

2023-01-19 Thread Thomas Newton (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678831#comment-17678831
 ] 

Thomas Newton commented on HADOOP-18599:


Thanks for the response, though what you suggest is indeed quite a scary 
prospect.

Regarding `listStatusIterator()`: unfortunately it doesn't provide what 
I'm looking for. In my use-case I really only want to list about 5 files from 
directories that could contain many thousands of files. I know the name of the 
file I want to start listing from, and I want to list files in order starting 
from there.

This is probably a niche use-case, but I think it would be very valuable for 
https://github.com/delta-io/delta/issues/1568.

Personally, I don't think I can go through the full process you suggest for a 
change like this. My limit is probably an Azure implementation and a few 
unit tests (I've never used Java prior to now). I will probably have to stick 
with maintaining a custom build of `hadoop-azure` :(.
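
To make the use-case concrete, here is a rough sketch of how the *proposed* 
`listStatus(Path, String startFrom)` overload might be called. It does not 
exist in hadoop-azure today (so this will not compile against current 
releases), and the account, path and start-from file name are invented 
placeholders:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem;

public final class StartFromListingSketch {
  private StartFromListingSketch() {
  }

  /** Hypothetical caller of the proposed listStatus(Path, String startFrom). */
  public static FileStatus[] listFrom(Configuration conf) throws Exception {
    Path logDir =
        new Path("abfs://container@account.dfs.core.windows.net/table/_delta_log");
    AzureBlobFileSystem abfs = (AzureBlobFileSystem) logDir.getFileSystem(conf);
    // Proposed API: start the listing at a known file name instead of
    // enumerating the whole directory.
    return abfs.listStatus(logDir, "00000000000000000042.json");
  }
}
```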

 

 

 

> Expose `listStatus(Path path, String startFrom)` on `AzureBlobFileSystem`
> -
>
> Key: HADOOP-18599
> URL: https://issues.apache.org/jira/browse/HADOOP-18599
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.3.2, 3.3.4
>Reporter: Thomas Newton
>Priority: Major
>
> When working with Azure blob storage, listing operations can often be quite 
> slow, even on storage accounts with the hierarchical namespace. 
> This can be mitigated by listing only a specific subset of directories using 
> a function like 
> [https://hadoop.apache.org/docs/r3.3.4/api/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.html#listStatus-org.apache.hadoop.fs.Path-java.lang.String-org.apache.hadoop.fs.azurebfs.utils.TracingContext-]
> which accepts a `startFrom` argument and lists all files in order starting 
> from there.
> I'm wondering if we could add a method to the `AzureBlobFileSystem`.
> Something like:
> ```
> public FileStatus[] listStatus(final Path f, final String startFrom) throws 
> IOException
> ```
> This exposes the functionality that already exists on the underlying 
> `AzureBlobFileSystemStore`. My understanding from reading a bit of the code 
> is that users should mainly be dealing with `AzureBlobFileSystem`s, and 
> `AzureBlobFileSystem` seems easier to use to me, hence the benefit of exposing 
> it on the `AzureBlobFileSystem`.
>  
> I'm very unfamiliar with Java, but I'm told that keeping strictly to 
> interfaces is strongly preferred. However, I can see some examples already on 
> `AzureBlobFileSystem` that do not belong to any interface (e.g. `breakLease`), 
> so I'm hoping it's acceptable to add a method like I described only for the 
> one `FileSystem` implementation.
>  
> The specific motivation for this is to unblock 
> [https://github.com/delta-io/delta/issues/1568]
> I would be willing to contribute this if maintainers think the plan is 
> reasonable. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on pull request #5315: HADOOP-18206 Cleanup the commons-logging references and restrict its usage in future

2023-01-19 Thread GitBox


virajjasani commented on PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#issuecomment-1397385744

   bummer, we can't completely get rid of it as commons-configuration needs it, 
so we will have to keep the version in our classpath but we won't use it in the 
codebase.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18073) Upgrade AWS SDK to v2

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678825#comment-17678825
 ] 

ASF GitHub Bot commented on HADOOP-18073:
-

asfgit merged PR #5163:
URL: https://github.com/apache/hadoop/pull/5163




> Upgrade AWS SDK to v2
> -
>
> Key: HADOOP-18073
> URL: https://issues.apache.org/jira/browse/HADOOP-18073
> Project: Hadoop Common
>  Issue Type: Task
>  Components: auth, fs/s3
>Affects Versions: 3.3.1
>Reporter: xiaowei sun
>Assignee: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
> Attachments: Upgrading S3A to SDKV2.pdf
>
>
> This task tracks upgrading Hadoop's AWS connector S3A from AWS SDK for Java 
> V1 to AWS SDK for Java V2.
> Original use case:
> {quote}We would like to access s3 with AWS SSO, which is supported in 
> software.amazon.awssdk:sdk-core:2.*.
> In particular, from 
> [https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html],
>  when setting 'fs.s3a.aws.credentials.provider', it must be 
> "com.amazonaws.auth.AWSCredentialsProvider". We would like to support 
> "software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider" which 
> supports AWS SSO, so users only need to authenticate once.
> {quote}
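
For context, a hedged sketch of the configuration being discussed; whether 
this exact SDK v2 provider class is accepted depends on the outcome of the 
upgrade, so treat the value below as an assumption rather than a supported 
setting:

```
import org.apache.hadoop.conf.Configuration;

public final class S3aCredentialProviderSketch {
  private S3aCredentialProviderSketch() {
  }

  /**
   * Before the upgrade the value must implement the SDK v1
   * com.amazonaws.auth.AWSCredentialsProvider interface; the request is to
   * also accept SDK v2 providers such as ProfileCredentialsProvider.
   */
  public static Configuration withProfileCredentials(Configuration conf) {
    conf.set("fs.s3a.aws.credentials.provider",
        "software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider");
    return conf;
  }
}
```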



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18073) Upgrade AWS SDK to v2

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678826#comment-17678826
 ] 

ASF GitHub Bot commented on HADOOP-18073:
-

steveloughran commented on PR #5163:
URL: https://github.com/apache/hadoop/pull/5163#issuecomment-1397372151

   merged!




> Upgrade AWS SDK to v2
> -
>
> Key: HADOOP-18073
> URL: https://issues.apache.org/jira/browse/HADOOP-18073
> Project: Hadoop Common
>  Issue Type: Task
>  Components: auth, fs/s3
>Affects Versions: 3.3.1
>Reporter: xiaowei sun
>Assignee: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
> Attachments: Upgrading S3A to SDKV2.pdf
>
>
> This task tracks upgrading Hadoop's AWS connector S3A from AWS SDK for Java 
> V1 to AWS SDK for Java V2.
> Original use case:
> {quote}We would like to access s3 with AWS SSO, which is supported in 
> software.amazon.awssdk:sdk-core:2.*.
> In particular, from 
> [https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html],
>  when setting 'fs.s3a.aws.credentials.provider', it must be 
> "com.amazonaws.auth.AWSCredentialsProvider". We would like to support 
> "software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider" which 
> supports AWS SSO, so users only need to authenticate once.
> {quote}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5163: HADOOP-18073. Upgrade AWS SDK to v2 in S3A [work in progress]

2023-01-19 Thread GitBox


steveloughran commented on PR #5163:
URL: https://github.com/apache/hadoop/pull/5163#issuecomment-1397372151

   merged!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] asfgit merged pull request #5163: HADOOP-18073. Upgrade AWS SDK to v2 in S3A [work in progress]

2023-01-19 Thread GitBox


asfgit merged PR #5163:
URL: https://github.com/apache/hadoop/pull/5163


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18206) Cleanup the commons-logging references in the code base

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678819#comment-17678819
 ] 

ASF GitHub Bot commented on HADOOP-18206:
-

hadoop-yetus commented on PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#issuecomment-1397350094

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 22 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 23s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  23m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  19m  1s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 21s | 
[/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/1/artifact/out/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   7m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 21s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   3m 51s | 
[/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/1/artifact/out/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html)
 |  hadoop-mapreduce-project/hadoop-mapreduce-client in trunk has 1 extant 
spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   0m 24s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |  31m 58s | 
[/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/1/artifact/out/branch-spotbugs-root-warnings.html)
 |  root in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  54m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 40s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  43m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | -1 :x: |  javac  |  22m 43s | 
[/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/1/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 2 new + 2815 unchanged - 4 
fixed = 2817 total (was 2819)  |
   | +1 :green_heart: |  compile  |  20m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  javac  |  20m 36s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 5 new + 2612 unchanged - 4 fixed 
= 2617 total (was 2616)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 42s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 8 new + 656 unchanged - 16 fixed = 664 total (was 
672)  |
   | +1 :green_heart: |  mvnsite 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5315: HADOOP-18206 Cleanup the commons-logging references and restrict its usage in future

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#issuecomment-1397350094

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 22 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 23s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 33s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  23m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  19m  1s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 21s | 
[/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/1/artifact/out/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   7m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 21s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   3m 51s | 
[/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/1/artifact/out/branch-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client-warnings.html)
 |  hadoop-mapreduce-project/hadoop-mapreduce-client in trunk has 1 extant 
spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   0m 24s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |  31m 58s | 
[/branch-spotbugs-root-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/1/artifact/out/branch-spotbugs-root-warnings.html)
 |  root in trunk has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  54m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 40s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  43m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | -1 :x: |  javac  |  22m 43s | 
[/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/1/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 2 new + 2815 unchanged - 4 
fixed = 2817 total (was 2819)  |
   | +1 :green_heart: |  compile  |  20m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  javac  |  20m 36s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 5 new + 2612 unchanged - 4 fixed 
= 2617 total (was 2616)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 42s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5315/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 8 new + 656 unchanged - 16 fixed = 664 total (was 
672)  |
   | +1 :green_heart: |  mvnsite  |  18m 51s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | -1 :x: |  javadoc  |   1m 11s | 

[jira] [Updated] (HADOOP-18599) Expose `listStatus(Path path, String startFrom)` on `AzureBlobFileSystem`

2023-01-19 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18599:

Priority: Major  (was: Minor)

> Expose `listStatus(Path path, String startFrom)` on `AzureBlobFileSystem`
> -
>
> Key: HADOOP-18599
> URL: https://issues.apache.org/jira/browse/HADOOP-18599
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.3.2, 3.3.4
>Reporter: Thomas Newton
>Priority: Major
>
> When working with Azure blob storage, listing operations can often be quite 
> slow, even on storage accounts with the hierarchical namespace. 
> This can be mitigated by listing only a specific subset of directories using 
> a function like 
> [https://hadoop.apache.org/docs/r3.3.4/api/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.html#listStatus-org.apache.hadoop.fs.Path-java.lang.String-org.apache.hadoop.fs.azurebfs.utils.TracingContext-]
> which accepts a `startFrom` argument and lists all files in order starting 
> from there.
> I'm wondering if we could add a method to the `AzureBlobFileSystem`.
> Something like:
> ```
> public FileStatus[] listStatus(final Path f, final String startFrom) throws 
> IOException
> ```
> This exposes the functionality that already exists on the underlying 
> `AzureBlobFileSystemStore`. My understanding from reading a bit of the code 
> is that users should mainly be dealing with `AzureBlobFileSystem`s, and 
> `AzureBlobFileSystem` seems easier to use to me, hence the benefit of exposing 
> it on the `AzureBlobFileSystem`.
>  
> I'm very unfamiliar with Java, but I'm told that keeping strictly to 
> interfaces is strongly preferred. However, I can see some examples already on 
> `AzureBlobFileSystem` that do not belong to any interface (e.g. `breakLease`), 
> so I'm hoping it's acceptable to add a method like I described only for the 
> one `FileSystem` implementation.
>  
> The specific motivation for this is to unblock 
> [https://github.com/delta-io/delta/issues/1568]
> I would be willing to contribute this if maintainers think the plan is 
> reasonable. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18599) Expose `listStatus(Path path, String startFrom)` on `AzureBlobFileSystem`

2023-01-19 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18599:

Issue Type: New Feature  (was: Improvement)

> Expose `listStatus(Path path, String startFrom)` on `AzureBlobFileSystem`
> -
>
> Key: HADOOP-18599
> URL: https://issues.apache.org/jira/browse/HADOOP-18599
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Affects Versions: 3.3.2, 3.3.4
>Reporter: Thomas Newton
>Priority: Minor
>
> When working with Azure blob storage, listing operations can often be quite 
> slow, even on storage accounts with the hierarchical namespace. 
> This can be mitigated by listing only a specific subset of directories using 
> a function like 
> [https://hadoop.apache.org/docs/r3.3.4/api/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.html#listStatus-org.apache.hadoop.fs.Path-java.lang.String-org.apache.hadoop.fs.azurebfs.utils.TracingContext-]
> which accepts a `startFrom` argument and lists all files in order starting 
> from there.
> I'm wondering if we could add a method to the `AzureBlobFileSystem`.
> Something like:
> ```
> public FileStatus[] listStatus(final Path f, final String startFrom) throws 
> IOException
> ```
> This exposes the functionality that already exists on the underlying 
> `AzureBlobFileSystemStore`. My understanding from reading a bit of the code 
> is that users should mainly be dealing with `AzureBlobFileSystem`s, and 
> `AzureBlobFileSystem` seems easier to use to me, hence the benefit of exposing 
> it on the `AzureBlobFileSystem`.
>  
> I'm very unfamiliar with Java, but I'm told that keeping strictly to 
> interfaces is strongly preferred. However, I can see some examples already on 
> `AzureBlobFileSystem` that do not belong to any interface (e.g. `breakLease`), 
> so I'm hoping it's acceptable to add a method like I described only for the 
> one `FileSystem` implementation.
>  
> The specific motivation for this is to unblock 
> [https://github.com/delta-io/delta/issues/1568]
> I would be willing to contribute this if maintainers think the plan is 
> reasonable. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18599) Expose `listStatus(Path path, String startFrom)` on `AzureBlobFileSystem`

2023-01-19 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18599?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678817#comment-17678817
 ] 

Steve Loughran commented on HADOOP-18599:
-

all public FS APIs need to go into hadoop-common with
* designs which can be implemented in other filesystems (e.g. s3)
* a strict specification to define that behaviour
* A set of contract tests derived from that specification to verify that all 
implementations match the spec
* ideally implementations for > 1 store to show it is flexible.
There is also an implicit commitment to maintain it indefinitely, which you 
would probably expect even for an abfs-only change.

If this scares you off, it is with good reason - it's really hard to get this 
stuff in. If one were to be added, it should be a builder API and return a 
RemoteIterator<>.

Now, before you start on that: why don't you use listStatusIterator() instead, 
*because the abfs and s3a implementations return the results a page at a time, while 
asynchronously prefetching the next page*? You only need to block for the first 
page of results and can then process it while the next one is retrieved for you.

Isn't that what you wanted?
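
For reference, a minimal sketch of the incremental listing described above, 
assuming a placeholder abfs path; `listStatusIterator()` is the existing 
FileSystem API:

```
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public final class IncrementalListingSketch {
  private IncrementalListingSketch() {
  }

  /** Process one page of results while the next page is prefetched. */
  public static void listIncrementally(Configuration conf) throws IOException {
    // Placeholder path: replace with a real container/account.
    Path dir = new Path("abfs://container@account.dfs.core.windows.net/data");
    FileSystem fs = dir.getFileSystem(conf);
    RemoteIterator<FileStatus> it = fs.listStatusIterator(dir);
    while (it.hasNext()) {
      FileStatus status = it.next(); // blocks only when a new page is needed
      System.out.println(status.getPath() + " " + status.getLen());
    }
  }
}
```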



> Expose `listStatus(Path path, String startFrom)` on `AzureBlobFileSystem`
> -
>
> Key: HADOOP-18599
> URL: https://issues.apache.org/jira/browse/HADOOP-18599
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.3.2, 3.3.4
>Reporter: Thomas Newton
>Priority: Minor
>
> When working with Azure blob storage, listing operations can often be quite 
> slow, even on storage accounts with the hierarchical namespace. 
> This can be mitigated by listing only a specific subset of directories using 
> a function like 
> [https://hadoop.apache.org/docs/r3.3.4/api/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.html#listStatus-org.apache.hadoop.fs.Path-java.lang.String-org.apache.hadoop.fs.azurebfs.utils.TracingContext-]
> which accepts a `startFrom` argument and lists all files in order starting 
> from there.
> I'm wondering if we could add a method to the `AzureBlobFileSystem`.
> Something like:
> ```
> public FileStatus[] listStatus(final Path f, final String startFrom) throws 
> IOException
> ```
> This exposes the functionality that already exists on the underlying 
> `AzureBlobFileSystemStore`. My understanding from reading a bit of the code 
> is that users should mainly be dealing with `AzureBlobFileSystem`s, and 
> `AzureBlobFileSystem` seems easier to use to me, hence the benefit of exposing 
> it on the `AzureBlobFileSystem`.
>  
> I'm very unfamiliar with Java, but I'm told that keeping strictly to 
> interfaces is strongly preferred. However, I can see some examples already on 
> `AzureBlobFileSystem` that do not belong to any interface (e.g. `breakLease`), 
> so I'm hoping it's acceptable to add a method like I described only for the 
> one `FileSystem` implementation.
>  
> The specific motivation for this is to unblock 
> [https://github.com/delta-io/delta/issues/1568]
> I would be willing to contribute this if maintainers think the plan is 
> reasonable. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18600) Hadoop 2.x should support s3a committers

2023-01-19 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678810#comment-17678810
 ] 

Steve Loughran commented on HADOOP-18600:
-

Finally, why can't you upgrade? Because if you want to use this with Spark 
(presumably), you need a recent version of Spark to match. And the 
binding code you need there is only built in spark-hadoop-cloud with the 
hadoop-3 profile, so it won't be there *unless you use a version of Spark built 
against hadoop-3.3+*.

You are free to backport the feature yourself and modify your (private) Spark 
build to match.

> Hadoop 2.x should support s3a committers
> 
>
> Key: HADOOP-18600
> URL: https://issues.apache.org/jira/browse/HADOOP-18600
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, fs/s3
>Affects Versions: 2.10.2
>Reporter: KaiXinXIaoLei
>Priority: Major
>
> I think the feature "Add S3A committers for zero-rename commits to S3 
> endpoints" (https://issues.apache.org/jira/browse/HADOOP-13786) should be 
> merged into hadoop 2.10.2. 
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-18600) Hadoop 2.x should support s3a committers

2023-01-19 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-18600.
-
Resolution: Won't Fix

> Hadoop 2.x should support s3a committers
> 
>
> Key: HADOOP-18600
> URL: https://issues.apache.org/jira/browse/HADOOP-18600
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, fs/s3
>Affects Versions: 2.10.2
>Reporter: KaiXinXIaoLei
>Priority: Major
>
> I think the feature "Add S3A committers for zero-rename commits to S3 
> endpoints" (https://issues.apache.org/jira/browse/HADOOP-13786) should be 
> merged into hadoop 2.10.2. 
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18600) Hadoop 2.x should support s3a committers

2023-01-19 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18600:

Summary: Hadoop 2.x should support s3a committers  (was: Hadoop 2.x should 
support s3a)

> Hadoop 2.x should support s3a committers
> 
>
> Key: HADOOP-18600
> URL: https://issues.apache.org/jira/browse/HADOOP-18600
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, fs/s3
>Affects Versions: 2.10.2
>Reporter: KaiXinXIaoLei
>Priority: Major
>
> I think the feature "Add S3A committers for zero-rename commits to S3 
> endpoints" (https://issues.apache.org/jira/browse/HADOOP-13786) should be 
> merged into hadoop 2.10.2. 
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18600) Hadoop 2.x should support s3a

2023-01-19 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678807#comment-17678807
 ] 

Steve Loughran commented on HADOOP-18600:
-

No. If you want s3a features written in the past five years, you get to upgrade to 
a modern version.

* the s3a code has evolved so much that even backporting stuff from 3.3.5 to 
3.2 is nearly impossible
* backporting a feature is declaring a commitment to maintain and support it

Finally, branch-2 is getting nothing but fixes for those critical CVEs which can be 
fixed; no new features.

> Hadoop 2.x should support s3a
> -
>
> Key: HADOOP-18600
> URL: https://issues.apache.org/jira/browse/HADOOP-18600
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, fs/s3
>Affects Versions: 2.10.2
>Reporter: KaiXinXIaoLei
>Priority: Major
>
> I think the feature "Add S3A committers for zero-rename commits to S3 
> endpoints" (https://issues.apache.org/jira/browse/HADOOP-13786) should be 
> merged into hadoop 2.10.2. 
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] K0K0V0K commented on pull request #5311: MAPREDUCE-7431. ShuffleHandler refactor and fix after Netty4 upgrade.

2023-01-19 Thread GitBox


K0K0V0K commented on PR #5311:
URL: https://github.com/apache/hadoop/pull/5311#issuecomment-1397239024

   +1, non-binding


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18596) Distcp -update between different cloud stores to use modification time while checking for file skip.

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678783#comment-17678783
 ] 

ASF GitHub Bot commented on HADOOP-18596:
-

mehakmeet commented on code in PR #5308:
URL: https://github.com/apache/hadoop/pull/5308#discussion_r1081471243


##
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java:
##
@@ -354,7 +354,14 @@ private boolean canSkip(FileSystem sourceFS, 
CopyListingFileStatus source,
 boolean sameLength = target.getLen() == source.getLen();
 boolean sameBlockSize = source.getBlockSize() == target.getBlockSize()
 || !preserve.contains(FileAttribute.BLOCKSIZE);
-if (sameLength && sameBlockSize) {
+// checksum check to be done if same file len(greater than 0), same block
+// size and the target file has been updated more recently than the source
+// file.
+// Note: For Different cloud stores with different checksum algorithms,
+// checksum comparisons are not performed so we would be depending on the
+// file size and modification time.
+if (sameLength && (source.getLen() > 0) && sameBlockSize &&
+source.getModificationTime() < target.getModificationTime()) {

Review Comment:
   Ah, I actually had to add a check for whether the file size is 0, to skip it every 
time before this check; I forgot to add it in this version locally. Good catch. 




> Distcp -update between different cloud stores to use modification time while 
> checking for file skip.
> 
>
> Key: HADOOP-18596
> URL: https://issues.apache.org/jira/browse/HADOOP-18596
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>
> Distcp -update currently relies on file size, block size, and checksum 
> comparisons to figure out which files should be skipped or copied. 
> Since different cloud stores have different checksum algorithms, we should 
> add modification time to those checks as well.
> This would ensure that when performing -update, if the files are perceived to 
> be out of sync, we copy them. The machines between which the file 
> transfers occur should be in time sync to avoid any extra copies.
> Improve testing and documentation for modification time checks between 
> different object stores to ensure no incorrect skipping of files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet commented on a diff in pull request #5308: HADOOP-18596. Distcp -update to use modification time while checking for file skip.

2023-01-19 Thread GitBox


mehakmeet commented on code in PR #5308:
URL: https://github.com/apache/hadoop/pull/5308#discussion_r1081471243


##
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java:
##
@@ -354,7 +354,14 @@ private boolean canSkip(FileSystem sourceFS, 
CopyListingFileStatus source,
 boolean sameLength = target.getLen() == source.getLen();
 boolean sameBlockSize = source.getBlockSize() == target.getBlockSize()
 || !preserve.contains(FileAttribute.BLOCKSIZE);
-if (sameLength && sameBlockSize) {
+// checksum check to be done if same file len(greater than 0), same block
+// size and the target file has been updated more recently than the source
+// file.
+// Note: For Different cloud stores with different checksum algorithms,
+// checksum comparisons are not performed so we would be depending on the
+// file size and modification time.
+if (sameLength && (source.getLen() > 0) && sameBlockSize &&
+source.getModificationTime() < target.getModificationTime()) {

Review Comment:
   Ah, I actually had to add a check for whether the file size is 0, to skip it every 
time before this check; I forgot to add it in this version locally. Good catch. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18596) Distcp -update between different cloud stores to use modification time while checking for file skip.

2023-01-19 Thread Daniel Carl Jones (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678780#comment-17678780
 ] 

Daniel Carl Jones commented on HADOOP-18596:


{quote}What Mehakmeet proposes is possible, doesn't add any risk of reduced 
copy (only increased copies) and fairly easy to test.
{quote}
So long as we meet this, i.e. we only potentially cause more files to be 
included in the update, this change seems fine. Some users may find more 
files being copied than usual, but they are already exposed to the risk of 
newer same-length files not being copied when they should have been - will 
communicating this bug fix in the change notes be enough?
{quote}We should look out that there shouldn't be a massive difference between 
the clocks so that the updation of the source files from one version to another 
should be more recent than the previous version being synced to cloud storage 
for example.
{quote}
Related to this: is there any way we can have DistCp abort the copy if it detects 
that the source and destination clocks have drifted beyond some acceptable 
threshold? Perhaps a separate Jira if it is a feasible check to add.
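
A rough sketch of what such a check might look like (purely illustrative, not 
an existing DistCp option): write a marker object to each store and compare 
its modification time against the local clock; the probe path is an assumption.

```
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class ClockDriftProbeSketch {
  private ClockDriftProbeSketch() {
  }

  /** Absolute difference between the store's timestamp for "now" and ours. */
  public static long driftMillis(FileSystem fs, Path probeDir) throws IOException {
    Path probe = new Path(probeDir, ".distcp-clock-probe");
    fs.create(probe, true).close();                // the store stamps the mtime
    long remote = fs.getFileStatus(probe).getModificationTime();
    long local = System.currentTimeMillis();
    fs.delete(probe, false);
    return Math.abs(local - remote);
  }
}
```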

> Distcp -update between different cloud stores to use modification time while 
> checking for file skip.
> 
>
> Key: HADOOP-18596
> URL: https://issues.apache.org/jira/browse/HADOOP-18596
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>
> Distcp -update currently relies on file size, block size, and checksum 
> comparisons to figure out which files should be skipped or copied. 
> Since different cloud stores have different checksum algorithms, we should 
> add modification time to those checks as well.
> This would ensure that when performing -update, if the files are perceived to 
> be out of sync, we copy them. The machines between which the file 
> transfers occur should be in time sync to avoid any extra copies.
> Improve testing and documentation for modification time checks between 
> different object stores to ensure no incorrect skipping of files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18592) Sasl connection failure should log remote address

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678779#comment-17678779
 ] 

ASF GitHub Bot commented on HADOOP-18592:
-

virajjasani commented on PR #5294:
URL: https://github.com/apache/hadoop/pull/5294#issuecomment-1397191522

   @steveloughran, updated the PR based on your latest review.




> Sasl connection failure should log remote address
> -
>
> Key: HADOOP-18592
> URL: https://issues.apache.org/jira/browse/HADOOP-18592
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.4
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> If a Sasl connection fails with some generic error, we miss logging the remote 
> server that the client was trying to connect to.
> Sample log:
> {code:java}
> 2023-01-12 00:22:28,148 WARN  [20%2C1673404849949,1] ipc.Client - Exception 
> encountered while connecting to the server 
> java.io.IOException: Connection reset by peer
>     at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
>     at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
>     at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
>     at sun.nio.ch.IOUtil.read(IOUtil.java:197)
>     at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
>     at 
> org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
>     at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:141)
>     at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>     at 
> org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>     at java.io.FilterInputStream.read(FilterInputStream.java:133)
>     at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>     at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
>     at java.io.DataInputStream.readInt(DataInputStream.java:387)
>     at org.apache.hadoop.ipc.Client$IpcStreams.readResponse(Client.java:1950)
>     at 
> org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:367)
>     at 
> org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:623)
>     at org.apache.hadoop.ipc.Client$Connection.access$2300(Client.java:414)
> ...
> ... {code}
> We should log the remote server address.
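
For illustration only (this is not the actual patch in PR #5294), the kind of 
log statement the issue asks for, with the remote address included; 'server' 
stands in for the remote address held by the IPC client connection:

```
import java.io.IOException;
import java.net.InetSocketAddress;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class SaslFailureLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(SaslFailureLoggingSketch.class);

  private SaslFailureLoggingSketch() {
  }

  /** Log the failure together with the server the client was connecting to. */
  public static void logFailure(InetSocketAddress server, IOException ioe) {
    LOG.warn("Exception encountered while connecting to the server {}", server, ioe);
  }
}
```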



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on pull request #5294: HADOOP-18592 Sasl connection failure should log remote address

2023-01-19 Thread GitBox


virajjasani commented on PR #5294:
URL: https://github.com/apache/hadoop/pull/5294#issuecomment-1397191522

   @steveloughran, updated the PR based on your latest review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18596) Distcp -update between different cloud stores to use modification time while checking for file skip.

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678768#comment-17678768
 ] 

ASF GitHub Bot commented on HADOOP-18596:
-

dannycjones commented on code in PR #5308:
URL: https://github.com/apache/hadoop/pull/5308#discussion_r1081452359


##
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java:
##
@@ -354,7 +354,14 @@ private boolean canSkip(FileSystem sourceFS, 
CopyListingFileStatus source,
 boolean sameLength = target.getLen() == source.getLen();
 boolean sameBlockSize = source.getBlockSize() == target.getBlockSize()
 || !preserve.contains(FileAttribute.BLOCKSIZE);
-if (sameLength && sameBlockSize) {
+// checksum check to be done if same file len(greater than 0), same block
+// size and the target file has been updated more recently than the source
+// file.
+// Note: For Different cloud stores with different checksum algorithms,
+// checksum comparisons are not performed so we would be depending on the
+// file size and modification time.
+if (sameLength && (source.getLen() > 0) && sameBlockSize &&
+source.getModificationTime() < target.getModificationTime()) {

Review Comment:
   Why the addition of the `getLen() > 0` check? Do we want to always copy if it's an 
empty file?





> Distcp -update between different cloud stores to use modification time while 
> checking for file skip.
> 
>
> Key: HADOOP-18596
> URL: https://issues.apache.org/jira/browse/HADOOP-18596
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>
> Distcp -update currently relies on file size, block size, and checksum 
> comparisons to figure out which files should be skipped or copied. 
> Since different cloud stores have different checksum algorithms, we should 
> add modification time to those checks as well.
> This would ensure that when performing -update, if the files are perceived to 
> be out of sync, we copy them. The machines between which the file 
> transfers occur should be in time sync to avoid any extra copies.
> Improve testing and documentation for modification time checks between 
> different object stores to ensure no incorrect skipping of files.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dannycjones commented on a diff in pull request #5308: HADOOP-18596. Distcp -update to use modification time while checking for file skip.

2023-01-19 Thread GitBox


dannycjones commented on code in PR #5308:
URL: https://github.com/apache/hadoop/pull/5308#discussion_r1081452359


##
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyMapper.java:
##
@@ -354,7 +354,14 @@ private boolean canSkip(FileSystem sourceFS, 
CopyListingFileStatus source,
 boolean sameLength = target.getLen() == source.getLen();
 boolean sameBlockSize = source.getBlockSize() == target.getBlockSize()
 || !preserve.contains(FileAttribute.BLOCKSIZE);
-if (sameLength && sameBlockSize) {
+// checksum check to be done if same file len(greater than 0), same block
+// size and the target file has been updated more recently than the source
+// file.
+// Note: For Different cloud stores with different checksum algorithms,
+// checksum comparisons are not performed so we would be depending on the
+// file size and modification time.
+if (sameLength && (source.getLen() > 0) && sameBlockSize &&
+source.getModificationTime() < target.getModificationTime()) {

Review Comment:
   Why the addition of the `getLen() > 0` check? Do we want to always copy if it's an 
empty file?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5317: YARN-11420 Stabilize TestNMClient

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5317:
URL: https://github.com/apache/hadoop/pull/5317#issuecomment-1397143914

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 40s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/2/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-client in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  javac  |   0m 25s | 
[/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/2/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08
 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 1 new + 78 
unchanged - 2 fixed = 79 total (was 80)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt)
 |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 4 
new + 1 unchanged - 11 fixed = 5 total (was 12)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 21s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/2/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-client in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  28m 33s |  |  hadoop-yarn-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 133m  9s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5317 |
   | Optional Tests | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5317: YARN-11420 Stabilize TestNMClient

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5317:
URL: https://github.com/apache/hadoop/pull/5317#issuecomment-1397143337

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 38s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/4/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-client in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  javac  |   0m 26s | 
[/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/4/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08
 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 2 new + 74 
unchanged - 3 fixed = 76 total (was 77)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/4/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt)
 |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 4 
new + 1 unchanged - 11 fixed = 5 total (was 12)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/4/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-client in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  28m  7s |  |  hadoop-yarn-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 130m 17s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5317 |
   | Optional Tests | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5317: YARN-11420 Stabilize TestNMClient

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5317:
URL: https://github.com/apache/hadoop/pull/5317#issuecomment-1397140540

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 25s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 16s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 38s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/3/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-client in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08
 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 77 
unchanged - 2 fixed = 77 total (was 79)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/3/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt)
 |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 4 
new + 1 unchanged - 11 fixed = 5 total (was 12)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 21s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/3/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-client in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  28m 11s |  |  hadoop-yarn-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 130m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5317 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 87e5918061d6 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5318: YARN-11412 Concurrent user management

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5318:
URL: https://github.com/apache/hadoop/pull/5318#issuecomment-1397089189

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 10 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  3s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 54s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5318/2/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m 41s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  27m 58s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 43s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5318/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 123 new + 602 unchanged - 48 fixed = 725 total (was 650)  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 39s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5318/2/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | -1 :x: |  javadoc  |   0m 35s | 
[/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5318/2/artifact/out/results-javadoc-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08
 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 6 new + 343 
unchanged - 0 fixed = 349 total (was 343)  |
   | +1 :green_heart: |  spotbugs  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
 

[jira] [Commented] (HADOOP-18206) Cleanup the commons-logging references in the code base

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678738#comment-17678738
 ] 

ASF GitHub Bot commented on HADOOP-18206:
-

steveloughran commented on code in PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#discussion_r1081327340


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java:
##
@@ -340,22 +337,14 @@ public void doGet(HttpServletRequest request, 
HttpServletResponse response
 out.println(MARKER
 + "Submitted Class Name: " + logName + "");
 
-Log log = LogFactory.getLog(logName);
+Logger log = Logger.getLogger(logName);
 out.println(MARKER
 + "Log Class: " + log.getClass().getName() +"");
 if (level != null) {
   out.println(MARKER + "Submitted Level: " + level + "");
 }
 
-if (log instanceof Log4JLogger) {
-  process(((Log4JLogger)log).getLogger(), level, out);
-}
-else if (log instanceof Jdk14Logger) {
-  process(((Jdk14Logger)log).getLogger(), level, out);
-}
-else {
-  out.println("Sorry, " + log.getClass() + " not supported.");
-}
+process(log, level, out);

Review Comment:
   still need to handle situations (logback..) where the logger is 
unknown/unsupported, right?
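   
   For illustration only, a guard along these lines could keep the old "not supported" 
   behaviour; the binding check below is an assumption about how one might detect a 
   non-log4j backend, not something proposed in the PR:
   ```
   // Sketch only: check which backend slf4j is bound to before assuming log4j.
   org.slf4j.ILoggerFactory binding = org.slf4j.LoggerFactory.getILoggerFactory();
   if (binding.getClass().getName().contains("Log4j")) {
     // slf4j appears to be backed by log4j, so adjusting the log4j level takes effect
     process(org.apache.log4j.Logger.getLogger(logName), level, out);
   } else {
     // logback or another backend: the log4j call would silently do nothing
     out.println(MARKER + "Sorry, logging backend "
         + binding.getClass().getName() + " is not supported.");
   }
   ```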



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java:
##
@@ -17,61 +17,40 @@
  */
 package org.apache.hadoop.util;
 
-import org.apache.commons.logging.Log;
 import org.slf4j.Logger;
 
 class LogAdapter {

Review Comment:
   this class is obsolete. tag as deprecated and see if all uses can be removed.



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java:
##
@@ -17,61 +17,40 @@
  */
 package org.apache.hadoop.util;
 
-import org.apache.commons.logging.Log;
 import org.slf4j.Logger;
 
 class LogAdapter {
-  private Log LOG;
-  private Logger LOGGER;
 
-  private LogAdapter(Log LOG) {
-this.LOG = LOG;
-  }
+  private final Logger LOGGER;

Review Comment:
   given it's not final, the case is wrong. 
   
   also, it's never going to be null, so the checks can be cut down below
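   
   A rough sketch of that suggestion (illustrative, not the PR's code):
   ```
   class LogAdapter {
     // instance field rather than a constant, hence camelCase instead of LOGGER
     private final Logger logger;
   
     private LogAdapter(Logger logger) {
       this.logger = logger;  // never null, so the null checks below can go
     }
   }
   ```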



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:
##
@@ -318,9 +316,9 @@ public class DataNode extends ReconfigurableBase
 ", srvID: %s" +  // DatanodeRegistration
 ", blockid: %s" + // block id
 ", duration(ns): %s";  // duration time
-
-  static final Log ClientTraceLog =
-LogFactory.getLog(DataNode.class.getName() + ".clienttrace");
+
+  static final Logger ClientTraceLog =

Review Comment:
   this should really be CLIENT_TRACE_LOG, shouldn't it?
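   
   A sketch of that rename, assuming the slf4j LoggerFactory already used elsewhere 
   in DataNode:
   ```
   static final Logger CLIENT_TRACE_LOG =
       LoggerFactory.getLogger(DataNode.class.getName() + ".clienttrace");
   ```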



##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestJarFinder.java:
##
@@ -39,14 +38,6 @@
 
 public class TestJarFinder {
 
-  @Test

Review Comment:
   test should be replaced with another class we know is there, maybe one of the 
JUnit ones
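   
   One possible shape for the replacement test, assuming JarFinder.getJar(Class) is 
   the method under test and that the JUnit jar is on the test classpath:
   ```
   @Test
   public void testExistingJar() {
     // org.junit.Test should be available from a jar on the test classpath
     String jar = JarFinder.getJar(org.junit.Test.class);
     Assert.assertNotNull("expected to locate the junit jar", jar);
   }
   ```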



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java:
##
@@ -256,7 +255,7 @@ public static void skipFully(InputStream in, long len) 
throws IOException {
* instead
*/
   @Deprecated
-  public static void cleanup(Log log, java.io.Closeable... closeables) {
+  public static void cleanup(Logger log, java.io.Closeable... closeables) {

Review Comment:
   I'd prefer the method to be completely cut and moved to cleanupWithLogger; they 
are now the same method, so you can just invoke that from this and then use the 
IDE to remove the method
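   
   For illustration, the delegation could be as small as this sketch (assuming the 
   existing cleanupWithLogger(Logger, Closeable...) in IOUtils):
   ```
   @Deprecated
   public static void cleanup(Logger log, java.io.Closeable... closeables) {
     // same behaviour as before; callers should move to cleanupWithLogger directly
     cleanupWithLogger(log, closeables);
   }
   ```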





> Cleanup the commons-logging references in the code base
> ---
>
> Key: HADOOP-18206
> URL: https://issues.apache.org/jira/browse/HADOOP-18206
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Should always use slf4j



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a diff in pull request #5315: HADOOP-18206 Cleanup the commons-logging references and restrict its usage in future

2023-01-19 Thread GitBox


steveloughran commented on code in PR #5315:
URL: https://github.com/apache/hadoop/pull/5315#discussion_r1081327340


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/LogLevel.java:
##
@@ -340,22 +337,14 @@ public void doGet(HttpServletRequest request, 
HttpServletResponse response
 out.println(MARKER
 + "Submitted Class Name: " + logName + "");
 
-Log log = LogFactory.getLog(logName);
+Logger log = Logger.getLogger(logName);
 out.println(MARKER
 + "Log Class: " + log.getClass().getName() +"");
 if (level != null) {
   out.println(MARKER + "Submitted Level: " + level + "");
 }
 
-if (log instanceof Log4JLogger) {
-  process(((Log4JLogger)log).getLogger(), level, out);
-}
-else if (log instanceof Jdk14Logger) {
-  process(((Jdk14Logger)log).getLogger(), level, out);
-}
-else {
-  out.println("Sorry, " + log.getClass() + " not supported.");
-}
+process(log, level, out);

Review Comment:
   still need to handle situations (logback..) where the logger is 
unknown/unsupported, right?



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java:
##
@@ -17,61 +17,40 @@
  */
 package org.apache.hadoop.util;
 
-import org.apache.commons.logging.Log;
 import org.slf4j.Logger;
 
 class LogAdapter {

Review Comment:
   this class is obsolete. tag as deprecated and see if all uses can be removed.



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/LogAdapter.java:
##
@@ -17,61 +17,40 @@
  */
 package org.apache.hadoop.util;
 
-import org.apache.commons.logging.Log;
 import org.slf4j.Logger;
 
 class LogAdapter {
-  private Log LOG;
-  private Logger LOGGER;
 
-  private LogAdapter(Log LOG) {
-this.LOG = LOG;
-  }
+  private final Logger LOGGER;

Review Comment:
   given it's not final, the case is wrong. 
   
   also, it's never going to be null, so the checks can be cut down below



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:
##
@@ -318,9 +316,9 @@ public class DataNode extends ReconfigurableBase
 ", srvID: %s" +  // DatanodeRegistration
 ", blockid: %s" + // block id
 ", duration(ns): %s";  // duration time
-
-  static final Log ClientTraceLog =
-LogFactory.getLog(DataNode.class.getName() + ".clienttrace");
+
+  static final Logger ClientTraceLog =

Review Comment:
   this should really be CLIENT_TRACE_LOG, shouldn't it?



##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestJarFinder.java:
##
@@ -39,14 +38,6 @@
 
 public class TestJarFinder {
 
-  @Test

Review Comment:
   test should be replaced with another class we know is there, maybe one of the 
JUnit ones



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/IOUtils.java:
##
@@ -256,7 +255,7 @@ public static void skipFully(InputStream in, long len) 
throws IOException {
* instead
*/
   @Deprecated
-  public static void cleanup(Log log, java.io.Closeable... closeables) {
+  public static void cleanup(Logger log, java.io.Closeable... closeables) {

Review Comment:
   I'd prefer the method to be completely cut and moved to cleanupWithLogger; they 
are now the same method, so you can just invoke that from this and then use the 
IDE to remove the method



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5318: YARN-11412 Concurrent user management

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5318:
URL: https://github.com/apache/hadoop/pull/5318#issuecomment-1397073307

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 10 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 22s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  2s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 54s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5318/1/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  28m 17s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 43s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5318/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 123 new + 602 unchanged - 48 fixed = 725 total (was 650)  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 44s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5318/1/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | -1 :x: |  javadoc  |   0m 39s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5318/1/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08.  |
   | +1 :green_heart: |  spotbugs  |   2m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 107m 48s | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5316: Yarn 11420

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5316:
URL: https://github.com/apache/hadoop/pull/5316#issuecomment-1397008425

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 21s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  34m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   8m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 54s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 44s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5316/1/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-client in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   9m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  javac  |   8m 22s | 
[/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5316/1/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  
hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 
with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 2 new + 635 
unchanged - 4 fixed = 637 total (was 639)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 38s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5316/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 11 new + 1 unchanged - 
14 fixed = 12 total (was 15)  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 39s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5316/1/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-client in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   5m 45s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  28m 41s |  |  hadoop-yarn-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 194m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 

[GitHub] [hadoop] tomicooler commented on a diff in pull request #5311: MAPREDUCE-7431. ShuffleHandler refactor and fix after Netty4 upgrade.

2023-01-19 Thread GitBox


tomicooler commented on code in PR #5311:
URL: https://github.com/apache/hadoop/pull/5311#discussion_r1081262415


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleChannelHandler.java:
##
@@ -0,0 +1,715 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.mapred;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.Unpooled;
+import io.netty.channel.Channel;
+import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelFutureListener;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelPipeline;
+import io.netty.channel.SimpleChannelInboundHandler;
+import io.netty.handler.codec.TooLongFrameException;
+import io.netty.handler.codec.http.DefaultFullHttpResponse;
+import io.netty.handler.codec.http.DefaultHttpResponse;
+import io.netty.handler.codec.http.FullHttpRequest;
+import io.netty.handler.codec.http.FullHttpResponse;
+import io.netty.handler.codec.http.HttpRequest;
+import io.netty.handler.codec.http.HttpResponse;
+import io.netty.handler.codec.http.HttpResponseStatus;
+import io.netty.handler.codec.http.HttpUtil;
+import io.netty.handler.codec.http.LastHttpContent;
+import io.netty.handler.codec.http.QueryStringDecoder;
+import io.netty.handler.ssl.SslHandler;
+import io.netty.util.CharsetUtil;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.RandomAccessFile;
+import java.net.URL;
+import java.nio.channels.ClosedChannelException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import javax.crypto.SecretKey;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.DataOutputBuffer;
+import org.apache.hadoop.io.SecureIOUtils;
+import org.apache.hadoop.mapreduce.security.SecureShuffleUtils;
+import org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader;
+import org.apache.hadoop.thirdparty.com.google.common.base.Charsets;
+import org.eclipse.jetty.http.HttpHeader;
+
+import static io.netty.buffer.Unpooled.wrappedBuffer;
+import static io.netty.handler.codec.http.HttpHeaderNames.CONTENT_TYPE;
+import static io.netty.handler.codec.http.HttpMethod.GET;
+import static io.netty.handler.codec.http.HttpResponseStatus.BAD_REQUEST;
+import static io.netty.handler.codec.http.HttpResponseStatus.FORBIDDEN;
+import static 
io.netty.handler.codec.http.HttpResponseStatus.INTERNAL_SERVER_ERROR;
+import static 
io.netty.handler.codec.http.HttpResponseStatus.METHOD_NOT_ALLOWED;
+import static io.netty.handler.codec.http.HttpResponseStatus.OK;
+import static io.netty.handler.codec.http.HttpResponseStatus.UNAUTHORIZED;
+import static io.netty.handler.codec.http.HttpVersion.HTTP_1_1;
+import static org.apache.hadoop.mapred.ShuffleHandler.AUDITLOG;
+import static org.apache.hadoop.mapred.ShuffleHandler.CONNECTION_CLOSE;
+import static org.apache.hadoop.mapred.ShuffleHandler.FETCH_RETRY_DELAY;
+import static org.apache.hadoop.mapred.ShuffleHandler.IGNORABLE_ERROR_MESSAGE;
+import static org.apache.hadoop.mapred.ShuffleHandler.RETRY_AFTER_HEADER;
+import static org.apache.hadoop.mapred.ShuffleHandler.TIMEOUT_HANDLER;
+import static org.apache.hadoop.mapred.ShuffleHandler.TOO_MANY_REQ_STATUS;
+import static org.apache.hadoop.mapred.ShuffleHandler.LOG;
+
+/**
+ * ShuffleChannelHandler verifies the map request then serves the attempts in 
an HTTP stream.
+ * Before each attempt a serialised ShuffleHeader object is written with the 
details.
+ *
+ * 
+ * Example Request
+ * ===
+ * GET /mapOutput?job=job_1_0001reduce=0
+ * map=attempt_1_0001_m_01_0,
+ * attempt_1_0002_m_02_0,
+ * attempt_1_0003_m_03_0 HTTP/1.1
+ * name: mapreduce
+ * version: 1.0.0
+ * UrlHash: 9zS++qE0/7/D2l1Rg0TqRoSguAk=
+ *
+ * Example Response
+ * ===
+ * HTTP/1.1 200 OK
+ * ReplyHash: GcuojWkAxXUyhZHPnwoV/MW2tGA=
+ * name: mapreduce
+ * version: 1.0.0
+ * connection: close
+ * content-length: 138
+ *
+ * 

[jira] [Updated] (HADOOP-18600) Hadoop 2.x should support s3a

2023-01-19 Thread Daniel Carl Jones (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Carl Jones updated HADOOP-18600:
---
Component/s: fs/s3

> Hadoop 2.x should support s3a
> -
>
> Key: HADOOP-18600
> URL: https://issues.apache.org/jira/browse/HADOOP-18600
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common, fs/s3
>Affects Versions: 2.10.2
>Reporter: KaiXinXIaoLei
>Priority: Major
>
> I think the feature "Add S3A committers for zero-rename commits to S3 
> endpoints" (https://issues.apache.org/jira/browse/HADOOP-13786) should be 
> merged into Hadoop 2.10.2. 
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5317: YARN-11420 Stabilize TestNMClient

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5317:
URL: https://github.com/apache/hadoop/pull/5317#issuecomment-1396932613

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 41s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/1/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-client in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08
 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 78 
unchanged - 2 fixed = 78 total (was 80)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt)
 |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client: The patch generated 6 
new + 1 unchanged - 11 fixed = 7 total (was 12)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 23s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/1/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-client in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 58s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  28m 32s |  |  hadoop-yarn-client in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 129m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5317/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5317 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5e50c317897e 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh 

[GitHub] [hadoop] szilard-nemeth commented on a diff in pull request #5295: YARN-11404. Add junit5 dependency to hadoop-mapreduce-client-app to fix few unit test failure

2023-01-19 Thread GitBox


szilard-nemeth commented on code in PR #5295:
URL: https://github.com/apache/hadoop/pull/5295#discussion_r1081217771


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java:
##
@@ -257,12 +260,6 @@ public void testGetMapCompletionEvents() throws 
IOException {
 createTce(3, false, TaskAttemptCompletionEventStatus.FAILED) };
 TaskAttemptCompletionEvent[] mapEvents = { taskEvents[0], taskEvents[2] };
 Job mockJob = mock(Job.class);
-when(mockJob.getTaskAttemptCompletionEvents(0, 100))
-  .thenReturn(taskEvents);
-when(mockJob.getTaskAttemptCompletionEvents(0, 2))
-  .thenReturn(Arrays.copyOfRange(taskEvents, 0, 2));
-when(mockJob.getTaskAttemptCompletionEvents(2, 100))
-  .thenReturn(Arrays.copyOfRange(taskEvents, 2, 4));

Review Comment:
   sure, this can work :) 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18600) Hadoop 2.x should support s3a

2023-01-19 Thread KaiXinXIaoLei (Jira)
KaiXinXIaoLei created HADOOP-18600:
--

 Summary: Hadoop 2.x should support s3a
 Key: HADOOP-18600
 URL: https://issues.apache.org/jira/browse/HADOOP-18600
 Project: Hadoop Common
  Issue Type: New Feature
  Components: common
Affects Versions: 2.10.2
Reporter: KaiXinXIaoLei


I think the feature "Add S3A committers for zero-rename commits to S3 
endpoints" (https://issues.apache.org/jira/browse/HADOOP-13786) should be 
merged into Hadoop 2.10.2. 

 

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17912) ABFS: Support for Encryption Context

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678670#comment-17678670
 ] 

ASF GitHub Bot commented on HADOOP-17912:
-

pranavsaxena-microsoft commented on PR #3440:
URL: https://github.com/apache/hadoop/pull/3440#issuecomment-1396908517

   @mukund-thakur , requesting you to kindly review the PR. Thanks.




> ABFS: Support for Encryption Context
> 
>
> Key: HADOOP-17912
> URL: https://issues.apache.org/jira/browse/HADOOP-17912
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Support for customer-provided encryption keys at the file level, superceding 
> the global (account-level) key use in HADOOP-17536.
> ABFS driver will support an "EncryptionContext" plugin for retrieving 
> encryption information, the implementation for which should be provided by 
> the client. The keys/context retrieved will be sent via request headers to 
> the server, which will store the encryption context. Subsequent REST calls to 
> server that access data/user metadata of the file will require fetching the 
> encryption context through a GetFileProperties call and retrieving the key 
> from the custom provider, before sending the request.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] pranavsaxena-microsoft commented on pull request #3440: HADOOP-17912. ABFS: Support for Encryption Context

2023-01-19 Thread GitBox


pranavsaxena-microsoft commented on PR #3440:
URL: https://github.com/apache/hadoop/pull/3440#issuecomment-1396908517

   @mukund-thakur , requesting you to kindly review the PR. Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5295: YARN-11404. Add junit5 dependency to hadoop-mapreduce-client-app to fix few unit test failure

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5295:
URL: https://github.com/apache/hadoop/pull/5295#issuecomment-1396908472

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 51 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 27s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 37s | 
[/branch-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5295/3/artifact/out/branch-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-mapreduce-client-app in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 23s | 
[/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5295/3/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app.txt)
 |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app: 
The patch generated 25 new + 871 unchanged - 78 fixed = 896 total (was 949)  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 22s | 
[/patch-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5295/3/artifact/out/patch-javadoc-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-app-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-mapreduce-client-app in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m  2s |  |  hadoop-mapreduce-client-app in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 122m  5s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5295/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5295 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 5b49813e1aae 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 

[GitHub] [hadoop] K0K0V0K commented on a diff in pull request #5311: MAPREDUCE-7431. ShuffleHandler refactor and fix after Netty4 upgrade.

2023-01-19 Thread GitBox


K0K0V0K commented on code in PR #5311:
URL: https://github.com/apache/hadoop/pull/5311#discussion_r1081143805


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/ShuffleChannelHandler.java:
##
@@ -0,0 +1,715 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.mapred;
+
+import io.netty.buffer.ByteBuf;
+import io.netty.buffer.Unpooled;
+import io.netty.channel.Channel;
+import io.netty.channel.ChannelFuture;
+import io.netty.channel.ChannelFutureListener;
+import io.netty.channel.ChannelHandlerContext;
+import io.netty.channel.ChannelPipeline;
+import io.netty.channel.SimpleChannelInboundHandler;
+import io.netty.handler.codec.TooLongFrameException;
+import io.netty.handler.codec.http.DefaultFullHttpResponse;
+import io.netty.handler.codec.http.DefaultHttpResponse;
+import io.netty.handler.codec.http.FullHttpRequest;
+import io.netty.handler.codec.http.FullHttpResponse;
+import io.netty.handler.codec.http.HttpRequest;
+import io.netty.handler.codec.http.HttpResponse;
+import io.netty.handler.codec.http.HttpResponseStatus;
+import io.netty.handler.codec.http.HttpUtil;
+import io.netty.handler.codec.http.LastHttpContent;
+import io.netty.handler.codec.http.QueryStringDecoder;
+import io.netty.handler.ssl.SslHandler;
+import io.netty.util.CharsetUtil;
+
+import java.io.File;
+import java.io.IOException;
+import java.io.RandomAccessFile;
+import java.net.URL;
+import java.nio.channels.ClosedChannelException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ExecutionException;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import javax.crypto.SecretKey;
+
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.DataOutputBuffer;
+import org.apache.hadoop.io.SecureIOUtils;
+import org.apache.hadoop.mapreduce.security.SecureShuffleUtils;
+import org.apache.hadoop.mapreduce.task.reduce.ShuffleHeader;
+import org.apache.hadoop.thirdparty.com.google.common.base.Charsets;
+import org.eclipse.jetty.http.HttpHeader;
+
+import static io.netty.buffer.Unpooled.wrappedBuffer;
+import static io.netty.handler.codec.http.HttpHeaderNames.CONTENT_TYPE;
+import static io.netty.handler.codec.http.HttpMethod.GET;
+import static io.netty.handler.codec.http.HttpResponseStatus.BAD_REQUEST;
+import static io.netty.handler.codec.http.HttpResponseStatus.FORBIDDEN;
+import static 
io.netty.handler.codec.http.HttpResponseStatus.INTERNAL_SERVER_ERROR;
+import static 
io.netty.handler.codec.http.HttpResponseStatus.METHOD_NOT_ALLOWED;
+import static io.netty.handler.codec.http.HttpResponseStatus.OK;
+import static io.netty.handler.codec.http.HttpResponseStatus.UNAUTHORIZED;
+import static io.netty.handler.codec.http.HttpVersion.HTTP_1_1;
+import static org.apache.hadoop.mapred.ShuffleHandler.AUDITLOG;
+import static org.apache.hadoop.mapred.ShuffleHandler.CONNECTION_CLOSE;
+import static org.apache.hadoop.mapred.ShuffleHandler.FETCH_RETRY_DELAY;
+import static org.apache.hadoop.mapred.ShuffleHandler.IGNORABLE_ERROR_MESSAGE;
+import static org.apache.hadoop.mapred.ShuffleHandler.RETRY_AFTER_HEADER;
+import static org.apache.hadoop.mapred.ShuffleHandler.TIMEOUT_HANDLER;
+import static org.apache.hadoop.mapred.ShuffleHandler.TOO_MANY_REQ_STATUS;
+import static org.apache.hadoop.mapred.ShuffleHandler.LOG;
+
+/**
+ * ShuffleChannelHandler verifies the map request then serves the attempts in 
an HTTP stream.
+ * Before each attempt a serialised ShuffleHeader object is written with the 
details.
+ *
+ * 
+ * Example Request
+ * ===
+ * GET /mapOutput?job=job_1_0001reduce=0
+ * map=attempt_1_0001_m_01_0,
+ * attempt_1_0002_m_02_0,
+ * attempt_1_0003_m_03_0 HTTP/1.1
+ * name: mapreduce
+ * version: 1.0.0
+ * UrlHash: 9zS++qE0/7/D2l1Rg0TqRoSguAk=
+ *
+ * Example Response
+ * ===
+ * HTTP/1.1 200 OK
+ * ReplyHash: GcuojWkAxXUyhZHPnwoV/MW2tGA=
+ * name: mapreduce
+ * version: 1.0.0
+ * connection: close
+ * content-length: 138
+ *
+ * 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5311: MAPREDUCE-7431. ShuffleHandler refactor and fix after Netty4 upgrade.

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5311:
URL: https://github.com/apache/hadoop/pull/5311#issuecomment-1396860917

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 13s | 
[/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5311/4/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt)
 |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle:
 The patch generated 1 new + 5 unchanged - 32 fixed = 6 total (was 37)  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  
hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04
 with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 0 new + 0 
unchanged - 3 fixed = 0 total (was 3)  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  
hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08
 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 0 
unchanged - 3 fixed = 0 total (was 3)  |
   | +1 :green_heart: |  spotbugs  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 36s |  |  hadoop-mapreduce-client-shuffle 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 110m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5311/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5311 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 2b5931453a0f 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 59a1f409101c47c76bb3078987a4cd41f4f68e63 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
  

[jira] [Commented] (HADOOP-17912) ABFS: Support for Encryption Context

2023-01-19 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678656#comment-17678656
 ] 

Steve Loughran commented on HADOOP-17912:
-

I'm on vacation from this afternoon until Feb. Can you ask [~mthakur] and [~mehakmeet]? Thanks.

> ABFS: Support for Encryption Context
> 
>
> Key: HADOOP-17912
> URL: https://issues.apache.org/jira/browse/HADOOP-17912
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Support for customer-provided encryption keys at the file level, superseding
> the global (account-level) key use in HADOOP-17536.
> ABFS driver will support an "EncryptionContext" plugin for retrieving 
> encryption information, the implementation for which should be provided by 
> the client. The keys/context retrieved will be sent via request headers to 
> the server, which will store the encryption context. Subsequent REST calls to 
> server that access data/user metadata of the file will require fetching the 
> encryption context through a GetFileProperties call and retrieving the key 
> from the custom provider, before sending the request.
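
For illustration only, a hypothetical sketch of what such a client-supplied encryption-context provider could look like; the interface name, method signatures and semantics below are assumptions made for this sketch, not the actual ABFS plugin API.

```java
import javax.crypto.SecretKey;

// Hypothetical sketch only; not the ABFS EncryptionContext plugin API.
public interface EncryptionContextProviderSketch {
  /** Returns an opaque encryption context to be stored by the server when the file is created. */
  byte[] createEncryptionContext(String path);

  /** Resolves the per-file key from the context previously fetched via GetFileProperties. */
  SecretKey getEncryptionKey(String path, byte[] encryptionContext);
}
```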



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5312: YARN-11375. [Federation] Support refreshAdminAcls、refreshServiceAcls API's for Federation.

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5312:
URL: https://github.com/apache/hadoop/pull/5312#issuecomment-1396832513

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 43s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   8m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   5m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 58s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  10m  3s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  cc  |  10m  3s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  10m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  cc  |   9m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   9m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  |  
hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 0 unchanged - 9 
fixed = 0 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   2m 32s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   5m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  9s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 35s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   0m 43s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 178m 56s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5312/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5312 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint 
bufcompat |
   | uname | Linux 4125bca5d9b8 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 718451e00b5342dda381d88ff6372d55be935921 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5312/2/testReport/ |
   | Max. process+thread count | 758 (vs. 

[jira] [Updated] (HADOOP-16209) Create simple docker based pseudo-cluster for hdfs

2023-01-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-16209:

Labels: pull-request-available  (was: )

> Create simple docker based pseudo-cluster for hdfs
> --
>
> Key: HADOOP-16209
> URL: https://issues.apache.org/jira/browse/HADOOP-16209
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>  Labels: pull-request-available
>
> As defined in HADOOP-16063 we can provide a simple docker-compose-based
> definition to start a local pseudo cluster.
> This could be useful for e.g. release testing, and integration testing with
> other components.
> We could easily start an HDFS cluster with this for a sanity check.
> Docker compose files can be part of the standard distribution package in the 
> future.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16209) Create simple docker based pseudo-cluster for hdfs

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678642#comment-17678642
 ] 

ASF GitHub Bot commented on HADOOP-16209:
-

hadoop-yetus commented on PR #650:
URL: https://github.com/apache/hadoop/pull/650#issuecomment-1396825079

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 19s |  |  
https://github.com/apache/hadoop/pull/650 does not apply to trunk. Rebase 
required? Wrong Branch? See 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/650 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-650/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Create simple docker based pseudo-cluster for hdfs
> --
>
> Key: HADOOP-16209
> URL: https://issues.apache.org/jira/browse/HADOOP-16209
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
>
> As defined in HADOOP-16063 we can provide a simple docker-compose-based
> definition to start a local pseudo cluster.
> This could be useful for e.g. release testing, and integration testing with
> other components.
> We could easily start an HDFS cluster with this for a sanity check.
> Docker compose files can be part of the standard distribution package in the 
> future.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #650: HADOOP-16209. Create simple docker based pseudo-cluster for hdfs

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #650:
URL: https://github.com/apache/hadoop/pull/650#issuecomment-1396825079

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 19s |  |  
https://github.com/apache/hadoop/pull/650 does not apply to trunk. Rebase 
required? Wrong Branch? See 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/650 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-650/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] krishan1390 closed pull request #5314: Yarn-11411 Concurrent user management

2023-01-19 Thread GitBox


krishan1390 closed pull request #5314: Yarn-11411 Concurrent user management
URL: https://github.com/apache/hadoop/pull/5314


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] krishan1390 closed pull request #5313: YARN-11411 Encapsulating certain User Manager APIs by making it private

2023-01-19 Thread GitBox


krishan1390 closed pull request #5313: YARN-11411 Encapsulating certain User 
Manager APIs by making it private
URL: https://github.com/apache/hadoop/pull/5313


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] krishan1390 opened a new pull request, #5318: YARN-11412 Concurrent user management

2023-01-19 Thread GitBox


krishan1390 opened a new pull request, #5318:
URL: https://github.com/apache/hadoop/pull/5318

   JIRA: [YARN-11412](https://issues.apache.org/jira/browse/YARN-11412).
   
   Create a Concurrent Users Manager to enable thread-safe concurrent resource usage tracking of users.
   
   Low level design doc - 
https://docs.google.com/document/d/1czUh2XU3_X_eRIJAsSM40hRuHYSHw4ymyfky39dIj4s/edit
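
   As a rough illustration of the idea only (not the ConcurrentUsersManager proposed in this PR), thread-safe per-user usage tracking can be built on a ConcurrentHashMap of atomic accumulators:

   ```java
   import java.util.concurrent.ConcurrentHashMap;
   import java.util.concurrent.ConcurrentMap;
   import java.util.concurrent.atomic.LongAdder;

   // Illustrative sketch only; not the implementation in this PR.
   public final class UserUsageTrackerSketch {
     private final ConcurrentMap<String, LongAdder> memoryMbByUser = new ConcurrentHashMap<>();

     // Record an allocation for one of the user's applications.
     public void containerAllocated(String user, long memoryMb) {
       memoryMbByUser.computeIfAbsent(user, u -> new LongAdder()).add(memoryMb);
     }

     // Record a release for one of the user's applications.
     public void containerReleased(String user, long memoryMb) {
       LongAdder usage = memoryMbByUser.get(user);
       if (usage != null) {
         usage.add(-memoryMb);
       }
     }

     public long memoryUsedMb(String user) {
       LongAdder usage = memoryMbByUser.get(user);
       return usage == null ? 0L : usage.sum();
     }
   }
   ```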


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] K0K0V0K opened a new pull request, #5317: YARN-11420 Stabilize TestNMClient

2023-01-19 Thread GitBox


K0K0V0K opened a new pull request, #5317:
URL: https://github.com/apache/hadoop/pull/5317

   ### Description of PR
   The TestNMClient test methods can get stuck if the test container fails while the test is expecting it to be in the RUNNING state. This can happen, for example, if the container fails due to low memory. To fix this, the test should tolerate such failures.
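   
   A rough sketch of the kind of tolerance meant here (illustrative only, not the actual patch), using the standard NMClient/ContainerStatus APIs:
   
   ```java
   import org.apache.hadoop.yarn.api.records.Container;
   import org.apache.hadoop.yarn.api.records.ContainerState;
   import org.apache.hadoop.yarn.api.records.ContainerStatus;
   import org.apache.hadoop.yarn.client.api.NMClient;

   // Illustrative helper only; not the actual TestNMClient change.
   final class ContainerWaitSketch {
     static void waitForRunningOrCompleted(NMClient nmClient, Container container)
         throws Exception {
       for (int i = 0; i < 100; i++) {
         ContainerStatus status =
             nmClient.getContainerStatus(container.getId(), container.getNodeId());
         if (status.getState() == ContainerState.RUNNING
             || status.getState() == ContainerState.COMPLETE) {
           return; // tolerate an early failure instead of hanging the test
         }
         Thread.sleep(100);
       }
       throw new AssertionError("container neither ran nor completed in time");
     }
   }
   ```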
   
   ### How was this patch tested?
   I ran the test ~400 times in a row with zero failures.
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] K0K0V0K closed pull request #5286: YARN-11410. Add default methods for StateMachine

2023-01-19 Thread GitBox


K0K0V0K closed pull request #5286: YARN-11410. Add default methods for 
StateMachine
URL: https://github.com/apache/hadoop/pull/5286


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] K0K0V0K closed pull request #5316: Yarn 11420

2023-01-19 Thread GitBox


K0K0V0K closed pull request #5316: Yarn 11420
URL: https://github.com/apache/hadoop/pull/5316


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] K0K0V0K opened a new pull request, #5316: Yarn 11420

2023-01-19 Thread GitBox


K0K0V0K opened a new pull request, #5316:
URL: https://github.com/apache/hadoop/pull/5316

   ### Description of PR
   
   The TestNMClient test methods can get stuck if the test container fails while the test is expecting it to be in the RUNNING state. This can happen, for example, if the container fails due to low memory. To fix this, the test should tolerate such failures.
   
   ### How was this patch tested?
   
   I ran the test ~400 times in a row with zero failures.
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5311: MAPREDUCE-7431. ShuffleHandler refactor and fix after Netty4 upgrade.

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5311:
URL: https://github.com/apache/hadoop/pull/5311#issuecomment-1396671969

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  45m  0s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5311/3/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5311/3/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle.txt)
 |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle:
 The patch generated 1 new + 5 unchanged - 32 fixed = 6 total (was 37)  |
   | +1 :green_heart: |  mvnsite  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  
hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04
 with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 0 new + 0 
unchanged - 3 fixed = 0 total (was 3)  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  
hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-shuffle-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08
 with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 0 new + 0 
unchanged - 3 fixed = 0 total (was 3)  |
   | +1 :green_heart: |  spotbugs  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 37s |  |  hadoop-mapreduce-client-shuffle 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 110m 42s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5311/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5311 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 5c70fe9bae8f 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6c572c8c9c7f67117069c7c67228a608dd24021a |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 

[GitHub] [hadoop] susheel-gupta commented on a diff in pull request #5295: YARN-11404. Add junit5 dependency to hadoop-mapreduce-client-app to fix few unit test failure

2023-01-19 Thread GitBox


susheel-gupta commented on code in PR #5295:
URL: https://github.com/apache/hadoop/pull/5295#discussion_r1080982699


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java:
##
@@ -257,12 +260,6 @@ public void testGetMapCompletionEvents() throws 
IOException {
 createTce(3, false, TaskAttemptCompletionEventStatus.FAILED) };
 TaskAttemptCompletionEvent[] mapEvents = { taskEvents[0], taskEvents[2] };
 Job mockJob = mock(Job.class);
-when(mockJob.getTaskAttemptCompletionEvents(0, 100))
-  .thenReturn(taskEvents);
-when(mockJob.getTaskAttemptCompletionEvents(0, 2))
-  .thenReturn(Arrays.copyOfRange(taskEvents, 0, 2));
-when(mockJob.getTaskAttemptCompletionEvents(2, 100))
-  .thenReturn(Arrays.copyOfRange(taskEvents, 2, 4));

Review Comment:
   To bypass strict stubs we would use lenient stubbing: with the new upgrade, Mockito introduces features that nudge the framework towards “strictness”.
   I don't see any advantage here in this scenario, other than keeping the code as it is.
   The disadvantages are test code duplication and unnecessary test code being kept around if we use the 'lenient' keyword.
   Yes, the test works fine without it.
   So I'm removing these lines.
   ```
   when(mockJob.getTaskAttemptCompletionEvents(0, 100))
 .thenReturn(taskEvents);
   when(mockJob.getTaskAttemptCompletionEvents(0, 2))
 .thenReturn(Arrays.copyOfRange(taskEvents, 0, 2));
   when(mockJob.getTaskAttemptCompletionEvents(2, 100))
 .thenReturn(Arrays.copyOfRange(taskEvents, 2, 4));
   ```
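   
   For comparison, a minimal sketch of what the lenient alternative would look like (assuming Mockito's strict-stubs setting is in effect); since the stubbings are unused, deleting them as above is the cleaner option:
   
   ```java
   import static org.mockito.Mockito.lenient;

   // Marking the stubbing lenient would silence UnnecessaryStubbingException
   // under strict stubs, at the cost of keeping dead test code around.
   lenient().when(mockJob.getTaskAttemptCompletionEvents(0, 100))
       .thenReturn(taskEvents);
   ```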



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17717) Update wildfly openssl to 1.1.3.Final

2023-01-19 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17678569#comment-17678569
 ] 

ASF GitHub Bot commented on HADOOP-17717:
-

hadoop-yetus commented on PR #5310:
URL: https://github.com/apache/hadoop/pull/5310#issuecomment-1396645743

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 11s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |  18m 55s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 22s | 
[/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5310/1/artifact/out/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   7m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  31m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  27m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  18m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | -1 :x: |  javadoc  |   1m 11s | 
[/patch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5310/1/artifact/out/patch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   7m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  32m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 851m 40s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5310/1/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1120m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.TestLeaseRecovery2 |
   |   | hadoop.mapreduce.v2.app.TestRuntimeEstimators |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5310/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5310 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
   | uname | Linux d85e870f0660 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   

[GitHub] [hadoop] hadoop-yetus commented on pull request #5310: HADOOP-17717. Update wildfly openssl to 1.1.3.Final

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5310:
URL: https://github.com/apache/hadoop/pull/5310#issuecomment-1396645743

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 11s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 17s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  mvnsite  |  18m 55s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   1m 22s | 
[/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5310/1/artifact/out/branch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   7m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  31m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 31s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  27m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 37s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 37s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |  18m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | -1 :x: |  javadoc  |   1m 11s | 
[/patch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5310/1/artifact/out/patch-javadoc-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  root in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   7m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  shadedclient  |  32m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 851m 40s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5310/1/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 1120m  8s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.TestLeaseRecovery2 |
   |   | hadoop.mapreduce.v2.app.TestRuntimeEstimators |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5310/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5310 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs |
   | uname | Linux d85e870f0660 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2aa503aa8ea2705d458e52452ae94321034c8852 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 

[GitHub] [hadoop] szilard-nemeth commented on a diff in pull request #5295: YARN-11404. Add junit5 dependency to hadoop-mapreduce-client-app to fix few unit test failure

2023-01-19 Thread GitBox


szilard-nemeth commented on code in PR #5295:
URL: https://github.com/apache/hadoop/pull/5295#discussion_r1080950958


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/src/test/java/org/apache/hadoop/mapred/TestTaskAttemptListenerImpl.java:
##
@@ -257,12 +260,6 @@ public void testGetMapCompletionEvents() throws 
IOException {
 createTce(3, false, TaskAttemptCompletionEventStatus.FAILED) };
 TaskAttemptCompletionEvent[] mapEvents = { taskEvents[0], taskEvents[2] };
 Job mockJob = mock(Job.class);
-when(mockJob.getTaskAttemptCompletionEvents(0, 100))
-  .thenReturn(taskEvents);
-when(mockJob.getTaskAttemptCompletionEvents(0, 2))
-  .thenReturn(Arrays.copyOfRange(taskEvents, 0, 2));
-when(mockJob.getTaskAttemptCompletionEvents(2, 100))
-  .thenReturn(Arrays.copyOfRange(taskEvents, 2, 4));

Review Comment:
   Hi @susheel-gupta ,
   What is lenient strictness? 
   Can you explain what's the advantage / disadvantage of it? 
   If the stub call is really unnecessary and the test works fine without it, 
you can remove it. 
   With my original comment I just wanted to make sure the patch stays focused on the JUnit upgrade, but in this case we can make an exception if it's necessary.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5314: Yarn-11411 Concurrent user management

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5314:
URL: https://github.com/apache/hadoop/pull/5314#issuecomment-1396610842

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 10 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  3s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 58s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5314/1/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  24m 38s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 45s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5314/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 123 new + 602 unchanged - 48 fixed = 725 total (was 650)  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 38s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5314/1/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | -1 :x: |  javadoc  |   0m 36s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5314/1/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08.  |
   | +1 :green_heart: |  spotbugs  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m  4s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 103m 20s | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5313: YARN-11411 Encapsulating certain User Manager APIs by making it private

2023-01-19 Thread GitBox


hadoop-yetus commented on PR #5313:
URL: https://github.com/apache/hadoop/pull/5313#issuecomment-1396596316

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 5 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  5s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 57s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5313/1/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 42s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5313/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 47 new + 303 unchanged - 47 fixed = 350 total (was 350)  |
   | +1 :green_heart: |  mvnsite  |   0m 53s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 40s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5313/1/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 101m 20s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5313/1/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +0 :ok: |  asflicense  |   0m 25s |  |  ASF License check generated no 
output?  |
   |  |   | 210m 40s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.monitor.capacity.TestProportionalCapacityPreemptionPolicyIntraQueueFairOrdering
 |
   |   |