[GitHub] [hadoop] hadoop-yetus commented on pull request #5700: HDFS-17030. Limit wait time for getHAServiceState in ObserverReaderProxy

2023-05-31 Thread via GitHub


hadoop-yetus commented on PR #5700:
URL: https://github.com/apache/hadoop/pull/5700#issuecomment-1571402556

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 42s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   5m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 57s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   5m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   5m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  9s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5700/5/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 1 new + 1 unchanged - 0 fixed = 2 
total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 18s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 226m 49s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5700/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 371m 59s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5700/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5700 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 3af1994e7412 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 15a350d555a287f8a70a3438d73800a47efcf92e |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 

[jira] [Commented] (HADOOP-18740) s3a prefetch cache blocks should be accessed by RW locks

2023-05-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17728215#comment-17728215
 ] 

ASF GitHub Bot commented on HADOOP-18740:
-

hadoop-yetus commented on PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#issuecomment-1571346917

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  15m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  14m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  14m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |  14m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 32s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 174m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5675/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5675 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9432f0056538 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 710441f19ca08701a4e0f5c32fa77f3ff7ce9736 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5675/2/testReport/ |
   | Max. process+thread count | 1783 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5675/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5675: HADOOP-18740. S3A prefetch cache blocks should be accessed by RW locks

2023-05-31 Thread via GitHub


hadoop-yetus commented on PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#issuecomment-1571346917

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  15m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  14m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  14m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  14m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |  14m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 32s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  2s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 174m 48s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5675/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5675 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9432f0056538 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 710441f19ca08701a4e0f5c32fa77f3ff7ce9736 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5675/2/testReport/ |
   | Max. process+thread count | 1783 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5675/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5636: YARN-11492. Improve createJerseyClient#setConnectTimeout Code.

2023-05-31 Thread via GitHub


ayushtkn commented on code in PR #5636:
URL: https://github.com/apache/hadoop/pull/5636#discussion_r1212556425


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/test/java/org/apache/hadoop/yarn/server/router/webapp/TestRouterWebServiceUtil.java:
##
@@ -678,4 +683,83 @@ public void testMergeDiffApplicationStatisticsInfo() {
     Assert.assertEquals(YarnApplicationState.FINISHED, item3Result.getState());
     Assert.assertEquals(item4.getCount(), item3Result.getCount());
   }
+
+  @Test
+  public void testCreateJerseyClient() {
+    // Case1, default timeout. The default timeout is 30s.
+    YarnConfiguration configuration = new YarnConfiguration();
+    Client client01 = RouterWebServiceUtil.createJerseyClient(configuration);
+    Map properties = client01.getProperties();
+    int readTimeOut = (int) properties.get(ClientConfig.PROPERTY_READ_TIMEOUT);
+    int connectTimeOut = (int) properties.get(ClientConfig.PROPERTY_CONNECT_TIMEOUT);
+    Assert.assertEquals(3, readTimeOut);
+    Assert.assertEquals(3, connectTimeOut);
+    client01.destroy();
+
+    // Case2, set a negative timeout. We'll get the default timeout (30s).
+    YarnConfiguration configuration2 = new YarnConfiguration();
+    configuration2.setLong(YarnConfiguration.ROUTER_WEBAPP_CONNECT_TIMEOUT, -1L);
+    configuration2.setLong(YarnConfiguration.ROUTER_WEBAPP_READ_TIMEOUT, -1L);
+    Client client02 = RouterWebServiceUtil.createJerseyClient(configuration2);
+    Map properties02 = client02.getProperties();
+    int readTimeOut02 = (int) properties02.get(ClientConfig.PROPERTY_READ_TIMEOUT);
+    int connectTimeOut02 = (int) properties02.get(ClientConfig.PROPERTY_CONNECT_TIMEOUT);
+    Assert.assertEquals(3, readTimeOut02);
+    Assert.assertEquals(3, connectTimeOut02);
+    client02.destroy();
+
+    // Case3, set a value that exceeds Integer.MAX_VALUE.
+    // We'll get the default timeout (30s).
+    YarnConfiguration configuration3 = new YarnConfiguration();
+    long connectTimeOutLong = (long) Integer.MAX_VALUE + 1;
+    long readTimeOutLong = (long) Integer.MAX_VALUE + 1;

Review Comment:
   I think the correct approach would be to first get the long value, figure out 
whether it is negative or bigger than Integer.MAX_VALUE, and if so log it and use 
the default.
   If it is within limits, then maybe we can do the cast.
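
   A minimal sketch of that suggestion (the helper class, method name, and log 
message below are illustrative, not part of the actual patch):
   
   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   /** Illustrative helper: validate a configured long before narrowing to int. */
   final class TimeoutUtil {
     private static final Logger LOG = LoggerFactory.getLogger(TimeoutUtil.class);
   
     static int toIntTimeout(long configuredMs, int defaultMs) {
       // Reject negatives and values that would overflow an int, as suggested above.
       if (configuredMs < 0 || configuredMs > Integer.MAX_VALUE) {
         LOG.warn("Invalid timeout {} ms, using default {} ms instead.",
             configuredMs, defaultMs);
         return defaultMs;
       }
       // Safe to cast: the value is within [0, Integer.MAX_VALUE].
       return (int) configuredMs;
     }
   }
   ```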



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18740) s3a prefetch cache blocks should be accessed by RW locks

2023-05-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17728210#comment-17728210
 ] 

ASF GitHub Bot commented on HADOOP-18740:
-

virajjasani commented on PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#issuecomment-1571320810

   `us-west-2`:
   
   ```
   mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale
   
   mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch
   ```
   
   Test results are clean




> s3a prefetch cache blocks should be accessed by RW locks
> 
>
> Key: HADOOP-18740
> URL: https://issues.apache.org/jira/browse/HADOOP-18740
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> In order to implement LRU or LFU based cache removal policies for s3a 
> prefetched cache blocks, it is important for all cache reader threads to 
> acquire read lock and similarly cache file removal mechanism (fs close or 
> cache eviction) to acquire write lock before accessing the files.
> As we maintain the block entries in an in-memory map, we should be able to 
> introduce read-write lock per cache file entry, we don't need coarse-grained 
> lock shared by all entries.
>  
> This is a prerequisite to HADOOP-18291.
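
A minimal sketch of the per-entry locking described above (class, fields, and
method names are illustrative, not the actual HADOOP-18740 patch):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Illustrative only: one read-write lock per cached block entry. */
final class BlockEntry {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  void read(Runnable readBlockFile) {
    lock.readLock().lock();      // many reader threads may hold this concurrently
    try {
      readBlockFile.run();
    } finally {
      lock.readLock().unlock();
    }
  }

  void evict(Runnable deleteBlockFile) {
    lock.writeLock().lock();     // close/eviction waits only for readers of this entry
    try {
      deleteBlockFile.run();
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```

Because the entries already live in an in-memory map, the lock can be a field of
each entry, so contention stays per block file rather than across the whole cache.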



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on pull request #5675: HADOOP-18740. S3A prefetch cache blocks should be accessed by RW locks

2023-05-31 Thread via GitHub


virajjasani commented on PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#issuecomment-1571320810

   `us-west-2`:
   
   ```
   mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale
   
   mvn clean verify -Dparallel-tests -DtestsThreadCount=8 -Dscale -Dprefetch
   ```
   
   Test results are clean


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn merged pull request #5701: HDFS-17031. RBF: Reduce repeated code in RouterRpcServer.

2023-05-31 Thread via GitHub


ayushtkn merged PR #5701:
URL: https://github.com/apache/hadoop/pull/5701


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5671: HDFS-17019. Optimize the logic for reconfigure slow peer enable for Namenode"

2023-05-31 Thread via GitHub


ayushtkn commented on code in PR #5671:
URL: https://github.com/apache/hadoop/pull/5671#discussion_r1212512606


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java:
##
@@ -2270,4 +2277,9 @@ public void setMaxSlowPeersToReport(int 
maxSlowPeersToReport) {
 Preconditions.checkNotNull(slowPeerTracker, "slowPeerTracker should not be 
un-assigned");
 slowPeerTracker.setMaxSlowPeersToReport(maxSlowPeersToReport);
   }
+
+  @VisibleForTesting
+  public boolean isSlowPeerCollectorDaemonNull() {

Review Comment:
   can you change to ``isSlowPeerCollectorInitialized``?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn merged pull request #5697: HDFS-16996. Fix flaky testFsCloseAfterClusterShutdown in TestFileCrea…

2023-05-31 Thread via GitHub


ayushtkn merged PR #5697:
URL: https://github.com/apache/hadoop/pull/5697


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18740) s3a prefetch cache blocks should be accessed by RW locks

2023-05-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17728181#comment-17728181
 ] 

ASF GitHub Bot commented on HADOOP-18740:
-

virajjasani commented on code in PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#discussion_r1212472590


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##
@@ -268,12 +310,15 @@ public void close() throws IOException {
 int numFilesDeleted = 0;
 
 for (Entry entry : blocks.values()) {
+  entry.takeLock(Entry.LockType.WRITE);

Review Comment:
   > also, L303: should closed be atomic?
   
   +1 to this suggestion, let me create a separate patch with HADOOP-18756 to 
better track it.
   
   > good: no race condition in close
   > bad) the usual
   
   sounds reasonable, let me try setting timeout





> s3a prefetch cache blocks should be accessed by RW locks
> 
>
> Key: HADOOP-18740
> URL: https://issues.apache.org/jira/browse/HADOOP-18740
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> In order to implement LRU or LFU based cache removal policies for s3a 
> prefetched cache blocks, it is important for all cache reader threads to 
> acquire read lock and similarly cache file removal mechanism (fs close or 
> cache eviction) to acquire write lock before accessing the files.
> As we maintain the block entries in an in-memory map, we should be able to 
> introduce read-write lock per cache file entry, we don't need coarse-grained 
> lock shared by all entries.
>  
> This is a prerequisite to HADOOP-18291.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on a diff in pull request #5675: HADOOP-18740. S3A prefetch cache blocks should be accessed by RW locks

2023-05-31 Thread via GitHub


virajjasani commented on code in PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#discussion_r1212472590


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##
@@ -268,12 +310,15 @@ public void close() throws IOException {
 int numFilesDeleted = 0;
 
 for (Entry entry : blocks.values()) {
+  entry.takeLock(Entry.LockType.WRITE);

Review Comment:
   > also, L303: should closed be atomic?
   
   +1 to this suggestion, let me create a separate patch with HADOOP-18756 to 
better track it.
   
   > good: no race condition in close
   > bad) the usual
   
   sounds reasonable, let me try setting timeout



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18756) CachingBlockManager to use AtomicBoolean for closed flag

2023-05-31 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HADOOP-18756:
-

Assignee: Viraj Jasani

> CachingBlockManager to use AtomicBoolean for closed flag
> 
>
> Key: HADOOP-18756
> URL: https://issues.apache.org/jira/browse/HADOOP-18756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.9
>Reporter: Steve Loughran
>Assignee: Viraj Jasani
>Priority: Major
>
> The {{CachingBlockManager}} uses the boolean field {{closed}} in various 
> operations, including a do/while loop. To ensure the flag is correctly 
> updated across threads, it needs to move to an atomic boolean.
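
A minimal sketch of the change being described (illustrative only, not the
actual CachingBlockManager code):

```java
import java.util.concurrent.atomic.AtomicBoolean;

/** Illustrative only: a close flag that other threads observe reliably. */
final class PrefetchWorker {
  private final AtomicBoolean closed = new AtomicBoolean(false);

  void run() {
    do {
      // ... fetch and cache the next block ...
    } while (!closed.get());   // always sees the latest value written by close()
  }

  void close() {
    // compareAndSet also makes close() idempotent across threads.
    closed.compareAndSet(false, true);
  }
}
```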



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xinglin commented on a diff in pull request #5700: HDFS-17030. Limit wait time for getHAServiceState in ObserverReaderProxy

2023-05-31 Thread via GitHub


xinglin commented on code in PR #5700:
URL: https://github.com/apache/hadoop/pull/5700#discussion_r1212424086


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestObserverReadProxyProvider.java:
##
@@ -325,6 +357,94 @@ public void testObserverRetriableException() throws 
Exception {
 assertHandledBy(1);
   }
 
+  /**
+   * Happy case for GetHAServiceStateWithTimeout.
+   */
+  @Test
+  public void testGetHAServiceStateWithTimeout() throws Exception {
+setupProxyProvider(1);
+final HAServiceState state = HAServiceState.STANDBY;
+NNProxyInfo dummyNNProxyInfo =
+(NNProxyInfo) mock(NNProxyInfo.class);
+Future task = mock(Future.class);
+when(task.get(anyLong(), any(TimeUnit.class))).thenReturn(state);
+
+HAServiceState state2 =
+proxyProvider.getHAServiceStateWithTimeout(dummyNNProxyInfo, task);
+assertEquals(state, state2);
+verify(task).get(anyLong(), any(TimeUnit.class));
+verifyNoMoreInteractions(task);
+verify(logger).debug(startsWith("HA State for"));
+  }
+
+  /**
+   * Test TimeoutException for GetHAServiceStateWithTimeout.
+   */
+  @Test
+  public void testTimeoutExceptionGetHAServiceStateWithTimeout()
+  throws Exception {
+setupProxyProvider(1);
+NNProxyInfo dummyNNProxyInfo =
+(NNProxyInfo) Mockito.mock(NNProxyInfo.class);
+Future task = mock(Future.class);
+when(task.get(anyLong(), any(TimeUnit.class))).thenThrow(
+new TimeoutException("Timeout"));
+
+HAServiceState state =
+proxyProvider.getHAServiceStateWithTimeout(dummyNNProxyInfo, task);
+assertNull(state);
+verify(task).get(anyLong(), any(TimeUnit.class));
+verify(task).cancel(true);
+verifyNoMoreInteractions(task);
+verify(logger).debug(startsWith("Cancel NN probe task due to timeout 
for"));
+  }
+
+  /**
+   * Test InterruptedException for GetHAServiceStateWithTimeout.
+   * Tests for the other two exceptions are the same and thus left out.
+   */
+  @Test
+  public void testInterruptedExceptionGetHAServiceStateWithTimeout()
+  throws Exception {
+setupProxyProvider(1);
+NNProxyInfo dummyNNProxyInfo =
+(NNProxyInfo) Mockito.mock(NNProxyInfo.class);
+Future task = mock(Future.class);
+when(task.get(anyLong(), any(TimeUnit.class))).thenThrow(
+new InterruptedException("Interrupted"));
+
+HAServiceState state =
+proxyProvider.getHAServiceStateWithTimeout(dummyNNProxyInfo, task);
+assertNull(state);
+verify(task).get(anyLong(), any(TimeUnit.class));
+verifyNoMoreInteractions(task);
+verify(logger).debug(
+startsWith("Interrupted exception in NN probe task for"));
+  }
+
+  /**
+   * Test InterruptedException for GetHAServiceStateWithTimeout.
+   * Tests for the other two exceptions are the same and thus left out.

Review Comment:
   outdated comments. removed.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xinglin commented on a diff in pull request #5700: HDFS-17030. Limit wait time for getHAServiceState in ObserverReaderProxy

2023-05-31 Thread via GitHub


xinglin commented on code in PR #5700:
URL: https://github.com/apache/hadoop/pull/5700#discussion_r1212423874


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java:
##
@@ -284,13 +319,68 @@ private synchronized NNProxyInfo 
changeProxy(NNProxyInfo initial) {
 }
 currentIndex = (currentIndex + 1) % nameNodeProxies.size();
 currentProxy = createProxyIfNeeded(nameNodeProxies.get(currentIndex));
-currentProxy.setCachedState(getHAServiceState(currentProxy));
+currentProxy.setCachedState(getHAServiceStateWithTimeout(currentProxy));
 LOG.debug("Changed current proxy from {} to {}",
 initial == null ? "none" : initial.proxyInfo,
 currentProxy.proxyInfo);
 return currentProxy;
   }
 
+  /**
+   * Execute getHAServiceState() call with a timeout, to avoid a long wait when
+   * an NN becomes unresponsive to rpc requests
+   * (when a thread/heap dump is being taken, e.g.).
+   *
+   * For each getHAServiceState() call, a task is created and submitted to a
+   * threadpool for execution. We will wait for a response up to
+   * namenodeHAStateProbeTimeoutSec and cancel these requests if they time out.
+   *
+   * The implementation is split into two functions so that we can unit test
+   * the second function.
+   */
+  HAServiceState getHAServiceStateWithTimeout(final NNProxyInfo proxyInfo) {
+
+    Callable<HAServiceState> getHAServiceStateTask =
+        new Callable<HAServiceState>() {
+          @Override
+          public HAServiceState call() {
+            return getHAServiceState(proxyInfo);
+          }
+        };

Review Comment:
   thanks for providing the code. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xinglin commented on a diff in pull request #5700: HDFS-17030. Limit wait time for getHAServiceState in ObserverReaderProxy

2023-05-31 Thread via GitHub


xinglin commented on code in PR #5700:
URL: https://github.com/apache/hadoop/pull/5700#discussion_r1212423619


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java:
##
@@ -213,6 +241,13 @@ public ObserverReadProxyProvider(
 observerProbeRetryPeriodMs = conf.getTimeDuration(
 OBSERVER_PROBE_RETRY_PERIOD_KEY,
 OBSERVER_PROBE_RETRY_PERIOD_DEFAULT, TimeUnit.MILLISECONDS);
+namenodeHAStateProbeTimeoutSec = conf.getTimeDuration(
+NAMENODE_HA_STATE_PROBE_TIMEOUT,
+NAMENODE_HA_STATE_PROBE_TIMEOUT_DEFAULT, TimeUnit.SECONDS);
+// Disallow negative values for namenodeHAStateProbeTimeoutSec
+if (namenodeHAStateProbeTimeoutSec < 0) {
+  namenodeHAStateProbeTimeoutSec = NAMENODE_HA_STATE_PROBE_TIMEOUT_DEFAULT;
+}

Review Comment:
   It is unfortunate that Future in Java uses two separate APIs when waiting for 
the result: get(timeout) and get(), which waits without a timeout.
   
   Changed the code to use an if case, so that a namenodeHAStateProbeTimeoutSec 
less than or equal to 0 means no timeout.
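
   A minimal sketch of that if case, assuming a positive value means "wait at 
most that long" and zero or a negative value means "wait indefinitely" (class 
and method names are illustrative):
   
   ```java
   import java.util.concurrent.ExecutionException;
   import java.util.concurrent.Future;
   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.TimeoutException;
   
   final class ProbeHelper {
     /** Illustrative only: fall back to the untimed get() when the timeout is <= 0. */
     static <T> T waitForProbe(Future<T> task, long timeoutSec)
         throws InterruptedException, ExecutionException, TimeoutException {
       if (timeoutSec > 0) {
         // Bounded wait; may throw TimeoutException, letting the caller cancel the task.
         return task.get(timeoutSec, TimeUnit.SECONDS);
       }
       // No timeout configured: wait for the probe to finish.
       return task.get();
     }
   }
   ```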



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xinglin commented on a diff in pull request #5700: HDFS-17030. Limit wait time for getHAServiceState in ObserverReaderProxy

2023-05-31 Thread via GitHub


xinglin commented on code in PR #5700:
URL: https://github.com/apache/hadoop/pull/5700#discussion_r1212422395


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java:
##
@@ -88,6 +96,16 @@
   /** Observer probe retry period default to 10 min. */
   static final long OBSERVER_PROBE_RETRY_PERIOD_DEFAULT = 60 * 10 * 1000;
 
+  /** Timeout to cancel the ha-state probe rpc request for an namenode. */
+  static final String NAMENODE_HA_STATE_PROBE_TIMEOUT =
+  "dfs.client.namenode.ha-state.probe.timeout";
+  /**
+   * Namenode ha-state probe timeout default to 25 sec.
+   * ipc.client.connect.timeout defaults to be 20 seconds. So, in 25 seconds,
+   * we can try twice to connect to an NN.
+   */
+  static final long NAMENODE_HA_STATE_PROBE_TIMEOUT_DEFAULT = 25;

Review Comment:
   Not sure how to address this. I don't think that is required either. In 
many places, we use an integer/long value together with TimeUnit to refer to a 
duration. It is the same in this case. autoMsyncPeriodMs in this class is also 
a long.
   
 `private final long autoMsyncPeriodMs;`
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xinglin commented on a diff in pull request #5700: HDFS-17030. Limit wait time for getHAServiceState in ObserverReaderProxy

2023-05-31 Thread via GitHub


xinglin commented on code in PR #5700:
URL: https://github.com/apache/hadoop/pull/5700#discussion_r1212421364


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java:
##
@@ -155,12 +173,22 @@
*/
   private long observerProbeRetryPeriodMs;
 
+  /**
+   * Timeout in seconds when we try to get the HA state of an namenode.
+   */
+  @VisibleForTesting
+  private long namenodeHAStateProbeTimeoutSec;
+
   /**
* The previous time where zero observer were found. If there was observer,
* or it is initialization, this is set to 0.
*/
   private long lastObserverProbeTime;
 
+  private final ExecutorService nnProbingThreadPool =
+  new ThreadPoolExecutor(1, 4, 6L, TimeUnit.MILLISECONDS,

Review Comment:
   replaced 6L/MS with 1L/S. Hopefully, it is more intuitive. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3618: YARN-11000. Replace queue resource calculation logic in updateClusterResource

2023-05-31 Thread via GitHub


hadoop-yetus commented on PR #3618:
URL: https://github.com/apache/hadoop/pull/3618#issuecomment-1571060839

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  jsonlint  |   0m  0s |  |  jsonlint was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 72 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 46s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   2m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m 38s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 51s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3618/18/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 125 new + 1158 unchanged - 6 fixed = 1283 total (was 1164) 
 |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 602m 57s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3618/18/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 707m 38s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServicesCapacitySchedDynamicConfig
 |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestApplicationLimitsByPartition
 |
   |   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestLeafQueue |
   |   | hadoop.yarn.server.resourcemanager.TestKillApplicationWithRMHA |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3618/18/artifact/out/Dockerfile
 |
   | GITHUB PR | 

[jira] [Commented] (HADOOP-18752) Change fs.s3a.directory.marker.retention to "keep"

2023-05-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17728122#comment-17728122
 ] 

ASF GitHub Bot commented on HADOOP-18752:
-

steveloughran commented on PR #5689:
URL: https://github.com/apache/hadoop/pull/5689#issuecomment-1570827515

   noted. well, let's target 3.4 at the very least and tag as incompatible




> Change fs.s3a.directory.marker.retention to "keep"
> --
>
> Key: HADOOP-18752
> URL: https://issues.apache.org/jira/browse/HADOOP-18752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> Change the default value of "fs.s3a.directory.marker.retention" to keep; 
> update docs to match.
> maybe include with HADOOP-17802 so we don't blow up with fewer markers being 
> created.
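
For reference, the option can also be set explicitly today; a minimal sketch,
assuming only the property name and the "keep" value described above:

```java
import org.apache.hadoop.conf.Configuration;

final class KeepMarkersExample {
  static Configuration withKeptMarkers() {
    Configuration conf = new Configuration();
    // Keep directory markers instead of deleting them as files are written.
    conf.set("fs.s3a.directory.marker.retention", "keep");
    return conf;
  }
}
```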



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5689: HADOOP-18752. Change fs.s3a.directory.marker.retention to "keep"

2023-05-31 Thread via GitHub


steveloughran commented on PR #5689:
URL: https://github.com/apache/hadoop/pull/5689#issuecomment-1570827515

   noted. well, let's target 3.4 at the very least and tag as incompatible


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18707) Cannot write to Azure Datalake Gen2 (abfs/abfss) after Spark 3.1.2

2023-05-31 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17728118#comment-17728118
 ] 

Steve Loughran commented on HADOOP-18707:
-

Good to hear this is working.
# Could you also try setting hadoop.tmp.dir to something else, just to see if 
that makes it go away too?
# storediag should actually attempt to create a file in the temp dir. Did that 
work?
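
A hedged example of the first suggestion, overriding hadoop.tmp.dir before
running the job (the path is illustrative):

```java
import org.apache.hadoop.conf.Configuration;

final class TmpDirOverrideExample {
  static Configuration withAlternateTmpDir() {
    Configuration conf = new Configuration();
    // Point the local temp/buffer directory at a location known to be writable.
    conf.set("hadoop.tmp.dir", "/mnt/scratch/hadoop-tmp");
    return conf;
  }
}
```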

> Cannot write to Azure Datalake Gen2 (abfs/abfss) after Spark 3.1.2
> --
>
> Key: HADOOP-18707
> URL: https://issues.apache.org/jira/browse/HADOOP-18707
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.2, 3.3.5, 3.3.4
>Reporter: Nicolas PHUNG
>Priority: Major
> Fix For: 3.3.4
>
>
> Hello,
> I have an issue with Spark 3.3.2 & Spark 3.4.0 to write into Azure Data Lake 
> Storage Gen2 (abfs/abfss scheme). I've got the following errors:
> {code:java}
> warn 13:12:47.554: StdErr from Kernel Process 23/04/19 13:12:47 ERROR 
> FileFormatWriter: Aborting job 
> 6a75949c-1473-4445-b8ab-d125be3f0f21.org.apache.spark.SparkException: Job 
> aborted due to stage failure: Task 1 in stage 0.0 failed 1 times, most recent 
> failure: Lost task 1.0 in stage 0.0 (TID 1) (myhost executor driver): 
> org.apache.hadoop.util.DiskChecker$DiskErrorException: Could not find any 
> valid local directory for datablock-0001-    at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:462)
>     at 
> org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:165)
>     at 
> org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:146)
>     at 
> org.apache.hadoop.fs.store.DataBlocks$DiskBlockFactory.createTmpFileForWrite(DataBlocks.java:980)
>     at 
> org.apache.hadoop.fs.store.DataBlocks$DiskBlockFactory.create(DataBlocks.java:960)
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.createBlockIfNeeded(AbfsOutputStream.java:262)
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsOutputStream.<init>(AbfsOutputStream.java:173)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.createFile(AzureBlobFileSystemStore.java:580)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.create(AzureBlobFileSystem.java:301)
>     at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1195)    at 
> org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1175)    at 
> org.apache.parquet.hadoop.util.HadoopOutputFile.create(HadoopOutputFile.java:74)
>     at 
> org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:347)
>     at 
> org.apache.parquet.hadoop.ParquetFileWriter.<init>(ParquetFileWriter.java:314)
>     at 
> org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:480)
>     at 
> org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:420)
>     at 
> org.apache.parquet.hadoop.ParquetOutputFormat.getRecordWriter(ParquetOutputFormat.java:409)
>     at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.<init>(ParquetOutputWriter.scala:36)
>     at 
> org.apache.spark.sql.execution.datasources.parquet.ParquetUtils$$anon$1.newInstance(ParquetUtils.scala:490)
>     at 
> org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.newOutputWriter(FileFormatDataWriter.scala:161)
>     at 
> org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.<init>(FileFormatDataWriter.scala:146)
>     at 
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:389)
>     at 
> org.apache.spark.sql.execution.datasources.WriteFilesExec.$anonfun$doExecuteWrite$1(WriteFiles.scala:100)
>     at 
> org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:888)    
> at 
> org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:888)
>     at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)    
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)    at 
> org.apache.spark.rdd.RDD.iterator(RDD.scala:328)    at 
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)    at 
> org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)    
> at org.apache.spark.scheduler.Task.run(Task.scala:139)    at 
> org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
>     at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)    
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)    
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>    

[jira] [Commented] (HADOOP-18754) Make DistCp Split files hidden to prevent interfering with query engines

2023-05-31 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17728117#comment-17728117
 ] 

Steve Loughran commented on HADOOP-18754:
-

can you tag with affected and targeted fix versions. 
be aware we are very scared of incompatible changes to distcp, so good tests 
matter.

> Make DistCp Split files hidden to prevent interfering with query engines 
> -
>
> Key: HADOOP-18754
> URL: https://issues.apache.org/jira/browse/HADOOP-18754
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp, util
>Reporter: Chayanika Bhandary
>Priority: Major
>
> As of today, the distcp split files are not hidden, and hence could interfere 
> with query engines. Making them hidden. 
>  
> Contributing a fix to it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18754) Make DistCp Split files hidden to prevent interfering with query engines

2023-05-31 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18754:

Component/s: tools/distcp

> Make DistCp Split files hidden to prevent interfering with query engines 
> -
>
> Key: HADOOP-18754
> URL: https://issues.apache.org/jira/browse/HADOOP-18754
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp, util
>Reporter: Chayanika Bhandary
>Priority: Major
>
> As of today, the distcp split files are not hidden, and hence could interfere 
> with query engines. Making them hidden. 
>  
> Contributing a fix to it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation

2023-05-31 Thread via GitHub


hadoop-yetus commented on PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#issuecomment-1570773154

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 46s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 221m 15s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 334m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5561/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5561 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 52d023eabc87 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / dca2148c3e8deacd610962a90525147bfa112ae4 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5561/10/testReport/ |
   | Max. process+thread count | 2148 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5561/10/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about 

[GitHub] [hadoop] goiri commented on a diff in pull request #5705: YARN-11502. Refactor AMRMProxy#FederationInterceptor#registerApplicationMaster.

2023-05-31 Thread via GitHub


goiri commented on code in PR #5705:
URL: https://github.com/apache/hadoop/pull/5705#discussion_r1212058864


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/amrmproxy/FederationInterceptor.java:
##
@@ -590,18 +590,9 @@ private Map> 
recoverSubClusterAMRMTokenIdenti
   // Save the registration request. This will be used for registering with
   // secondary sub-clusters using UAMs, as well as re-register later
   this.amRegistrationRequest = request;
-  if (getNMStateStore() != null) {
-try {
-  RegisterApplicationMasterRequestPBImpl pb =
-  (RegisterApplicationMasterRequestPBImpl)
-  this.amRegistrationRequest;
-  getNMStateStore().storeAMRMProxyAppContextEntry(this.attemptId,
-  NMSS_REG_REQUEST_KEY, pb.getProto().toByteArray());
-} catch (Exception e) {
-  LOG.error("Error storing AMRMProxy application context entry for "
-  + this.attemptId, e);
-}
-  }
+  RegisterApplicationMasterRequestPBImpl requestPB = 
(RegisterApplicationMasterRequestPBImpl)

Review Comment:
   How are we sure this is all initialized now?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] simbadzina commented on a diff in pull request #5700: HDFS-17030. Limit wait time for getHAServiceState in ObserverReaderProxy

2023-05-31 Thread via GitHub


simbadzina commented on code in PR #5700:
URL: https://github.com/apache/hadoop/pull/5700#discussion_r1212029782


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java:
##
@@ -155,12 +173,22 @@
*/
   private long observerProbeRetryPeriodMs;
 
+  /**
+   * Timeout in seconds when we try to get the HA state of an namenode.

Review Comment:
   Nit : `a namenode`



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java:
##
@@ -88,6 +96,16 @@
   /** Observer probe retry period default to 10 min. */
   static final long OBSERVER_PROBE_RETRY_PERIOD_DEFAULT = 60 * 10 * 1000;
 
+  /** Timeout to cancel the ha-state probe rpc request for an namenode. */
+  static final String NAMENODE_HA_STATE_PROBE_TIMEOUT =
+  "dfs.client.namenode.ha-state.probe.timeout";

Review Comment:
   Can you use `HdfsClientConfigKeys.Failover.PREFIX` as the prefix for this 
config.
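
   A minimal sketch of that suggestion (illustrative only; the exact suffix and 
constant name in the final patch may differ):
   ```
   import org.apache.hadoop.hdfs.client.HdfsClientConfigKeys;

   final class HaStateProbeKeySketch {
     // Builds the key on the existing failover prefix ("dfs.client.failover."),
     // as suggested above.
     static final String NAMENODE_HA_STATE_PROBE_TIMEOUT =
         HdfsClientConfigKeys.Failover.PREFIX
             + "namenode.ha-state.probe.timeout";

     private HaStateProbeKeySketch() {
     }
   }
   ```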



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java:
##
@@ -213,6 +241,13 @@ public ObserverReadProxyProvider(
 observerProbeRetryPeriodMs = conf.getTimeDuration(
 OBSERVER_PROBE_RETRY_PERIOD_KEY,
 OBSERVER_PROBE_RETRY_PERIOD_DEFAULT, TimeUnit.MILLISECONDS);
+namenodeHAStateProbeTimeoutSec = conf.getTimeDuration(
+NAMENODE_HA_STATE_PROBE_TIMEOUT,
+NAMENODE_HA_STATE_PROBE_TIMEOUT_DEFAULT, TimeUnit.SECONDS);
+// Disallow negative values for namenodeHAStateProbeTimeoutSec
+if (namenodeHAStateProbeTimeoutSec < 0) {
+  namenodeHAStateProbeTimeoutSec = NAMENODE_HA_STATE_PROBE_TIMEOUT_DEFAULT;
+}

Review Comment:
   Can you make a negative value mean no timeout instead? That's the typical 
meaning of a negative value in other configs.
   
   If you really want to enforce having a timeout, then it may be better to 
log and throw an error here.
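
   A hedged sketch of that alternative (illustrative only, reusing the key and 
default names from the quoted diff; the committed patch may behave differently):
   ```
   import java.util.concurrent.TimeUnit;

   import org.apache.hadoop.conf.Configuration;

   final class ProbeTimeoutReadingSketch {
     // A non-positive value disables the bounded wait instead of being
     // silently replaced by the default.
     static long readProbeTimeoutSec(Configuration conf, String key,
         long defaultSec) {
       long timeoutSec = conf.getTimeDuration(key, defaultSec, TimeUnit.SECONDS);
       // <= 0 is interpreted as "no timeout"; callers then issue the plain
       // blocking getHAServiceState() probe.
       return timeoutSec <= 0 ? -1L : timeoutSec;
     }

     private ProbeTimeoutReadingSketch() {
     }
   }
   ```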



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java:
##
@@ -284,13 +319,68 @@ private synchronized NNProxyInfo 
changeProxy(NNProxyInfo initial) {
 }
 currentIndex = (currentIndex + 1) % nameNodeProxies.size();
 currentProxy = createProxyIfNeeded(nameNodeProxies.get(currentIndex));
-currentProxy.setCachedState(getHAServiceState(currentProxy));
+currentProxy.setCachedState(getHAServiceStateWithTimeout(currentProxy));
 LOG.debug("Changed current proxy from {} to {}",
 initial == null ? "none" : initial.proxyInfo,
 currentProxy.proxyInfo);
 return currentProxy;
   }
 
+  /**
+   * Execute getHAServiceState() call with a timeout, to avoid a long wait when
+   * an NN becomes irresponsive to rpc requests
+   * (when a thread/heap dump is being taken, e.g.).
+   *
+   * For each getHAServiceState() call, a task is created and submitted to a
+   * threadpool for execution. We will wait for a response up to
+   * namenodeHAStateProbeTimeoutSec and cancel these requests if they time out.
+   *
+   * The implemention is split into two functions so that we can unit test
+   * the second function.
+   */
+  HAServiceState getHAServiceStateWithTimeout(final NNProxyInfo proxyInfo) {
+
+Callable<HAServiceState> getHAServiceStateTask =
+new Callable<HAServiceState>() {
+  @Override
+  public HAServiceState call() {
+return getHAServiceState(proxyInfo);
+  }
+};

Review Comment:
   Replacing with a lambda is a bit cleaner.
   ```
   Callable<HAServiceState> getHAServiceStateTask =
       () -> getHAServiceState(proxyInfo);
   ```
   
   You can re-introduce the anonymous Callable when you need to backport to a 
version without Java 8 support.
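
   For context, a compact sketch of the wait-and-cancel half that the javadoc 
above describes (hypothetical helper names, not the actual patch; it assumes a 
Future obtained from the probing pool and a timeout in seconds):
   ```
   import java.util.concurrent.ExecutionException;
   import java.util.concurrent.Future;
   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.TimeoutException;

   import org.apache.hadoop.ha.HAServiceProtocol.HAServiceState;

   final class HaStateProbeWaitSketch {
     // Wait up to timeoutSec for the probe; cancel it on timeout and return
     // null so the caller can fall back to using the active NN.
     static HAServiceState waitForState(Future<HAServiceState> task,
         long timeoutSec) {
       try {
         return task.get(timeoutSec, TimeUnit.SECONDS);
       } catch (TimeoutException e) {
         task.cancel(true);  // interrupt the hung getHAServiceState() RPC
         return null;
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         return null;
       } catch (ExecutionException e) {
         return null;
       }
     }

     private HaStateProbeWaitSketch() {
     }
   }
   ```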



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java:
##
@@ -284,13 +319,68 @@ private synchronized NNProxyInfo 
changeProxy(NNProxyInfo initial) {
 }
 currentIndex = (currentIndex + 1) % nameNodeProxies.size();
 currentProxy = createProxyIfNeeded(nameNodeProxies.get(currentIndex));
-currentProxy.setCachedState(getHAServiceState(currentProxy));
+currentProxy.setCachedState(getHAServiceStateWithTimeout(currentProxy));
 LOG.debug("Changed current proxy from {} to {}",
 initial == null ? "none" : initial.proxyInfo,
 currentProxy.proxyInfo);
 return currentProxy;
   }
 
+  /**
+   * Execute getHAServiceState() call with a timeout, to avoid a long wait when
+   * an NN becomes irresponsive to rpc requests
+   * (when a thread/heap dump is being taken, e.g.).
+   *
+   * For each getHAServiceState() call, a task is created and submitted to a
+   * threadpool for execution. We will wait for a response up to
+   * namenodeHAStateProbeTimeoutSec and cancel these requests if they time out.
+   *
+   * The 

[GitHub] [hadoop] goiri merged pull request #5693: HDFS-17027. RBF: Adds auto-msync support for clients connecting to routers.

2023-05-31 Thread via GitHub


goiri merged PR #5693:
URL: https://github.com/apache/hadoop/pull/5693


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on pull request #5691: HDFS-17026. RBF: NamenodeHeartbeatService should update JMX report with configurable frequency

2023-05-31 Thread via GitHub


goiri commented on PR #5691:
URL: https://github.com/apache/hadoop/pull/5691#issuecomment-1570615115

   I think we have approval quorum.
   I'll go ahead and merge it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a diff in pull request #5700: HDFS-17030. Limit wait time for getHAServiceState in ObserverReaderProxy

2023-05-31 Thread via GitHub


goiri commented on code in PR #5700:
URL: https://github.com/apache/hadoop/pull/5700#discussion_r1212047828


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java:
##
@@ -284,13 +319,68 @@ private synchronized NNProxyInfo 
changeProxy(NNProxyInfo initial) {
 }
 currentIndex = (currentIndex + 1) % nameNodeProxies.size();
 currentProxy = createProxyIfNeeded(nameNodeProxies.get(currentIndex));
-currentProxy.setCachedState(getHAServiceState(currentProxy));
+currentProxy.setCachedState(getHAServiceStateWithTimeout(currentProxy));
 LOG.debug("Changed current proxy from {} to {}",
 initial == null ? "none" : initial.proxyInfo,
 currentProxy.proxyInfo);
 return currentProxy;
   }
 
+  /**
+   * Execute getHAServiceState() call with a timeout, to avoid a long wait when
+   * an NN becomes irresponsive to rpc requests
+   * (when a thread/heap dump is being taken, e.g.).
+   *
+   * For each getHAServiceState() call, a task is created and submitted to a
+   * threadpool for execution. We will wait for a response up to
+   * namenodeHAStateProbeTimeoutSec and cancel these requests if they time out.
+   *
+   * The implemention is split into two functions so that we can unit test
+   * the second function.
+   */
+  HAServiceState getHAServiceStateWithTimeout(final NNProxyInfo proxyInfo) {
+
+Callable<HAServiceState> getHAServiceStateTask =

Review Comment:
   Can this be a lambda?



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java:
##
@@ -88,6 +96,16 @@
   /** Observer probe retry period default to 10 min. */
   static final long OBSERVER_PROBE_RETRY_PERIOD_DEFAULT = 60 * 10 * 1000;
 
+  /** Timeout to cancel the ha-state probe rpc request for an namenode. */
+  static final String NAMENODE_HA_STATE_PROBE_TIMEOUT =
+  "dfs.client.namenode.ha-state.probe.timeout";
+  /**
+   * Namenode ha-state probe timeout default to 25 sec.
+   * ipc.client.connect.timeout defaults to be 20 seconds. So, in 25 seconds,
+   * we can try twice to connect to an NN.
+   */
+  static final long NAMENODE_HA_STATE_PROBE_TIMEOUT_DEFAULT = 25;

Review Comment:
   Use TimeUnit.
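
   One possible reading of that suggestion, shown only as an illustration (the 
committed constant may instead keep seconds as its unit):
   ```
   import java.util.concurrent.TimeUnit;

   final class ProbeTimeoutDefaultSketch {
     // 25 seconds expressed via TimeUnit so the unit is explicit at the
     // definition site; stored here in milliseconds, which would also require
     // reading the key with TimeUnit.MILLISECONDS.
     static final long NAMENODE_HA_STATE_PROBE_TIMEOUT_DEFAULT =
         TimeUnit.SECONDS.toMillis(25);

     private ProbeTimeoutDefaultSketch() {
     }
   }
   ```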



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java:
##
@@ -155,12 +173,22 @@
*/
   private long observerProbeRetryPeriodMs;
 
+  /**
+   * Timeout in seconds when we try to get the HA state of an namenode.
+   */
+  @VisibleForTesting
+  private long namenodeHAStateProbeTimeoutSec;
+
   /**
* The previous time where zero observer were found. If there was observer,
* or it is initialization, this is set to 0.
*/
   private long lastObserverProbeTime;
 
+  private final ExecutorService nnProbingThreadPool =
+  new ThreadPoolExecutor(1, 4, 6L, TimeUnit.MILLISECONDS,

Review Comment:
   Give constant names to the magical numbers.
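
   A hedged sketch of that suggestion with made-up constant names; the pool 
parameters (1 core thread, 4 max threads, 6 ms keep-alive) come from the quoted 
diff, and the work queue shown here is only an assumption since the diff is 
truncated:
   ```
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.SynchronousQueue;
   import java.util.concurrent.ThreadPoolExecutor;
   import java.util.concurrent.TimeUnit;

   final class ProbingPoolSketch {
     private static final int PROBE_POOL_CORE_THREADS = 1;
     private static final int PROBE_POOL_MAX_THREADS = 4;
     private static final long PROBE_POOL_KEEP_ALIVE_MS = 6L;

     // Same pool shape as in the diff, but with the magic numbers named. A
     // SynchronousQueue makes submissions beyond the max in-flight probes fail
     // with RejectedExecutionException, which the patch already handles.
     static ExecutorService newProbingPool() {
       return new ThreadPoolExecutor(
           PROBE_POOL_CORE_THREADS, PROBE_POOL_MAX_THREADS,
           PROBE_POOL_KEEP_ALIVE_MS, TimeUnit.MILLISECONDS,
           new SynchronousQueue<>());
     }

     private ProbingPoolSketch() {
     }
   }
   ```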



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/ObserverReadProxyProvider.java:
##
@@ -284,13 +319,68 @@ private synchronized NNProxyInfo 
changeProxy(NNProxyInfo initial) {
 }
 currentIndex = (currentIndex + 1) % nameNodeProxies.size();
 currentProxy = createProxyIfNeeded(nameNodeProxies.get(currentIndex));
-currentProxy.setCachedState(getHAServiceState(currentProxy));
+currentProxy.setCachedState(getHAServiceStateWithTimeout(currentProxy));
 LOG.debug("Changed current proxy from {} to {}",
 initial == null ? "none" : initial.proxyInfo,
 currentProxy.proxyInfo);
 return currentProxy;
   }
 
+  /**
+   * Execute getHAServiceState() call with a timeout, to avoid a long wait when
+   * an NN becomes irresponsive to rpc requests
+   * (when a thread/heap dump is being taken, e.g.).
+   *
+   * For each getHAServiceState() call, a task is created and submitted to a
+   * threadpool for execution. We will wait for a response up to
+   * namenodeHAStateProbeTimeoutSec and cancel these requests if they time out.
+   *
+   * The implemention is split into two functions so that we can unit test
+   * the second function.
+   */
+  HAServiceState getHAServiceStateWithTimeout(final NNProxyInfo proxyInfo) {
+
+Callable<HAServiceState> getHAServiceStateTask =
+new Callable<HAServiceState>() {
+  @Override
+  public HAServiceState call() {
+return getHAServiceState(proxyInfo);
+  }
+};
+
+try {
+  Future<HAServiceState> task =
+  nnProbingThreadPool.submit(getHAServiceStateTask);
+  return getHAServiceStateWithTimeout(proxyInfo, task);
+} catch (RejectedExecutionException e) {
+  LOG.debug("Run out of threads to submit the request to query HA state. "
+  + "Ok to return null and we will fallback to use active NN to serve "
+  + "this request.");
+  

[GitHub] [hadoop] hadoop-yetus commented on pull request #3618: YARN-11000. Replace queue resource calculation logic in updateClusterResource

2023-05-31 Thread via GitHub


hadoop-yetus commented on PR #3618:
URL: https://github.com/apache/hadoop/pull/3618#issuecomment-1570600750

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  jsonlint  |   0m  1s |  |  jsonlint was not available.  |
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 72 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 57s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   2m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m 52s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 50s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3618/19/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 126 new + 1158 unchanged - 6 fixed = 1284 total (was 1164) 
 |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 51s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 118m 23s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3618/19/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 216m 32s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoCreatedQueuePreemption
 |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestAbsoluteResourceConfiguration
 |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerWeightMode
 |
   |   | 
hadoop.yarn.server.resourcemanager.webapp.TestRMWebServiceAppsNodelabel |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerNewQueueAutoCreation
 |
   |   | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueParsing 
|
   |   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5705: YARN-11502. Refactor AMRMProxy#FederationInterceptor#registerApplicationMaster.

2023-05-31 Thread via GitHub


hadoop-yetus commented on PR #5705:
URL: https://github.com/apache/hadoop/pull/5705#issuecomment-1570592864

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 35s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  23m 59s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 117m  8s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5705/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5705 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 28e3fdf7a1ce 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 69a5eca928cd476c5db84f32be3d653f3c20d26b |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5705/1/testReport/ |
   | Max. process+thread count | 709 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5705/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above 

[GitHub] [hadoop] simbadzina commented on a diff in pull request #5702: YARN-11500. Fix typos in hadoop-yarn-server-common#federation.

2023-05-31 Thread via GitHub


simbadzina commented on code in PR #5702:
URL: https://github.com/apache/hadoop/pull/5702#discussion_r1212010564


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/federation/policies/manager/FederationPolicyManager.java:
##
@@ -25,14 +25,14 @@
 
 /**
  *
- * Implementors need to provide the ability to serliaze a policy and its
+ * Implementors need to provide the ability to serialize a policy and its
  * configuration as a {@link SubClusterPolicyConfiguration}, as well as provide
  * (re)initialization mechanics for the underlying
  * {@link FederationAMRMProxyPolicy} and {@link FederationRouterPolicy}.
  *
  * The serialization aspects are used by admin APIs or a policy engine to store
  * a serialized configuration in the {@code FederationStateStore}, while the
- * getters methods are used to obtain a propertly inizialized policy in the
+ * getters methods are used to obtain a property initialized policy in the

Review Comment:
   Should this be `properly`?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #5676: YARN-6648. BackPort [GPG] Add SubClusterCleaner in Global Policy Generator.

2023-05-31 Thread via GitHub


slfan1989 commented on PR #5676:
URL: https://github.com/apache/hadoop/pull/5676#issuecomment-1570412732

   @goiri Can you help to merge this pr into the trunk branch? Thank you very 
much! I will continue to backport YARN-7707.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 opened a new pull request, #5705: YARN-11502. Refactor AMRMProxy#FederationInterceptor#registerApplicationMaster.

2023-05-31 Thread via GitHub


slfan1989 opened a new pull request, #5705:
URL: https://github.com/apache/hadoop/pull/5705

   
   
   ### Description of PR
   
   JIRA: YARN-11502. Refactor 
AMRMProxy#FederationInterceptor#registerApplicationMaster.
   
   Refactor the code of FederationInterceptor#registerApplicationMaster to 
improve code readability and remove redundancy.
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5698: HDFS-17029. Support getECPolices API in WebHDFS

2023-05-31 Thread via GitHub


hadoop-yetus commented on PR #5698:
URL: https://github.com/apache/hadoop/pull/5698#issuecomment-1570384537

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  4s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   5m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   5m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 55s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   4m 55s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 30s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 29s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 202m 35s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5698/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 336m 56s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5698/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5698 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux d32fd60ca720 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d51741415c7061e50a6bcb257a04bcf228fbcc91 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5698/4/testReport/ |
   | Max. process+thread count | 3599 (vs. 

[jira] [Commented] (HADOOP-18740) s3a prefetch cache blocks should be accessed by RW locks

2023-05-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17727999#comment-17727999
 ] 

ASF GitHub Bot commented on HADOOP-18740:
-

steveloughran commented on code in PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#discussion_r1211811893


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##
@@ -99,6 +106,32 @@ public String toString() {
   "([%03d] %s: size = %d, checksum = %d)",
   blockNumber, path, size, checksum);
 }
+
+/**
+ * Take the read or write lock.
+ *
+ * @param lockType type of the lock.
+ */
+void takeLock(LockType lockType) {
+  if (LockType.READ == lockType) {
+this.lock.readLock().lock();

Review Comment:
   remove the `this.`



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##
@@ -99,6 +106,32 @@ public String toString() {
   "([%03d] %s: size = %d, checksum = %d)",
   blockNumber, path, size, checksum);
 }
+
+/**
+ * Take the read or write lock.
+ *
+ * @param lockType type of the lock.
+ */
+void takeLock(LockType lockType) {
+  if (LockType.READ == lockType) {
+this.lock.readLock().lock();
+  } else if (LockType.WRITE == lockType) {
+this.lock.writeLock().lock();
+  }
+}
+
+/**
+ * Release the read or write lock.
+ *
+ * @param lockType type of the lock.
+ */
+void releaseLock(LockType lockType) {
+  if (LockType.READ == lockType) {
+this.lock.readLock().unlock();

Review Comment:
   remove the `this.`



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##
@@ -268,12 +310,15 @@ public void close() throws IOException {
 int numFilesDeleted = 0;
 
 for (Entry entry : blocks.values()) {
+  entry.takeLock(Entry.LockType.WRITE);

Review Comment:
   should we be acquiring locks in close()?
   good: no race condition in close
   bad: the usual
   
   also, L303: should closed be atomic?





> s3a prefetch cache blocks should be accessed by RW locks
> 
>
> Key: HADOOP-18740
> URL: https://issues.apache.org/jira/browse/HADOOP-18740
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> In order to implement LRU or LFU based cache removal policies for s3a 
> prefetched cache blocks, it is important for all cache reader threads to 
> acquire read lock and similarly cache file removal mechanism (fs close or 
> cache eviction) to acquire write lock before accessing the files.
> As we maintain the block entries in an in-memory map, we should be able to 
> introduce read-write lock per cache file entry, we don't need coarse-grained 
> lock shared by all entries.
>  
> This is a prerequisite to HADOOP-18291.
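
A compact sketch of the access pattern described above, with hypothetical names 
(the actual patch wires this through SingleFilePerBlockCache.Entry): readers 
take the per-entry read lock, while close/eviction takes the write lock.

```
import java.util.concurrent.locks.ReentrantReadWriteLock;

final class BlockEntrySketch {
  // One lock per cached block entry, so readers do not block each other
  // while eviction waits for exclusive access to that entry only.
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

  void readBlock() {
    lock.readLock().lock();
    try {
      // copy bytes out of the cache file backing this block
    } finally {
      lock.readLock().unlock();
    }
  }

  void evict() {
    lock.writeLock().lock();
    try {
      // delete the cache file backing this block
    } finally {
      lock.writeLock().unlock();
    }
  }
}
```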



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a diff in pull request #5675: HADOOP-18740. S3A prefetch cache blocks should be accessed by RW locks

2023-05-31 Thread via GitHub


steveloughran commented on code in PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#discussion_r1211811893


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##
@@ -99,6 +106,32 @@ public String toString() {
   "([%03d] %s: size = %d, checksum = %d)",
   blockNumber, path, size, checksum);
 }
+
+/**
+ * Take the read or write lock.
+ *
+ * @param lockType type of the lock.
+ */
+void takeLock(LockType lockType) {
+  if (LockType.READ == lockType) {
+this.lock.readLock().lock();

Review Comment:
   remove the `this.`



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##
@@ -99,6 +106,32 @@ public String toString() {
   "([%03d] %s: size = %d, checksum = %d)",
   blockNumber, path, size, checksum);
 }
+
+/**
+ * Take the read or write lock.
+ *
+ * @param lockType type of the lock.
+ */
+void takeLock(LockType lockType) {
+  if (LockType.READ == lockType) {
+this.lock.readLock().lock();
+  } else if (LockType.WRITE == lockType) {
+this.lock.writeLock().lock();
+  }
+}
+
+/**
+ * Release the read or write lock.
+ *
+ * @param lockType type of the lock.
+ */
+void releaseLock(LockType lockType) {
+  if (LockType.READ == lockType) {
+this.lock.readLock().unlock();

Review Comment:
   remove the `this.`



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/prefetch/SingleFilePerBlockCache.java:
##
@@ -268,12 +310,15 @@ public void close() throws IOException {
 int numFilesDeleted = 0;
 
 for (Entry entry : blocks.values()) {
+  entry.takeLock(Entry.LockType.WRITE);

Review Comment:
   should we be acquiring locks in close()?
   good: no race condition in close
   bad: the usual
   
   also, L303: should closed be atomic?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18740) s3a prefetch cache blocks should be accessed by RW locks

2023-05-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17727993#comment-17727993
 ] 

ASF GitHub Bot commented on HADOOP-18740:
-

steveloughran commented on PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#issuecomment-1570335711

   @MayankSinghParmar get on the hadoop hdfs mailing list and discuss there.
   
   @virajjasani can you rebase and retest?




> s3a prefetch cache blocks should be accessed by RW locks
> 
>
> Key: HADOOP-18740
> URL: https://issues.apache.org/jira/browse/HADOOP-18740
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> In order to implement LRU or LFU based cache removal policies for s3a 
> prefetched cache blocks, it is important for all cache reader threads to 
> acquire read lock and similarly cache file removal mechanism (fs close or 
> cache eviction) to acquire write lock before accessing the files.
> As we maintain the block entries in an in-memory map, we should be able to 
> introduce read-write lock per cache file entry, we don't need coarse-grained 
> lock shared by all entries.
>  
> This is a prerequisite to HADOOP-18291.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5675: HADOOP-18740. S3A prefetch cache blocks should be accessed by RW locks

2023-05-31 Thread via GitHub


steveloughran commented on PR #5675:
URL: https://github.com/apache/hadoop/pull/5675#issuecomment-1570335711

   @MayankSinghParmar get on the hadoop hdfs mailing list and discuss there.
   
   @virajjasani can you rebase and retest?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18756) CachingBlockManager to use AtomicBoolean for closed flag

2023-05-31 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18756:

Summary: CachingBlockManager to use AtomicBoolean for closed flag  (was: 
CachingBlockManager to use AtomicBoolean for closed fiag)

> CachingBlockManager to use AtomicBoolean for closed flag
> 
>
> Key: HADOOP-18756
> URL: https://issues.apache.org/jira/browse/HADOOP-18756
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.9
>Reporter: Steve Loughran
>Priority: Major
>
> the {{CachingBlockManager}} uses the boolean field {{closed}} in various 
> operations, including a do/while loop. To ensure the flag is correctly 
> updated across threads, it needs to move to an atomic boolean.
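
A minimal sketch of the proposed change (hypothetical, not the eventual patch): 
an AtomicBoolean closed flag that the worker loop re-reads on every iteration, 
so a close() from another thread is observed promptly.

```
import java.util.concurrent.atomic.AtomicBoolean;

final class ClosedFlagSketch {
  private final AtomicBoolean closed = new AtomicBoolean(false);

  // Flips the flag exactly once; safe to call from any thread.
  void close() {
    closed.compareAndSet(false, true);
  }

  // The do/while loop checks the flag on each pass and stops once closed.
  void prefetchLoop() {
    do {
      // fetch the next block, write it to the local cache file, etc.
    } while (!closed.get());
  }
}
```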



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18756) CachingBlockManager to use AtomicBoolean for closed fiag

2023-05-31 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-18756:
---

 Summary: CachingBlockManager to use AtomicBoolean for closed fiag
 Key: HADOOP-18756
 URL: https://issues.apache.org/jira/browse/HADOOP-18756
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.9
Reporter: Steve Loughran


the {{CachingBlockManager}} uses the boolean field {{closed}} in various 
operations, including a do/while loop. To ensure the flag is correctly updated 
across threads, it needs to move to an atomic boolean.





--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomicooler commented on pull request #5623: YARN-11498. Exclude jettison from jersey-json artifact as on older version is being pulled

2023-05-31 Thread via GitHub


tomicooler commented on PR #5623:
URL: https://github.com/apache/hadoop/pull/5623#issuecomment-1570310784

   Hi @devaspatikrishnatri ,
   
   you can re-trigger the build by creating an empty commit `git commit 
--allow-empty -m "Trigger the jenkins job"` then pushing it to your fork's 
branch `git push origin HADOOP-18732`.
   
   I think the reason for this exclusion should be documented in the Jira; as 
far as I know, the reason is:
   
   ```
   An older version of Jetty is being pulled in by jersey-json artifact in 
hadoop-yarn-common, which contains CVEs.
   ```
   
   https://mvnrepository.com/artifact/com.sun.jersey/jersey-json/1.19.4
   
   
   BTW jackson-mapper-asl 1.9.2 also has 2 CVEs 
(https://mvnrepository.com/artifact/org.codehaus.jackson/jackson-mapper-asl/1.9.2).
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lfxy commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation

2023-05-31 Thread via GitHub


lfxy commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1211724811


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java:
##
@@ -564,4 +566,113 @@ public void testConcatOnSameFile() throws Exception {
 
 assertEquals(1, dfs.getContentSummary(new Path(dir)).getFileCount());
   }
+
+  /**
+   * Verifies concat with wrong user when dfs.permissions.enabled is false.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testConcatPermissionEnabled() throws IOException {
+Configuration conf2 = new Configuration();
+conf2.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, true);
+MiniDFSCluster cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+DistributedFileSystem dfs2 = cluster2.getFileSystem();
+
+String testPathDir = "/dir2";
+Path dir = new Path(testPathDir);
+dfs2.mkdirs(dir);
+Path trg = new Path(testPathDir, "trg");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
true.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"tardis"});
+DistributedFileSystem hdfs1 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs1.concat(trg, new Path[] {src});
+  fail("Permission exception expected");
+} catch (IOException ie) {
+  LOG.info("Got expected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, false);
+cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+dfs2 = cluster2.getFileSystem();
+dfs2.mkdirs(dir);
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
false.
+DistributedFileSystem hdfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs2.concat(trg, new Path[] {src});
+} catch (IOException ie) {
+  fail("Got unexpected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+  }
+
+  /**
+   * Test permissions of Concat operation.
+   */
+  @Test
+  public void testConcatPermissions() throws IOException {
+String testPathDir = "/dir";
+Path dir = new Path(testPathDir);
+dfs.mkdirs(dir);
+dfs.setPermission(dir, new FsPermission((short) 0777));
+
+Path dst = new Path(testPathDir, "dst");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs, dst, blockSize, REPL_FACTOR, 1);
+
+// Create a user who is not the owner of the file and try concat operation.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"group"});
+DistributedFileSystem dfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf);
+
+// Test 1: User is not the owner of the file and has src & dst permission.
+DFSTestUtil.createFile(dfs, src, blockSize, REPL_FACTOR, 1);
+dfs.setPermission(dst, new FsPermission((short) 0777));
+dfs.setPermission(src, new FsPermission((short) 0777));
+try {
+  dfs2.concat(dst, new Path[] {src});
+} catch (AccessControlException ace) {
+  fail("Got unexpected exception:" + ace.getLocalizedMessage());
+}

Review Comment:
   @ayushtkn I have updated the UT, please help to review. Thank you for your 
patient answer.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18709) Add curator based ZooKeeper communication support over SSL/TLS into the common library

2023-05-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17727938#comment-17727938
 ] 

ASF GitHub Bot commented on HADOOP-18709:
-

hadoop-yetus commented on PR #5638:
URL: https://github.com/apache/hadoop/pull/5638#issuecomment-1570164034

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  16m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  16m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |  15m 57s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 19s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 191m 21s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5638/29/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5638 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux e7c99d323ae3 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7e9ab3d63e82a21973371c48512833b0cc7a7827 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5638/29/testReport/ |
   | Max. process+thread count | 3144 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5638/29/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5638: HADOOP-18709. Add curator based ZooKeeper communication support over…

2023-05-31 Thread via GitHub


hadoop-yetus commented on PR #5638:
URL: https://github.com/apache/hadoop/pull/5638#issuecomment-1570164034

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  17m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |  16m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |  16m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |  15m 57s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 19s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 191m 21s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5638/29/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5638 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux e7c99d323ae3 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7e9ab3d63e82a21973371c48512833b0cc7a7827 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5638/29/testReport/ |
   | Max. process+thread count | 3144 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5638/29/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[GitHub] [hadoop] lfxy commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation

2023-05-31 Thread via GitHub


lfxy commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1211643942


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java:
##
@@ -564,4 +566,113 @@ public void testConcatOnSameFile() throws Exception {
 
 assertEquals(1, dfs.getContentSummary(new Path(dir)).getFileCount());
   }
+
+  /**
+   * Verifies concat with wrong user when dfs.permissions.enabled is false.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testConcatPermissionEnabled() throws IOException {
+Configuration conf2 = new Configuration();
+conf2.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, true);
+MiniDFSCluster cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+DistributedFileSystem dfs2 = cluster2.getFileSystem();
+
+String testPathDir = "/dir2";
+Path dir = new Path(testPathDir);
+dfs2.mkdirs(dir);
+Path trg = new Path(testPathDir, "trg");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
true.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"tardis"});
+DistributedFileSystem hdfs1 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs1.concat(trg, new Path[] {src});
+  fail("Permission exception expected");
+} catch (IOException ie) {
+  LOG.info("Got expected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, false);
+cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+dfs2 = cluster2.getFileSystem();
+dfs2.mkdirs(dir);
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
false.
+DistributedFileSystem hdfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs2.concat(trg, new Path[] {src});
+} catch (IOException ie) {
+  fail("Got unexpected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+  }
+
+  /**
+   * Test permissions of Concat operation.
+   */
+  @Test
+  public void testConcatPermissions() throws IOException {
+String testPathDir = "/dir";
+Path dir = new Path(testPathDir);
+dfs.mkdirs(dir);
+dfs.setPermission(dir, new FsPermission((short) 0777));
+
+Path dst = new Path(testPathDir, "dst");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs, dst, blockSize, REPL_FACTOR, 1);
+
+// Create a user who is not the owner of the file and try concat operation.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"group"});
+DistributedFileSystem dfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf);
+
+// Test 1: User is not the owner of the file and has src & dst permission.
+DFSTestUtil.createFile(dfs, src, blockSize, REPL_FACTOR, 1);
+dfs.setPermission(dst, new FsPermission((short) 0777));
+dfs.setPermission(src, new FsPermission((short) 0777));
+try {
+  dfs2.concat(dst, new Path[] {src});
+} catch (AccessControlException ace) {
+  fail("Got unexpected exception:" + ace.getLocalizedMessage());
+}

Review Comment:
   @ayushtkn Could you give me an example of how to use a finally block together 
with LambdaTestUtils.intercept in code like the snippet below? Thanks.
   LambdaTestUtils.intercept(AccessControlException.class,
       "Permission denied: user=theDoctor, access=WRITE",
       () -> hdfs1.concat(trg, new Path[] {src}));
   cluster2.shutdown();



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation

2023-05-31 Thread via GitHub


ayushtkn commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1211602315


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java:
##
@@ -564,4 +566,113 @@ public void testConcatOnSameFile() throws Exception {
 
 assertEquals(1, dfs.getContentSummary(new Path(dir)).getFileCount());
   }
+
+  /**
+   * Verifies concat with wrong user when dfs.permissions.enabled is false.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testConcatPermissionEnabled() throws IOException {
+Configuration conf2 = new Configuration();
+conf2.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, true);
+MiniDFSCluster cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+DistributedFileSystem dfs2 = cluster2.getFileSystem();
+
+String testPathDir = "/dir2";
+Path dir = new Path(testPathDir);
+dfs2.mkdirs(dir);
+Path trg = new Path(testPathDir, "trg");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
true.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"tardis"});
+DistributedFileSystem hdfs1 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs1.concat(trg, new Path[] {src});
+  fail("Permission exception expected");
+} catch (IOException ie) {
+  LOG.info("Got expected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, false);
+cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+dfs2 = cluster2.getFileSystem();
+dfs2.mkdirs(dir);
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
false.
+DistributedFileSystem hdfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs2.concat(trg, new Path[] {src});
+} catch (IOException ie) {
+  fail("Got unexpected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+  }
+
+  /**
+   * Test permissions of Concat operation.
+   */
+  @Test
+  public void testConcatPermissions() throws IOException {
+String testPathDir = "/dir";
+Path dir = new Path(testPathDir);
+dfs.mkdirs(dir);
+dfs.setPermission(dir, new FsPermission((short) 0777));
+
+Path dst = new Path(testPathDir, "dst");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs, dst, blockSize, REPL_FACTOR, 1);
+
+// Create a user who is not the owner of the file and try concat operation.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"group"});
+DistributedFileSystem dfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf);
+
+// Test 1: User is not the owner of the file and has src & dst permission.
+DFSTestUtil.createFile(dfs, src, blockSize, REPL_FACTOR, 1);
+dfs.setPermission(dst, new FsPermission((short) 0777));
+dfs.setPermission(src, new FsPermission((short) 0777));
+try {
+  dfs2.concat(dst, new Path[] {src});
+} catch (AccessControlException ace) {
+  fail("Got unexpected exception:" + ace.getLocalizedMessage());
+}

Review Comment:
   1. If it doesn't throw an exception, there is no need for LambdaTestUtils.
   2. Yes, use a finally block to shut down the cluster.
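   For illustration, a rough sketch of how those two points could be combined in 
the test (a sketch only, assuming the cluster2, hdfs1, trg and src names from 
the quoted test code and the usual org.apache.hadoop.test.LambdaTestUtils 
import; not the final patch):
   
   try {
     // Expect an AccessControlException whose message contains the given fragment.
     LambdaTestUtils.intercept(AccessControlException.class,
         "Permission denied",
         () -> hdfs1.concat(trg, new Path[] {src}));
   } finally {
     // Shut the mini cluster down even if the assertion above fails.
     cluster2.shutdown();
   }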



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lfxy commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation

2023-05-31 Thread via GitHub


lfxy commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1211583457


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java:
##
@@ -564,4 +566,113 @@ public void testConcatOnSameFile() throws Exception {
 
 assertEquals(1, dfs.getContentSummary(new Path(dir)).getFileCount());
   }
+
+  /**
+   * Verifies concat with wrong user when dfs.permissions.enabled is false.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testConcatPermissionEnabled() throws IOException {
+Configuration conf2 = new Configuration();
+conf2.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, true);
+MiniDFSCluster cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+DistributedFileSystem dfs2 = cluster2.getFileSystem();
+
+String testPathDir = "/dir2";
+Path dir = new Path(testPathDir);
+dfs2.mkdirs(dir);
+Path trg = new Path(testPathDir, "trg");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
true.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"tardis"});
+DistributedFileSystem hdfs1 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs1.concat(trg, new Path[] {src});
+  fail("Permission exception expected");
+} catch (IOException ie) {
+  LOG.info("Got expected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, false);
+cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+dfs2 = cluster2.getFileSystem();
+dfs2.mkdirs(dir);
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
false.
+DistributedFileSystem hdfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs2.concat(trg, new Path[] {src});
+} catch (IOException ie) {
+  fail("Got unexpected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+  }
+
+  /**
+   * Test permissions of Concat operation.
+   */
+  @Test
+  public void testConcatPermissions() throws IOException {
+String testPathDir = "/dir";
+Path dir = new Path(testPathDir);
+dfs.mkdirs(dir);
+dfs.setPermission(dir, new FsPermission((short) 0777));
+
+Path dst = new Path(testPathDir, "dst");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs, dst, blockSize, REPL_FACTOR, 1);
+
+// Create a user who is not the owner of the file and try concat operation.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"group"});
+DistributedFileSystem dfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf);
+
+// Test 1: User is not the owner of the file and has src & dst permission.
+DFSTestUtil.createFile(dfs, src, blockSize, REPL_FACTOR, 1);
+dfs.setPermission(dst, new FsPermission((short) 0777));
+dfs.setPermission(src, new FsPermission((short) 0777));
+try {
+  dfs2.concat(dst, new Path[] {src});
+} catch (AccessControlException ace) {
+  fail("Got unexpected exception:" + ace.getLocalizedMessage());
+}

Review Comment:
   1. I mean, if it doesn't throw an exception, should I still use 
LambdaTestUtils.intercept?
   2. After using LambdaTestUtils.intercept, can I use a finally block to call 
cluster2.shutdown()?
   @ayushtkn



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4990: HDFS-13507. RBF RouterAdmin disable update functionality in add cmd and supports add/update one mount table with different destination

2023-05-31 Thread via GitHub


hadoop-yetus commented on PR #4990:
URL: https://github.com/apache/hadoop/pull/4990#issuecomment-1570039142

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 17s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  markdownlint  |   0m  2s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 19s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   7m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | -1 :x: |  compile  |   5m 32s | 
[/branch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4990/8/artifact/out/branch-compile-hadoop-hdfs-project-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt)
 |  hadoop-hdfs-project in trunk failed with JDK Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.  |
   | -0 :warning: |  checkstyle  |   0m 27s | 
[/buildtool-branch-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4990/8/artifact/out/buildtool-branch-checkstyle-hadoop-hdfs-project.txt)
 |  The patch fails to run checkstyle in hadoop-hdfs-project  |
   | -1 :x: |  mvnsite  |   0m 34s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4990/8/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  mvnsite  |   0m 36s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4990/8/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in trunk failed.  |
   | -1 :x: |  javadoc  |   0m 29s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4990/8/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt)
 |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.  |
   | -1 :x: |  javadoc  |   0m 29s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4990/8/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.txt)
 |  hadoop-hdfs-rbf in trunk failed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1.  |
   | -1 :x: |  javadoc  |   0m 30s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4990/8/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt)
 |  hadoop-hdfs in trunk failed with JDK Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.  |
   | -1 :x: |  javadoc  |   0m 31s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4990/8/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.txt)
 |  hadoop-hdfs-rbf in trunk failed with JDK Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09.  |
   | -1 :x: |  spotbugs  |   0m 31s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4990/8/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in trunk failed.  |
   | -1 :x: |  spotbugs  |   0m 32s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4990/8/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-rbf in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |   5m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  | 

[GitHub] [hadoop] ayushtkn merged pull request #5699: HDFS-17000. Fix faulty loop condition in TestDFSStripedOutputStreamUpdatePipeline

2023-05-31 Thread via GitHub


ayushtkn merged PR #5699:
URL: https://github.com/apache/hadoop/pull/5699


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18752) Change fs.s3a.directory.marker.retention to "keep"

2023-05-31 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17727884#comment-17727884
 ] 

ASF GitHub Bot commented on HADOOP-18752:
-

ayushtkn commented on PR #5689:
URL: https://github.com/apache/hadoop/pull/5689#issuecomment-1569931307

   yep, it is like that; there have been a lot of discussions and tickets around 
this, for example 
[HDFS-13505](https://issues.apache.org/jira/browse/HDFS-13505), which was also 
marked as incompatible and was pushed only to trunk. This comment also says the 
same thing 
(https://issues.apache.org/jira/browse/HDFS-13505?focusedCommentId=16854777=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16854777)
   
   Maybe the reasoning is this: suppose someone explicitly wanted the conf to be 
"delete" for whatever reason, and because the default value was also "delete" 
he never configured it. If you now change the default to "keep", that user who 
explicitly wanted "delete" has to notice the change and configure "delete" 
explicitly to preserve his old behaviour.
   
   Not against this change, just noting the general conventions around config 
defaults, based on what I have read or know about the compat :-) 




> Change fs.s3a.directory.marker.retention to "keep"
> --
>
> Key: HADOOP-18752
> URL: https://issues.apache.org/jira/browse/HADOOP-18752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> Change the default value of "fs.s3a.directory.marker.retention" to keep; 
> update docs to match.
> maybe include with HADOOP-17802 so we don't blow up with fewer markers being 
> created.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on pull request #5689: HADOOP-18752. Change fs.s3a.directory.marker.retention to "keep"

2023-05-31 Thread via GitHub


ayushtkn commented on PR #5689:
URL: https://github.com/apache/hadoop/pull/5689#issuecomment-1569931307

   yep, it is like that; there have been a lot of discussions and tickets around 
this, for example 
[HDFS-13505](https://issues.apache.org/jira/browse/HDFS-13505), which was also 
marked as incompatible and was pushed only to trunk. This comment also says the 
same thing 
(https://issues.apache.org/jira/browse/HDFS-13505?focusedCommentId=16854777=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16854777)
   
   Maybe the reasoning is this: suppose someone explicitly wanted the conf to be 
"delete" for whatever reason, and because the default value was also "delete" 
he never configured it. If you now change the default to "keep", that user who 
explicitly wanted "delete" has to notice the change and configure "delete" 
explicitly to preserve his old behaviour.
   
   Not against this change, just noting the general conventions around config 
defaults, based on what I have read or know about the compat :-) 
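   
   As a purely illustrative aside (a hypothetical fragment, not part of this 
PR): a user who depends on markers being deleted would, after the default flips 
to "keep", have to pin the old value explicitly, for example:
   
   Configuration conf = new Configuration();
   // Explicitly request the pre-change behaviour; with the old default this was implicit.
   conf.set("fs.s3a.directory.marker.retention", "delete");
   // "s3a://example-bucket/" is a placeholder bucket name.
   FileSystem fs = FileSystem.get(URI.create("s3a://example-bucket/"), conf);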


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation

2023-05-31 Thread via GitHub


ayushtkn commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1211469418


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java:
##
@@ -564,4 +566,113 @@ public void testConcatOnSameFile() throws Exception {
 
 assertEquals(1, dfs.getContentSummary(new Path(dir)).getFileCount());
   }
+
+  /**
+   * Verifies concat with wrong user when dfs.permissions.enabled is false.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testConcatPermissionEnabled() throws IOException {
+Configuration conf2 = new Configuration();
+conf2.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, true);
+MiniDFSCluster cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+DistributedFileSystem dfs2 = cluster2.getFileSystem();
+
+String testPathDir = "/dir2";
+Path dir = new Path(testPathDir);
+dfs2.mkdirs(dir);
+Path trg = new Path(testPathDir, "trg");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
true.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"tardis"});
+DistributedFileSystem hdfs1 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs1.concat(trg, new Path[] {src});
+  fail("Permission exception expected");
+} catch (IOException ie) {
+  LOG.info("Got expected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, false);
+cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+dfs2 = cluster2.getFileSystem();
+dfs2.mkdirs(dir);
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
false.
+DistributedFileSystem hdfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs2.concat(trg, new Path[] {src});
+} catch (IOException ie) {
+  fail("Got unexpected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+  }
+
+  /**
+   * Test permissions of Concat operation.
+   */
+  @Test
+  public void testConcatPermissions() throws IOException {
+String testPathDir = "/dir";
+Path dir = new Path(testPathDir);
+dfs.mkdirs(dir);
+dfs.setPermission(dir, new FsPermission((short) 0777));
+
+Path dst = new Path(testPathDir, "dst");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs, dst, blockSize, REPL_FACTOR, 1);
+
+// Create a user who is not the owner of the file and try concat operation.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"group"});
+DistributedFileSystem dfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf);
+
+// Test 1: User is not the owner of the file and has src & dst permission.
+DFSTestUtil.createFile(dfs, src, blockSize, REPL_FACTOR, 1);
+dfs.setPermission(dst, new FsPermission((short) 0777));
+dfs.setPermission(src, new FsPermission((short) 0777));
+try {
+  dfs2.concat(dst, new Path[] {src});
+} catch (AccessControlException ace) {
+  fail("Got unexpected exception:" + ace.getLocalizedMessage());
+}

Review Comment:
   @lfxy If I understood your question correctly, you are asking whether 
LambdaTestUtils.intercept can figure out that an unexpected exception was 
thrown; then yes.
   
   It works like LambdaTestUtils.intercept(AccessControlException.class, 
"expected exception message", () -> concat lambda).
   
   This verifies that the concat throws an AccessControlException with the 
specified message; if it throws anything other than ACE, such as an NPE, the 
assertion will fail.
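   
   A small hypothetical sketch of that call spelled out (assuming the intercept 
overload that returns the caught exception, plus the hdfs1, trg and src names 
from the test quoted above; not the final patch):
   
   // intercept returns the caught exception, so further assertions on it are possible.
   AccessControlException ace = LambdaTestUtils.intercept(
       AccessControlException.class,
       "Permission denied",
       () -> hdfs1.concat(trg, new Path[] {src}));
   assertTrue(ace.getMessage().contains("user=theDoctor"));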



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] lfxy commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation

2023-05-31 Thread via GitHub


lfxy commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1211433782


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java:
##
@@ -564,4 +566,113 @@ public void testConcatOnSameFile() throws Exception {
 
 assertEquals(1, dfs.getContentSummary(new Path(dir)).getFileCount());
   }
+
+  /**
+   * Verifies concat with wrong user when dfs.permissions.enabled is false.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testConcatPermissionEnabled() throws IOException {
+Configuration conf2 = new Configuration();
+conf2.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, true);
+MiniDFSCluster cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+DistributedFileSystem dfs2 = cluster2.getFileSystem();
+
+String testPathDir = "/dir2";
+Path dir = new Path(testPathDir);
+dfs2.mkdirs(dir);
+Path trg = new Path(testPathDir, "trg");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
true.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"tardis"});
+DistributedFileSystem hdfs1 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs1.concat(trg, new Path[] {src});
+  fail("Permission exception expected");
+} catch (IOException ie) {
+  LOG.info("Got expected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, false);
+cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+dfs2 = cluster2.getFileSystem();
+dfs2.mkdirs(dir);
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
false.
+DistributedFileSystem hdfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs2.concat(trg, new Path[] {src});
+} catch (IOException ie) {
+  fail("Got unexpected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+  }
+
+  /**
+   * Test permissions of Concat operation.
+   */
+  @Test
+  public void testConcatPermissions() throws IOException {
+String testPathDir = "/dir";
+Path dir = new Path(testPathDir);
+dfs.mkdirs(dir);
+dfs.setPermission(dir, new FsPermission((short) 0777));
+
+Path dst = new Path(testPathDir, "dst");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs, dst, blockSize, REPL_FACTOR, 1);
+
+// Create a user who is not the owner of the file and try concat operation.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"group"});
+DistributedFileSystem dfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf);
+
+// Test 1: User is not the owner of the file and has src & dst permission.
+DFSTestUtil.createFile(dfs, src, blockSize, REPL_FACTOR, 1);
+dfs.setPermission(dst, new FsPermission((short) 0777));
+dfs.setPermission(src, new FsPermission((short) 0777));
+try {
+  dfs2.concat(dst, new Path[] {src});
+} catch (AccessControlException ace) {
+  fail("Got unexpected exception:" + ace.getLocalizedMessage());
+}

Review Comment:
   @ayushtkn Can LambdaTestUtils.intercept replace both the expected-exception 
and the unexpected-exception checks? And can I use a finally block if I use 
LambdaTestUtils.intercept? Thank you.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18752) Change fs.s3a.directory.marker.retention to "keep"

2023-05-31 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18752:

Labels: pull-request-available  (was: )

> Change fs.s3a.directory.marker.retention to "keep"
> --
>
> Key: HADOOP-18752
> URL: https://issues.apache.org/jira/browse/HADOOP-18752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> Change the default value of "fs.s3a.directory.marker.retention" to keep; 
> update docs to match.
> maybe include with HADOOP-17802 so we don't blow up with fewer markers being 
> created.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhtttylz commented on a diff in pull request #5698: HDFS-17029. Support getECPolices API in WebHDFS

2023-05-31 Thread via GitHub


zhtttylz commented on code in PR #5698:
URL: https://github.com/apache/hadoop/pull/5698#discussion_r1211341558


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java:
##
@@ -741,4 +741,27 @@ public static Map toJsonMap(FsStatus 
status) {
 m.put("remaining", status.getRemaining());
 return m;
   }
+
+  public static Map toJsonMap(ErasureCodingPolicyInfo 
ecPolicyInfo) {
+if (ecPolicyInfo == null) {
+  return null;
+}
+Map m = new HashMap<>();
+m.put("policy", ecPolicyInfo.getPolicy());
+m.put("state", ecPolicyInfo.getState());
+return m;
+  }
+
+  public static String toJsonString(ErasureCodingPolicyInfo[] ecPolicyInfos) {
+final Map erasureCodingPolicies = new TreeMap<>();

Review Comment:
   Thank you for your valuable suggestion. Since sorting of keys is not 
required, it's unnecessary to use `TreeMap`.  I sincerely appreciate it and 
will promptly implement the required adjustments to the code!
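   
   For reference, a minimal sketch of how the revised method might look once 
TreeMap is dropped (the wrapper key names and the reuse of the existing 
toJsonMap/toJsonString helpers are assumptions based on the diff quoted above, 
not the final patch):
   
   public static String toJsonString(ErasureCodingPolicyInfo[] ecPolicyInfos) {
     // Key order does not matter for the JSON output, so HashMap replaces TreeMap.
     final Map<String, Object> erasureCodingPolicies = new HashMap<>();
     final Object[] policies = new Object[ecPolicyInfos.length];
     for (int i = 0; i < ecPolicyInfos.length; i++) {
       policies[i] = toJsonMap(ecPolicyInfos[i]);
     }
     erasureCodingPolicies.put("ErasureCodingPolicyInfo", policies);
     return toJsonString("ErasureCodingPolicies", erasureCodingPolicies);
   }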



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5689: HADOOP-18752. Change fs.s3a.directory.marker.retention to "keep"

2023-05-31 Thread via GitHub


steveloughran commented on PR #5689:
URL: https://github.com/apache/hadoop/pull/5689#issuecomment-1569802418

   @ayushtkn really? "default values are immutable"? Not sure about that, as a 
lot of things change implicitly, or for good reason ("the defaults weren't 
good").
   
   While the names, meanings and units are all stable, default values are 
public/evolving.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a diff in pull request #4990: HDFS-13507. RBF RouterAdmin disable update functionality in add cmd and supports add/update one mount table with different destina

2023-05-31 Thread via GitHub


ayushtkn commented on code in PR #4990:
URL: https://github.com/apache/hadoop/pull/4990#discussion_r1211229496


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java:
##
@@ -699,32 +697,52 @@ public boolean addMount(AddMountAttributes 
addMountAttributes)
 // Get the existing entry
 MountTableManager mountTable = client.getMountTableManager();
 MountTable existingEntry = getMountEntry(mount, mountTable);
-MountTable existingOrNewEntry =
-
addMountAttributes.getNewOrUpdatedMountTableEntryWithAttributes(existingEntry);
-if (existingOrNewEntry == null) {
+if (existingEntry != null) {
+  System.err.println("MountTable entry:" + mount + " already exists.");
   return false;
 }
 
-if (existingEntry == null) {
-  AddMountTableEntryRequest request = AddMountTableEntryRequest
-  .newInstance(existingOrNewEntry);
-  AddMountTableEntryResponse addResponse = 
mountTable.addMountTableEntry(request);
-  boolean added = addResponse.getStatus();
-  if (!added) {
-System.err.println("Cannot add mount point " + mount);
-  }
-  return added;
-} else {
-  UpdateMountTableEntryRequest updateRequest =
-  UpdateMountTableEntryRequest.newInstance(existingOrNewEntry);
-  UpdateMountTableEntryResponse updateResponse =
-  mountTable.updateMountTableEntry(updateRequest);
-  boolean updated = updateResponse.getStatus();
-  if (!updated) {
-System.err.println("Cannot update mount point " + mount);
-  }
-  return updated;
+MountTable mountEntry = 
addMountAttributes.getMountTableEntryWithAttributes();
+AddMountTableEntryRequest request = 
AddMountTableEntryRequest.newInstance(mountEntry);
+AddMountTableEntryResponse addResponse = 
mountTable.addMountTableEntry(request);
+boolean added = addResponse.getStatus();
+if (!added) {
+  System.err.println("Cannot add mount point " + mount);
 }
+return added;
+  }
+
+  /**
+   * Return the map from namespace to destination.
+   * @param nss input namespaces.
+   * @param destinations input destinations.
+   * @return one map from namespace to destination.
+   * @throws IOException throw IOException if the destinations is invalida.

Review Comment:
   typo ``invalida``



##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/tools/federation/RouterAdmin.java:
##
@@ -699,32 +697,52 @@ public boolean addMount(AddMountAttributes 
addMountAttributes)
 // Get the existing entry
 MountTableManager mountTable = client.getMountTableManager();
 MountTable existingEntry = getMountEntry(mount, mountTable);
-MountTable existingOrNewEntry =
-
addMountAttributes.getNewOrUpdatedMountTableEntryWithAttributes(existingEntry);
-if (existingOrNewEntry == null) {
+if (existingEntry != null) {
+  System.err.println("MountTable entry:" + mount + " already exists.");
   return false;
 }
 
-if (existingEntry == null) {
-  AddMountTableEntryRequest request = AddMountTableEntryRequest
-  .newInstance(existingOrNewEntry);
-  AddMountTableEntryResponse addResponse = 
mountTable.addMountTableEntry(request);
-  boolean added = addResponse.getStatus();
-  if (!added) {
-System.err.println("Cannot add mount point " + mount);
-  }
-  return added;
-} else {
-  UpdateMountTableEntryRequest updateRequest =
-  UpdateMountTableEntryRequest.newInstance(existingOrNewEntry);
-  UpdateMountTableEntryResponse updateResponse =
-  mountTable.updateMountTableEntry(updateRequest);
-  boolean updated = updateResponse.getStatus();
-  if (!updated) {
-System.err.println("Cannot update mount point " + mount);
-  }
-  return updated;
+MountTable mountEntry = 
addMountAttributes.getMountTableEntryWithAttributes();
+AddMountTableEntryRequest request = 
AddMountTableEntryRequest.newInstance(mountEntry);
+AddMountTableEntryResponse addResponse = 
mountTable.addMountTableEntry(request);
+boolean added = addResponse.getStatus();
+if (!added) {
+  System.err.println("Cannot add mount point " + mount);
 }
+return added;
+  }
+
+  /**
+   * Return the map from namespace to destination.
+   * @param nss input namespaces.
+   * @param destinations input destinations.
+   * @return one map from namespace to destination.
+   * @throws IOException throw IOException if the destinations is invalida.
+   */
+  public static Map getDestMap(String[] nss, String[] 
destinations)
+  throws IOException {
+if (isInvalidDestinations(nss, destinations)) {
+  String message = "Invalid number of namespaces and destinations. The 
number of destinations: "
+  + destinations.length + " is not equal to the number of namespaces: 
" + nss.length;
+  throw new IOException(message);
+}
+Map destMap = 

[GitHub] [hadoop] ayushtkn commented on a diff in pull request #5561: HDFS-16983. Whether checking path access permissions should be decided by dfs.permissions.enabled in concat operation

2023-05-31 Thread via GitHub


ayushtkn commented on code in PR #5561:
URL: https://github.com/apache/hadoop/pull/5561#discussion_r1211225704


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java:
##
@@ -564,4 +566,113 @@ public void testConcatOnSameFile() throws Exception {
 
 assertEquals(1, dfs.getContentSummary(new Path(dir)).getFileCount());
   }
+
+  /**
+   * Verifies concat with wrong user when dfs.permissions.enabled is false.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testConcatPermissionEnabled() throws IOException {
+Configuration conf2 = new Configuration();
+conf2.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, true);
+MiniDFSCluster cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+DistributedFileSystem dfs2 = cluster2.getFileSystem();
+
+String testPathDir = "/dir2";
+Path dir = new Path(testPathDir);
+dfs2.mkdirs(dir);
+Path trg = new Path(testPathDir, "trg");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
true.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"tardis"});
+DistributedFileSystem hdfs1 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs1.concat(trg, new Path[] {src});
+  fail("Permission exception expected");
+} catch (IOException ie) {
+  LOG.info("Got expected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, false);
+cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+dfs2 = cluster2.getFileSystem();
+dfs2.mkdirs(dir);
+DFSTestUtil.createFile(dfs2, trg, blockSize, REPL_FACTOR, 1);
+DFSTestUtil.createFile(dfs2, src, blockSize, REPL_FACTOR, 1);
+
+// Check permissions with the wrong user when dfs.permissions.enabled is 
false.
+DistributedFileSystem hdfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf2);
+try {
+  hdfs2.concat(trg, new Path[] {src});
+} catch (IOException ie) {
+  fail("Got unexpected exception for permissions:" + 
ie.getLocalizedMessage());
+}
+dfs2.close();
+cluster2.shutdownDataNodes();
+cluster2.shutdown();
+  }
+
+  /**
+   * Test permissions of Concat operation.
+   */
+  @Test
+  public void testConcatPermissions() throws IOException {
+String testPathDir = "/dir";
+Path dir = new Path(testPathDir);
+dfs.mkdirs(dir);
+dfs.setPermission(dir, new FsPermission((short) 0777));
+
+Path dst = new Path(testPathDir, "dst");
+Path src = new Path(testPathDir, "src");
+DFSTestUtil.createFile(dfs, dst, blockSize, REPL_FACTOR, 1);
+
+// Create a user who is not the owner of the file and try concat operation.
+final UserGroupInformation user =
+UserGroupInformation.createUserForTesting("theDoctor", new String[] 
{"group"});
+DistributedFileSystem dfs2 = (DistributedFileSystem) 
DFSTestUtil.getFileSystemAs(user, conf);
+
+// Test 1: User is not the owner of the file and has src & dst permission.
+DFSTestUtil.createFile(dfs, src, blockSize, REPL_FACTOR, 1);
+dfs.setPermission(dst, new FsPermission((short) 0777));
+dfs.setPermission(src, new FsPermission((short) 0777));
+try {
+  dfs2.concat(dst, new Path[] {src});
+} catch (AccessControlException ace) {
+  fail("Got unexpected exception:" + ace.getLocalizedMessage());
+}

Review Comment:
   Use LambdaTestUtils.intercept, and for the below ones as well



##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestHDFSConcat.java:
##
@@ -564,4 +566,113 @@ public void testConcatOnSameFile() throws Exception {
 
 assertEquals(1, dfs.getContentSummary(new Path(dir)).getFileCount());
   }
+
+  /**
+   * Verifies concat with wrong user when dfs.permissions.enabled is false.
+   *
+   * @throws IOException
+   */
+  @Test
+  public void testConcatPermissionEnabled() throws IOException {
+Configuration conf2 = new Configuration();
+conf2.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, blockSize);
+conf2.setBoolean(DFSConfigKeys.DFS_PERMISSIONS_ENABLED_KEY, true);
+MiniDFSCluster cluster2 = new 
MiniDFSCluster.Builder(conf2).numDataNodes(REPL_FACTOR).build();
+cluster2.waitClusterUp();
+DistributedFileSystem dfs2 = cluster2.getFileSystem();
+
+String testPathDir = "/dir2";
+Path dir = new 

[GitHub] [hadoop] hadoop-yetus commented on pull request #5700: HDFS-17030. Limit wait time for getHAServiceState in ObserverReaderProxy

2023-05-31 Thread via GitHub


hadoop-yetus commented on PR #5700:
URL: https://github.com/apache/hadoop/pull/5700#issuecomment-1569577944

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 43s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   5m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 59s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   5m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   4m 59s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 57s |  |  the patch passed with JDK 
Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 22s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 24s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 209m 22s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5700/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   2m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 344m 12s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5700/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5700 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 2664664269b2 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 
19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0953457a06c957f9a30d937a300b3eed14160efe |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5700/4/testReport/ |
   | Max. process+thread count | 3011 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 

[GitHub] [hadoop] Hexiaoqiao commented on pull request #5691: HDFS-17026. RBF: NamenodeHeartbeatService should update JMX report with configurable frequency

2023-05-31 Thread via GitHub


Hexiaoqiao commented on PR #5691:
URL: https://github.com/apache/hadoop/pull/5691#issuecomment-1569570682

   Committed to trunk. Thanks @hchaverri for your contribution! And thanks to 
all the reviewers (too many folks to mention individually)!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao merged pull request #5691: HDFS-17026. RBF: NamenodeHeartbeatService should update JMX report with configurable frequency

2023-05-31 Thread via GitHub


Hexiaoqiao merged PR #5691:
URL: https://github.com/apache/hadoop/pull/5691


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4797: YARN-11277. Trigger log-dir deletion by size for NonAggregatingLogHandler

2023-05-31 Thread via GitHub


slfan1989 commented on PR #4797:
URL: https://github.com/apache/hadoop/pull/4797#issuecomment-1569562918

   @leixm Thanks for the contribution! LGTM.
   @aajisaka @ashutoshcipher Can you help to review this PR again?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5700: HDFS-17030. Limit wait time for getHAServiceState in ObserverReaderProxy

2023-05-31 Thread via GitHub


hadoop-yetus commented on PR #5700:
URL: https://github.com/apache/hadoop/pull/5700#issuecomment-1569553809

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m  1s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   5m 45s |  |  trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  compile  |   5m 34s |  |  trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  checkstyle  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   2m  7s |  |  trunk passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   6m  5s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 33s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 42s |  |  the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javac  |   5m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   5m 27s |  |  the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  javac  |   5m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 58s |  |  the patch passed with JDK Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09  |
   | +1 :green_heart: |  spotbugs  |   5m 58s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 50s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 17s |  |  hadoop-hdfs-client in the patch passed.  |
   | -1 :x: |  unit  | 222m 37s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5700/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 366m 25s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5700/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5700 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 55df75039f53 4.15.0-206-generic #217-Ubuntu SMP Fri Feb 3 19:10:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d18b36a0f1881fd236d2e276e6c1e678925cf89c |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~20.04-b09 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5700/3/testReport/ |
   | Max. process+thread count | 2773 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: 

[GitHub] [hadoop] slfan1989 commented on a diff in pull request #5698: HDFS-17029. Support getECPolices API in WebHDFS

2023-05-31 Thread via GitHub


slfan1989 commented on code in PR #5698:
URL: https://github.com/apache/hadoop/pull/5698#discussion_r1211125291


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java:
##
@@ -741,4 +741,27 @@ public static Map<String, Object> toJsonMap(FsStatus status) {
     m.put("remaining", status.getRemaining());
     return m;
   }
+
+  public static Map<String, Object> toJsonMap(ErasureCodingPolicyInfo ecPolicyInfo) {
+    if (ecPolicyInfo == null) {
+      return null;
+    }
+    Map<String, Object> m = new HashMap<>();
+    m.put("policy", ecPolicyInfo.getPolicy());
+    m.put("state", ecPolicyInfo.getState());
+    return m;
+  }
+
+  public static String toJsonString(ErasureCodingPolicyInfo[] ecPolicyInfos) {
+    final Map<String, Object> erasureCodingPolicies = new TreeMap<>();

Review Comment:
   Why TreeMap?
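
   A minimal sketch (not from the patch; the class name is illustrative) of the difference the question points at: a TreeMap keeps its keys sorted, so JSON built from it has a deterministic field order, whereas a HashMap gives no ordering guarantee.

   import java.util.HashMap;
   import java.util.Map;
   import java.util.TreeMap;

   // Illustrative only: compares iteration order of TreeMap vs HashMap.
   public class MapOrderingSketch {
     public static void main(String[] args) {
       Map<String, Object> sorted = new TreeMap<>();    // iterates in natural key order
       Map<String, Object> unsorted = new HashMap<>();  // iteration order not guaranteed
       for (String key : new String[] {"policy", "state", "id"}) {
         sorted.put(key, 1);
         unsorted.put(key, 1);
       }
       System.out.println(sorted);    // always {id=1, policy=1, state=1}
       System.out.println(unsorted);  // order depends on hashing and may differ across JVMs
     }
   }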


