[PR] YARN-11623 FairScheduler: Add AM preemption to documentation [hadoop]

2023-12-03 Thread via GitHub


singer-bin opened a new pull request, #6320:
URL: https://github.com/apache/hadoop/pull/6320

   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11613. [Federation] Router CLI Supports Delete SubClusterPolicyConfiguration Of Queues. [hadoop]

2023-12-03 Thread via GitHub


hadoop-yetus commented on PR #6295:
URL: https://github.com/apache/hadoop/pull/6295#issuecomment-1837798797

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  34m 28s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   7m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   7m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m  7s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   5m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   9m 41s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  38m  6s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 28s | 
[/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6295/14/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common.txt)
 |  hadoop-yarn-server-common in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 30s | 
[/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6295/14/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed.  |
   | -1 :x: |  mvninstall  |   0m 19s | 
[/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6295/14/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-router.txt)
 |  hadoop-yarn-server-router in the patch failed.  |
   | -1 :x: |  compile  |   1m 12s | 
[/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6295/14/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-yarn in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  cc  |   1m 12s | 
[/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6295/14/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-yarn in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javac  |   1m 12s | 
[/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6295/14/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-yarn in the patch failed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 58s | 
[/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6295/14/artifact/out/patch-compile-hadoop-yarn-project_hadoop-yarn-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  hadoop-yarn in the patch failed with JDK Private 

Re: [PR] HDFS-16749. RBF: Gets the wrong directory information from Trash [hadoop]

2023-12-03 Thread via GitHub


zhangxiping1 commented on PR #6317:
URL: https://github.com/apache/hadoop/pull/6317#issuecomment-1837794522

   > @zhangxiping1 I see that you created 
[HDFS-16749](https://issues.apache.org/jira/browse/HDFS-16749), pr #5039, and 
@LiuGuH also submitted a PR. Can we compare and explain the difference between 
the two prs?
   It is the same problem: the extra entry returned is the Trash prefix with 
mount point information (the top-level subdirectory).
   





Re: [PR] HDFS-17254. DataNode httpServer has too many worker threads [hadoop]

2023-12-03 Thread via GitHub


hadoop-yetus commented on PR #6307:
URL: https://github.com/apache/hadoop/pull/6307#issuecomment-1837593823

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  11m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  34m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  4s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  34m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 209m  8s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6307/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 356m 35s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.tools.TestHdfsConfigFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6307/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6307 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0d5643f02a25 5.15.0-86-generic #96-Ubuntu SMP Wed Sep 20 
08:23:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d911ba9f02223385f5e6efd7631b6c4764a47eaf |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6307/2/testReport/ |
   | Max. process+thread count | 2631 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6307/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2

Re: [PR] HDFS-17250. EditLogTailer#triggerActiveLogRoll should handle thread Interrupted [hadoop]

2023-12-03 Thread via GitHub


xinglin commented on PR #6266:
URL: https://github.com/apache/hadoop/pull/6266#issuecomment-1837558511

   @Hexiaoqiao, no, feel free to merge this PR.





Re: [PR] HDFS-17272. NNThroughputBenchmark should support specifying the base directory for multi-client test [hadoop]

2023-12-03 Thread via GitHub


ayushtkn commented on code in PR #6319:
URL: https://github.com/apache/hadoop/pull/6319#discussion_r1413157796


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java:
##
@@ -1348,7 +1363,7 @@ class ReplicationStats extends OperationStatsBase {
 static final String OP_REPLICATION_USAGE = 
 "-op replication [-datanodes T] [-nodesToDecommission D] " +
 "[-nodeReplicationLimit C] [-totalBlocks B] [-blockSize S] "
-+ "[-replication R]";
++ "[-replication R] [-baseDirName N]";

Review Comment:
   Somewhere it uses `baseDirName D` and here it is `baseDirName N`.
   It shouldn't be inconsistent, I believe.



##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/NNThroughputBenchmark.java:
##
@@ -736,7 +746,7 @@ class OpenFileStats extends CreateFileStats {
 static final String OP_OPEN_NAME = "open";
 static final String OP_USAGE_ARGS = 
 " [-threads T] [-files N] [-blockSize S] [-filesPerDir P]"
-+ " [-useExisting]";
++ " [-baseDirName D] [-useExisting]";

Review Comment:
   add the new parameter towards the end



##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestNNThroughputBenchmark.java:
##
@@ -60,6 +60,18 @@ public void testNNThroughput() throws Exception {
 NNThroughputBenchmark.runBenchmark(conf, new String[] {"-op", "all"});
   }
 
+  @Test
+  public void testNNThroughputWithBaseDir() throws Exception {
+Configuration conf = new HdfsConfiguration();
+conf.setInt(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 16);
+File nameDir = new File(MiniDFSCluster.getBaseDirectory(), "name");
+conf.set(DFSConfigKeys.DFS_NAMENODE_NAME_DIR_KEY,
+nameDir.getAbsolutePath());
+DFSTestUtil.formatNameNode(conf);
+NNThroughputBenchmark.runBenchmark(conf,
+new String[] {"-op", "all", "-baseDirName", 
"/nnThroughputBenchmark1"});

Review Comment:
   The test should check that ``nnThroughputBenchmark1`` was used and that the 
default directory was neither created nor used.
   
   Try other operations as well, apart from ``all``.






Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]

2023-12-03 Thread via GitHub


bbeaudreault commented on PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#issuecomment-1837536440

   Hi @Neilxzn , any chance you have time to finish this up?





[jira] [Comment Edited] (HADOOP-19001) LD_LIBRARY_PATH is missing HADOOP_COMMON_LIB_NATIVE_DIR

2023-12-03 Thread Zilong Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17792541#comment-17792541
 ] 

Zilong Zhu edited comment on HADOOP-19001 at 12/3/23 3:30 PM:
--

About hadoop-functions.sh: the Hadoop 3 script was rewritten by HADOOP-9902, so 
I can't find any more background. I think it makes no sense to export 
LD_LIBRARY_PATH directly.
After 
https://github.com/apache/hadoop/commit/9d4d30243b0fc9630da51a2c17b543ef671d035c
 environment variables not in yarn.nodemanager.env-whitelist (including 
LD_LIBRARY_PATH) can no longer be obtained from builder.environment(), since 
DefaultContainerExecutor#buildCommandExecutor sets inherit to false.
So I'm confused as to whether this is an issue. But it does make it impossible 
for Spark and Flink tasks to directly load the native Hadoop library on YARN.


was (Author: JIRAUSER287487):
About hadoop-functions.sh, the Hadoop3 script has been rewritten by 
HADOOP-9902. I can't get any more information. I think it makes no sense to 
export LD_LIBRARY_PATH directly. 
After 
https://github.com/apache/hadoop/commit/9d4d30243b0fc9630da51a2c17b543ef671d035c
 env not in yarn.nodemanager.env-whitelist(include LD_LIBRARY_PATH) is not able 
to get from builder.environment() since 
DefaultContainerExecutor#buildCommandExecutor sets inherit to false.
So I'm confused as to whether this is an issue.

> LD_LIBRARY_PATH is missing HADOOP_COMMON_LIB_NATIVE_DIR
> ---
>
> Key: HADOOP-19001
> URL: https://issues.apache.org/jira/browse/HADOOP-19001
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.4
>Reporter: Zilong Zhu
>Priority: Major
>
> When we run a spark job, we find that it cannot load the native library 
> successfully.
> We found a difference between hadoop2 and hadoop3.
> hadoop2-Spark-System Properties:
> |java.library.path|:/hadoop/lib/native:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib|
> hadoop3-Spark-System Properties:
> |java.library.path|:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib|
> The key point is:
> hadoop2-hadoop-config.sh:
> HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"   <--267
> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$JAVA_LIBRARY_PATH     <--268
>  
> hadoop3-hadoop-functions.sh:
> hadoop_add_param HADOOP_OPTS java.library.path \
> "-Djava.library.path=${JAVA_LIBRARY_PATH}"
> export LD_LIBRARY_PATH        <--1484
>  
> At the same time, the hadoop3 will clear all non-whitelisted environment 
> variables.
> I'm not sure if it was intentional. But it makes our spark job unable to find 
> the native library on hadoop3. 
> Maybe we should modify hadoop-functions.sh(1484) and add LD_LIBRARY_PATH to 
> the default configuration item yarn.nodemanager.env-whitelist.
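
A hedged sketch of the whitelist change suggested above; the base value shown is the usual yarn-default.xml list, so verify it against your distribution's yarn-default.xml before copying:

```xml
<!-- yarn-site.xml: allow NodeManager containers to inherit LD_LIBRARY_PATH -->
<property>
  <name>yarn.nodemanager.env-whitelist</name>
  <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME,PATH,LANG,TZ,LD_LIBRARY_PATH</value>
</property>
```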



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19001) LD_LIBRARY_PATH is missing HADOOP_COMMON_LIB_NATIVE_DIR

2023-12-03 Thread Zilong Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17792541#comment-17792541
 ] 

Zilong Zhu commented on HADOOP-19001:
-

About hadoop-functions.sh: the Hadoop 3 script was rewritten by HADOOP-9902, so 
I can't find any more background. I think it makes no sense to export 
LD_LIBRARY_PATH directly.
After 
https://github.com/apache/hadoop/commit/9d4d30243b0fc9630da51a2c17b543ef671d035c
 environment variables not in yarn.nodemanager.env-whitelist (including 
LD_LIBRARY_PATH) can no longer be obtained from builder.environment(), since 
DefaultContainerExecutor#buildCommandExecutor sets inherit to false.
So I'm confused as to whether this is an issue.

> LD_LIBRARY_PATH is missing HADOOP_COMMON_LIB_NATIVE_DIR
> ---
>
> Key: HADOOP-19001
> URL: https://issues.apache.org/jira/browse/HADOOP-19001
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.4
>Reporter: Zilong Zhu
>Priority: Major
>
> When we run a spark job, we find that it cannot load the native library 
> successfully.
> We found a difference between hadoop2 and hadoop3.
> hadoop2-Spark-System Properties:
> |java.library.path|:/hadoop/lib/native:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib|
> hadoop3-Spark-System Properties:
> |java.library.path|:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib|
> The key point is:
> hadoop2-hadoop-config.sh:
> HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"   <--267
> export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$JAVA_LIBRARY_PATH     <--268
>  
> hadoop3-hadoop-functions.sh:
> hadoop_add_param HADOOP_OPTS java.library.path \
> "-Djava.library.path=${JAVA_LIBRARY_PATH}"
> export LD_LIBRARY_PATH        <--1484
>  
> At the same time, the hadoop3 will clear all non-whitelisted environment 
> variables.
> I'm not sure if it was intentional. But it makes our spark job unable to find 
> the native library on hadoop3. 
> Maybe we should modify hadoop-functions.sh(1484) and add LD_LIBRARY_PATH to 
> the default configuration item yarn.nodemanager.env-whitelist.






Re: [PR] YARN-11621: Fix intermittently failing unit test: TestAMRMProxy.testAMRMProxyTokenRenewal [hadoop]

2023-12-03 Thread via GitHub


susheelgupta7 commented on code in PR #6310:
URL: https://github.com/apache/hadoop/pull/6310#discussion_r1413125808


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMProxy.java:
##
@@ -156,13 +156,13 @@ public void testAMRMProxyTokenRenewal() throws Exception {
YarnClient rmClient = YarnClient.createYarnClient()) {
   Configuration conf = new YarnConfiguration();
   conf.setBoolean(YarnConfiguration.AMRM_PROXY_ENABLED, true);
-  conf.setInt(YarnConfiguration.RM_NM_EXPIRY_INTERVAL_MS, 4500);
-  conf.setInt(YarnConfiguration.RM_NM_HEARTBEAT_INTERVAL_MS, 4500);
-  conf.setInt(YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS, 4500);
+  conf.setInt(YarnConfiguration.RM_NM_EXPIRY_INTERVAL_MS, 8000);
+  conf.setInt(YarnConfiguration.RM_NM_HEARTBEAT_INTERVAL_MS, 8000);
+  conf.setInt(YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS, 12000);
   // RM_AMRM_TOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS should be at least
   // RM_AM_EXPIRY_INTERVAL_MS * 1.5 *3
   conf.setInt(
-  YarnConfiguration.RM_AMRM_TOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS, 
20);
+  YarnConfiguration.RM_AMRM_TOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS, 
37);

Review Comment:
   @slfan1989 According to this 
[comment](https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-client/src/test/java/org/apache/hadoop/yarn/client/api/impl/TestAMRMProxy.java#L162-L163),
 `RM_AMRM_TOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS` should be at least 
`RM_AM_EXPIRY_INTERVAL_MS * 1.5 * 3` (i.e., greater than 54 seconds), but the 
[code](https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/security/AMRMTokenSecretManager.java#L107-L112)
 states `YarnConfiguration.RM_AMRM_TOKEN_MASTER_KEY_ROLLING_INTERVAL_SECS
 + " should be more than 3 X "
 + YarnConfiguration.RM_AM_EXPIRY_INTERVAL_MS`. 
So I followed the code and set it to 37 seconds.
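
For reference, the two competing minimums can be checked with a little arithmetic. This is a standalone sketch; the value 12000 ms mirrors the `RM_AM_EXPIRY_INTERVAL_MS` set in this patch, and the class and method names are illustrative, not actual YARN code:

```java
// Sketch of the two minimums for the AMRM token master-key rolling interval.
public class RollingIntervalCheck {
    // AMRMTokenSecretManager only enforces: rolling interval > 3 x AM expiry.
    static int codeMinimumSecs(int amExpiryMs) {
        return 3 * amExpiryMs / 1000;
    }

    // The comment in TestAMRMProxy asks for: rolling interval >= 1.5 x 3 x AM expiry.
    static double commentMinimumSecs(int amExpiryMs) {
        return 1.5 * 3 * amExpiryMs / 1000.0;
    }

    public static void main(String[] args) {
        int amExpiryMs = 12000; // RM_AM_EXPIRY_INTERVAL_MS set in this patch
        System.out.println(codeMinimumSecs(amExpiryMs));    // 36
        System.out.println(commentMinimumSecs(amExpiryMs)); // 54.0
        // 37 s satisfies the enforced check (> 36 s) but not the stricter comment (54 s).
    }
}
```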






Re: [PR] HDFS-17270: Fix ZKDelegationTokenSecretManagerImpl use closed zookeep… [hadoop]

2023-12-03 Thread via GitHub


hadoop-yetus commented on PR #6315:
URL: https://github.com/apache/hadoop/pull/6315#issuecomment-1837484342

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  11m 28s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  32m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m 44s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 156m 11s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6315/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6315 |
   | JIRA Issue | HDFS-17270 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 681c175d14d0 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 
15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ae1cfc5cbd2dcef9ff9437613b9fee25501261bf |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6315/2/testReport/ |
   | Max. process+thread count | 3961 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6315/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] HDFS-17254. DataNode httpServer has too many worker threads [hadoop]

2023-12-03 Thread via GitHub


2005hithlj commented on PR #6307:
URL: https://github.com/apache/hadoop/pull/6307#issuecomment-1837476668

   @Hexiaoqiao sir, thank you for your review; it is caused by enabling the HTTP UI.





Re: [PR] HDFS-17254. DataNode httpServer has too many worker threads [hadoop]

2023-12-03 Thread via GitHub


2005hithlj commented on code in PR #6307:
URL: https://github.com/apache/hadoop/pull/6307#discussion_r1413075732


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java:
##
@@ -966,6 +966,9 @@ public class DFSConfigKeys extends CommonConfigurationKeys {
   public static final String  DFS_DATANODE_HTTP_ADDRESS_DEFAULT = "0.0.0.0:" + 
DFS_DATANODE_HTTP_DEFAULT_PORT;
   public static final String  DFS_DATANODE_HTTP_INTERNAL_PROXY_PORT =
   "dfs.datanode.http.internal-proxy.port";
+  public static final String DFS_DATANODE_NETTY_WORKER_NUM_THREADS_KEY =
+  "dfs.datanode.netty.worker.threads";
+  public static final int DFS_DATANODE_NETTY_WORKER_NUM_THREADS_DEFAULT = 0;

Review Comment:
   @slfan1989 Thank you for your review. The default value should not remain 0; 
I plan to change it to 10. What do you think, sir?
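
For context, a thread-count knob defaulting to 0 usually means "defer to the framework": Netty's `NioEventLoopGroup` resolves a thread count of 0 to 2 × available processors, which is why a default of 0 can spawn many worker threads on large machines. A minimal sketch of that resolution logic follows; the method name and fallback are assumptions, not the actual DataNode code:

```java
// Sketch: resolving a worker-thread setting where 0 means "framework default".
// Netty treats a thread count of 0 as 2 x availableProcessors (assumption
// mirrored here), so an explicit positive value caps the thread count.
public class WorkerThreadsDefault {
    static int resolveWorkerThreads(int configured) {
        return configured > 0
            ? configured                                       // explicit value wins
            : 2 * Runtime.getRuntime().availableProcessors();  // Netty-style fallback
    }

    public static void main(String[] args) {
        System.out.println(resolveWorkerThreads(10)); // prints 10
        System.out.println(resolveWorkerThreads(0));  // machine-dependent
    }
}
```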






Re: [PR] HDFS-17247. Improve AvailableSpaceRackFaultTolerantBlockPlacementPolicy logic [hadoop]

2023-12-03 Thread via GitHub


Hexiaoqiao commented on PR #6245:
URL: https://github.com/apache/hadoop/pull/6245#issuecomment-1837453787

   Thanks for involving me here, and sorry to be late. I will try to review it next week. Thanks again.





Re: [PR] HDFS-17250. EditLogTailer#triggerActiveLogRoll should handle thread Interrupted [hadoop]

2023-12-03 Thread via GitHub


Hexiaoqiao commented on PR #6266:
URL: https://github.com/apache/hadoop/pull/6266#issuecomment-1837453460

   Hi @xinglin Do you have any more concerns? If not, I will push this PR forward. Thanks.





Re: [PR] HDFS-17262 Fixed the verbose log.warn in DFSUtil.addTransferRateMetric(). [hadoop]

2023-12-03 Thread via GitHub


Hexiaoqiao commented on code in PR #6290:
URL: https://github.com/apache/hadoop/pull/6290#discussion_r1413054649


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java:
##
@@ -1970,16 +1971,32 @@ public static boolean isParentEntry(final String path, final String parent) {
   }
 
   /**
-   * Add transfer rate metrics for valid data read and duration values.
+   * Add transfer rate metrics in bytes per second.
    * @param metrics metrics for datanodes
    * @param read bytes read
-   * @param duration read duration
+   * @param durationInNS read duration in nanoseconds
    */
-  public static void addTransferRateMetric(final DataNodeMetrics metrics, final long read, final long duration) {
-    if (read >= 0 && duration > 0) {
-      metrics.addReadTransferRate(read * 1000 / duration);
-    } else {
-      LOG.warn("Unexpected value for data transfer bytes={} duration={}", read, duration);
-    }
+  public static void addTransferRateMetric(final DataNodeMetrics metrics, final long read,
+      final long durationInNS) {
+    metrics.addReadTransferRate(getTransferRateInBytesPerSecond(read, durationInNS));
+  }
+
+  /**
+   * Calculate the transfer rate in bytes per second.
+   *
+   * We have the read duration in nanoseconds for precision for transfers taking a few nanoseconds.
+   * We treat shorter durations below 1 ns as 1 ns as we also want to capture reads taking less
+   * than a nanosecond. To calculate transferRate in bytes per second, we avoid multiplying bytes
+   * read by 10^9 to avoid overflow. Instead, we first calculate the duration in seconds in double
+   * to keep the decimal values for smaller durations. We then divide bytes read by
+   * durationInSeconds to get the transferRate in bytes per second.
+   * @param bytes bytes read
+   * @param durationInNS read duration in nanoseconds
+   * @return bytes per second
+   */
+  public static long getTransferRateInBytesPerSecond(final long bytes, long durationInNS) {
+    durationInNS = Math.max(durationInNS, 1);
+    double durationInSeconds = (double) durationInNS / TimeUnit.SECONDS.toNanos(1);
+    return (long) (bytes / durationInSeconds);

Review Comment:
   How about adding a guard that `bytes` is a positive number?

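   The clamp-and-divide logic quoted in the diff above can be sketched as a standalone snippet (an illustrative restatement for discussion, not the actual `DFSUtil` class; the values in `main` are made up):

   ```java
   import java.util.concurrent.TimeUnit;

   public class TransferRateSketch {
       // Mirrors the quoted patch: clamp sub-nanosecond durations to 1 ns,
       // convert to seconds as a double to keep precision for short reads,
       // then divide bytes by seconds to get bytes per second.
       static long getTransferRateInBytesPerSecond(final long bytes, long durationInNS) {
           durationInNS = Math.max(durationInNS, 1);
           double durationInSeconds = (double) durationInNS / TimeUnit.SECONDS.toNanos(1);
           return (long) (bytes / durationInSeconds);
       }

       public static void main(String[] args) {
           // 1 MiB read in 1 ms -> 1048576000 bytes/s (~1 GiB/s)
           System.out.println(getTransferRateInBytesPerSecond(1_048_576, 1_000_000L));
           // a 0 ns duration is clamped to 1 ns, avoiding division by zero
           System.out.println(getTransferRateInBytesPerSecond(100, 0));
       }
   }
   ```

   Note that, as written, a negative `bytes` would yield a negative rate, which is the case the review comment suggests guarding against.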





[jira] [Commented] (HADOOP-18982) Fix doc about loading native libraries

2023-12-03 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17792514#comment-17792514
 ] 

ASF GitHub Bot commented on HADOOP-18982:
-

Hexiaoqiao commented on code in PR #6281:
URL: https://github.com/apache/hadoop/pull/6281#discussion_r1413053871


##
hadoop-common-project/hadoop-common/src/site/markdown/NativeLibraries.md.vm:
##
@@ -128,8 +128,8 @@ You can load any native shared library using DistributedCache for distributing a
 
 This example shows you how to distribute a shared library, mylib.so, and load it from a MapReduce task.
 
-1.  First copy the library to the HDFS: `bin/hadoop fs -copyFromLocal mylib.so.1 /libraries/mylib.so.1`
-2.  The job launching program should contain the following: `DistributedCache.createSymlink(conf);` `DistributedCache.addCacheFile("hdfs://host:port/libraries/mylib.so.1#mylib.so", conf);`
-3.  The MapReduce task can contain: `System.loadLibrary("mylib.so");`
+1.  First copy the library to the HDFS: `bin/hadoop fs -copyFromLocal libmyexample.so.1 /libraries/libmyexample.so.1`
+2.  The job launching program should contain the following: `DistributedCache.createSymlink(conf);` `DistributedCache.addCacheFile("hdfs://host:port/libraries/libmyexample.so.1#libmyexample.so", conf);`
+3.  The MapReduce task can contain: `System.loadLibrary("myexample");`

Review Comment:
   +1. LGTM. One nit comment: we should note which platform this applies to, such as `Unix-like systems`, because the win32 platform requires a different approach per the [document](https://docs.oracle.com/en/java/javase/11/docs/specs/jni/design.html#compiling-loading-and-linking-native-methods).

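   The naming convention behind this fix can be checked with the standard `System.mapLibraryName` API, which shows the platform-specific file name that `System.loadLibrary("myexample")` looks up (e.g. `libmyexample.so` on Linux, `myexample.dll` on Windows). A small illustrative snippet, not part of the patch:

   ```java
   public class LibNameDemo {
       public static void main(String[] args) {
           // Prints the platform mapping of the bare library name,
           // e.g. "libmyexample.so" on Linux.
           System.out.println(System.mapLibraryName("myexample"));
       }
   }
   ```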




> Fix doc about loading native libraries
> --
>
> Key: HADOOP-18982
> URL: https://issues.apache.org/jira/browse/HADOOP-18982
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Shuyan Zhang
>Priority: Major
>  Labels: pull-request-available
>
> When we want load a native library libmyexample.so, the right way is to call 
> System.loadLibrary("myexample") rather than 
> System.loadLibrary("libmyexample.so").



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org






Re: [PR] HDFS-17254. DataNode httpServer has too many worker threads [hadoop]

2023-12-03 Thread via GitHub


Hexiaoqiao commented on PR #6307:
URL: https://github.com/apache/hadoop/pull/6307#issuecomment-1837448346

   @2005hithlj Thanks. Is it caused by enabling webhdfs, the http UI, or some other reason?





Re: [PR] HDFS-16749. RBF: Gets the wrong directory information from Trash [hadoop]

2023-12-03 Thread via GitHub


Hexiaoqiao commented on PR #6317:
URL: https://github.com/apache/hadoop/pull/6317#issuecomment-1837447770

   Thanks @LiuGuH for your work.
   
   > (4) client via DFSRouter: hdfs dfs -ls /user/test-user/.Trash/Current , this will return
   
   I am confused about why it returns the wrong subdirs now. Thanks.





Re: [PR] HDFS-17270: Fix ZKDelegationTokenSecretManagerImpl use closed zookeep… [hadoop]

2023-12-03 Thread via GitHub


Hexiaoqiao commented on PR #6315:
URL: https://github.com/apache/hadoop/pull/6315#issuecomment-1837445457

   @ThinkerLei Thanks for your work. I will try to trigger CI manually; let's wait and see what it says.





Re: [PR] YARN-11561. [Federation] GPG Supports Format PolicyStateStore. [hadoop]

2023-12-03 Thread via GitHub


slfan1989 commented on PR #6300:
URL: https://github.com/apache/hadoop/pull/6300#issuecomment-1837438799

   @goiri Thank you very much for your help in reviewing the code!





Re: [PR] YARN-11561. [Federation] GPG Supports Format PolicyStateStore. [hadoop]

2023-12-03 Thread via GitHub


slfan1989 merged PR #6300:
URL: https://github.com/apache/hadoop/pull/6300





Re: [PR] HDFS-17260. Fix the logic for reconfigure slow peer enable for Namenode. [hadoop]

2023-12-03 Thread via GitHub


ayushtkn merged PR #6279:
URL: https://github.com/apache/hadoop/pull/6279

