[jira] [Commented] (HADOOP-18832) Upgrade aws-java-sdk to 1.12.499+

2023-08-13 Thread ASF GitHub Bot (Jira)


[ https://issues.apache.org/jira/browse/HADOOP-18832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17753898#comment-17753898 ]

ASF GitHub Bot commented on HADOOP-18832:
-----------------------------------------

virajjasani commented on PR #5908:
URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1676685837

   I have repeated the above steps for both `-Dscale -Dprefetch` and `-Dscale`, and confirmed the above 3 points for both rounds.




> Upgrade aws-java-sdk to 1.12.499+
> ---------------------------------
>
> Key: HADOOP-18832
> URL: https://issues.apache.org/jira/browse/HADOOP-18832
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Major
> Labels: pull-request-available
>
> AWS SDK versions < 1.12.499 use a vulnerable version of Netty and hence show up
> in security CVE scans (CVE-2023-34462). The safe Netty version is 4.1.94.Final,
> which is used by aws-java-sdk 1.12.499+.






[GitHub] [hadoop] virajjasani commented on pull request #5908: HADOOP-18832. Upgrade aws-java-sdk to 1.12.499

2023-08-13 Thread via GitHub


virajjasani commented on PR #5908:
URL: https://github.com/apache/hadoop/pull/5908#issuecomment-1676685837

   I have repeated the above steps for both `-Dscale -Dprefetch` and `-Dscale`, and confirmed the above 3 points for both rounds.





[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #5921: HDFS-17138 RBF: We changed the hadoop.security.auth_to_local configur…

2023-08-13 Thread via GitHub


Hexiaoqiao commented on code in PR #5921:
URL: https://github.com/apache/hadoop/pull/5921#discussion_r1292958875


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java:
##
@@ -81,7 +81,12 @@ class AbstractDelegationTokenSecretManager

Review Comment:
   https://github.com/apache/hadoop/blob/8d95c588d2df0048b0d3eb711d74bf34bf4ae3c4/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosName.java#L428-L430





[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #5937: HDFS-17150. EC: Fix the bug of failed lease recovery.

2023-08-13 Thread via GitHub


Hexiaoqiao commented on code in PR #5937:
URL: https://github.com/apache/hadoop/pull/5937#discussion_r1292951771


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -3802,16 +3803,26 @@ boolean internalReleaseLease(Lease lease, String src, INodesInPath iip,
 lastBlock.getBlockType());
   }
 
-  if (uc.getNumExpectedLocations() == 0 && lastBlock.getNumBytes() == 0) {
+  int minLocationsNum = 1;
+  if (lastBlock.isStriped()) {
+minLocationsNum = ((BlockInfoStriped) lastBlock).getRealDataBlockNum();
+  }
+  if (uc.getNumExpectedLocations() < minLocationsNum &&
+  lastBlock.getNumBytes() == 0) {
 // There is no datanode reported to this block.
 // may be client have crashed before writing data to pipeline.
 // This blocks doesn't need any recovery.
 // We can remove this block and close the file.
 pendingFile.removeLastBlock(lastBlock);
 finalizeINodeFileUnderConstruction(src, pendingFile,
 iip.getLatestSnapshotId(), false);
-NameNode.stateChangeLog.warn("BLOCK* internalReleaseLease: "
-+ "Removed empty last block and closed file " + src);
+if (uc.getNumExpectedLocations() == 0) {
+  NameNode.stateChangeLog.warn("BLOCK* internalReleaseLease: "
+  + "Removed empty last block and closed file " + src);
+} else {
+  NameNode.stateChangeLog.warn("BLOCK* internalReleaseLease: "

Review Comment:
   Totally true, but the readability is poor. Is there another way to improve it, for example by branching on `lastBlock.isStriped()`?
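
   One possible shape for that suggestion, as a hedged sketch (helper name and message wording are illustrative, not part of the patch):

   ```java
   // Hypothetical helper: decide the warn() message once, so the calling code keeps a
   // single log statement instead of duplicated branches.
   private static String removedLastBlockReason(boolean striped, int numExpectedLocations) {
     if (striped && numExpectedLocations > 0) {
       // Striped block group that reported fewer locations than its real data block number.
       return "Removed last block of striped group with only " + numExpectedLocations
           + " reported locations";
     }
     // No datanode reported this block at all.
     return "Removed empty last block";
   }
   ```

   The call site would then be a single `NameNode.stateChangeLog.warn("BLOCK* internalReleaseLease: " + removedLastBlockReason(lastBlock.isStriped(), uc.getNumExpectedLocations()) + " and closed file " + src);`.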






[GitHub] [hadoop] zhangxiping1 commented on a diff in pull request #5921: HDFS-17138 RBF: We changed the hadoop.security.auth_to_local configur…

2023-08-13 Thread via GitHub


zhangxiping1 commented on code in PR #5921:
URL: https://github.com/apache/hadoop/pull/5921#discussion_r1292904295


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/AbstractDelegationTokenSecretManager.java:
##
@@ -81,7 +81,12 @@ class AbstractDelegationTokenSecretManager

[GitHub] [hadoop] zhangxiping1 commented on a diff in pull request #5921: HDFS-17138 RBF: We changed the hadoop.security.auth_to_local configur…

2023-08-13 Thread via GitHub


zhangxiping1 commented on code in PR #5921:
URL: https://github.com/apache/hadoop/pull/5921#discussion_r1292903312


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java:
##
@@ -376,4 +382,61 @@ public void testDelegationTokenIdentifierToString() throws Exception {
 " for SomeUser with renewer JobTracker",
 dtId.toStringStable());
   }
+
+  public static class MyDelegationTokenSecretManager extends
+      AbstractDelegationTokenSecretManager<DelegationTokenIdentifier> {
+    /**
+     * Create a secret manager
+     *
+     * @param delegationKeyUpdateInterval        the number of milliseconds for rolling
+     *                                           new secret keys.
+     * @param delegationTokenMaxLifetime         the maximum lifetime of the delegation
+     *                                           tokens in milliseconds
+     * @param delegationTokenRenewInterval       how often the tokens must be renewed
+     *                                           in milliseconds
+     * @param delegationTokenRemoverScanInterval how often the tokens are scanned
+     *                                           for expired tokens in milliseconds
+     */
+    public MyDelegationTokenSecretManager(long delegationKeyUpdateInterval,
+        long delegationTokenMaxLifetime, long delegationTokenRenewInterval,
+        long delegationTokenRemoverScanInterval) {
+      super(delegationKeyUpdateInterval,
+          delegationTokenMaxLifetime,
+          delegationTokenRenewInterval,
+          delegationTokenRemoverScanInterval);
+    }
+
+    @Override
+    public DelegationTokenIdentifier createIdentifier() {
+      return null;
+    }
+
+    @Override
+    public void logExpireTokens(Collection<DelegationTokenIdentifier> expiredTokens) throws IOException {
+      super.logExpireTokens(expiredTokens);
+    }
+  }
+
+  @Test
+  public void testLogExpireTokensWhenChangeRules() {
+    MyDelegationTokenSecretManager myDtSecretManager =
+        new MyDelegationTokenSecretManager(10 * 1000, 10 * 1000, 10 * 1000, 10 * 1000);
+    setRules("RULE:[2:$1@$0](SomeUser.*)s/.*/SomeUser/");
+    DelegationTokenIdentifier dtId = new DelegationTokenIdentifier(
+        new Text("SomeUser/h...@example.com"),
+        new Text("SomeUser/h...@example.com"),
+        new Text("SomeUser/h...@example.com"));
+    Set<DelegationTokenIdentifier> expiredTokens = new HashSet<>();
+    expiredTokens.add(dtId);
+
+    setRules("RULE:[2:$1@$0](OtherUser.*)s/.*/OtherUser/");
+    // rules was modified, causing the existing tokens (May be loaded from other storage systems like zookeeper)
+    // to fail to match the kerberos rules,
+    // return an exception that cannot be handled
+    try {
+      myDtSecretManager.logExpireTokens(expiredTokens);
+    } catch (Exception e) {
+      Assert.fail("Exception in logExpireTokens");

Review Comment:
   After the fix, no exception should be thrown; if an exception is thrown, it means the code has not been fixed properly.
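
   For context, a hedged sketch of the kind of defensive logging this argues for, assuming the failure comes from resolving the stored owner principal after the auth_to_local rules changed (class, method body, and message wording are illustrative, not the actual HDFS-17138 patch):

   ```java
   import java.util.Collection;
   import org.apache.hadoop.hdfs.security.token.delegation.DelegationTokenIdentifier;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   // Hypothetical sketch: log expired tokens without letting a rule mismatch on the
   // stored principal abort the whole expired-token removal pass.
   final class ExpiredTokenLoggingSketch {
     private static final Logger LOG = LoggerFactory.getLogger(ExpiredTokenLoggingSketch.class);

     static void logExpireTokens(Collection<DelegationTokenIdentifier> expiredTokens) {
       for (DelegationTokenIdentifier ident : expiredTokens) {
         String owner;
         try {
           // May fail when the current auth_to_local rules no longer match the principal
           // stored with the token (e.g. a token loaded back from ZooKeeper).
           owner = ident.getUser().getShortUserName();
         } catch (Exception e) {
           // Fall back to the raw owner text instead of propagating the failure.
           owner = String.valueOf(ident.getOwner());
         }
         LOG.info("Expired delegation token: sequenceNumber={}, owner={}",
             ident.getSequenceNumber(), owner);
       }
     }
   }
   ```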






[GitHub] [hadoop] zhangxiping1 commented on a diff in pull request #5921: HDFS-17138 RBF: We changed the hadoop.security.auth_to_local configur…

2023-08-13 Thread via GitHub


zhangxiping1 commented on code in PR #5921:
URL: https://github.com/apache/hadoop/pull/5921#discussion_r1292902814


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/security/TestDelegationToken.java:
##
@@ -376,4 +382,61 @@ public void testDelegationTokenIdentifierToString() throws Exception {
 " for SomeUser with renewer JobTracker",
 dtId.toStringStable());
   }
+
+  public static class MyDelegationTokenSecretManager extends
+      AbstractDelegationTokenSecretManager<DelegationTokenIdentifier> {
+    /**
+     * Create a secret manager
+     *
+     * @param delegationKeyUpdateInterval        the number of milliseconds for rolling
+     *                                           new secret keys.
+     * @param delegationTokenMaxLifetime         the maximum lifetime of the delegation
+     *                                           tokens in milliseconds
+     * @param delegationTokenRenewInterval       how often the tokens must be renewed
+     *                                           in milliseconds
+     * @param delegationTokenRemoverScanInterval how often the tokens are scanned
+     *                                           for expired tokens in milliseconds
+     */
+    public MyDelegationTokenSecretManager(long delegationKeyUpdateInterval,
+        long delegationTokenMaxLifetime, long delegationTokenRenewInterval,
+        long delegationTokenRemoverScanInterval) {
+      super(delegationKeyUpdateInterval,

Review Comment:
   Yes, the ExpiredTokenRemover thread runs automatically; the test case invokes logExpireTokens directly so it can determine whether an exception is thrown. If an exception is thrown, it indicates the issue has not been fixed. Any suggestions for improvement?






[GitHub] [hadoop] slfan1989 commented on pull request #5905: [YARN-11421] Graceful Decommission ignores launched containers and gets deactivated before timeout

2023-08-13 Thread via GitHub


slfan1989 commented on PR #5905:
URL: https://github.com/apache/hadoop/pull/5905#issuecomment-1676489997

   @abhishekd0907 We need to fix the checkstyle issue.





[GitHub] [hadoop] hadoop-yetus commented on pull request #5946: YARN-11154. Make router support proxy server.

2023-08-13 Thread via GitHub


hadoop-yetus commented on PR #5946:
URL: https://github.com/apache/hadoop/pull/5946#issuecomment-1676291541

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:-------:|:-------:|
   | +0 :ok: |  reexec  |   0m 28s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 51s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   4m 59s |  |  trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  compile  |   4m 24s |  |  trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m  8s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 11s |  |  trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  |  trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m  8s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 16s |  |  the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javac  |   4m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   4m 18s |  |  the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   4m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 51s |  |  the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 34s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 55s |  |  hadoop-yarn-api in the patch passed.  |
   | +1 :green_heart: |  unit  |   4m 49s |  |  hadoop-yarn-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 34s |  |  hadoop-yarn-server-router in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 131m  2s |  |  |


   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5946/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5946 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux 32959ad2fad1 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7172c5245463fbd09315cd6aa96816feaa97e4cd |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5946/2/testReport/ |
   | Max. process+thread count | 553 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn |
   | Console output | 

[GitHub] [hadoop] zhengchenyu commented on a diff in pull request #5946: YARN-11154. Make router support proxy server.

2023-08-13 Thread via GitHub


zhengchenyu commented on code in PR #5946:
URL: https://github.com/apache/hadoop/pull/5946#discussion_r1292685883


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterClientRMService.java:
##
@@ -623,4 +650,22 @@ public void initUserPipelineMap(Configuration conf) {
 YarnConfiguration.DEFAULT_ROUTER_PIPELINE_CACHE_MAX_SIZE);
 this.userPipelineMap = Collections.synchronizedMap(new LRUCacheHashMap<>(maxCacheSize, true));
   }
+
+  private URL getRedirectURL() throws Exception {
+    Configuration conf = getConfig();
+    String webAppAddress = WebAppUtils.getWebAppBindURL(conf, YarnConfiguration.ROUTER_BIND_HOST,
+        WebAppUtils.getRouterWebAppURLWithoutScheme(conf));
+    String[] hostPort = StringUtils.split(webAppAddress, ':');
+    if (hostPort.length != 2) {
+      throw new YarnRuntimeException("Router can't get valid redirect proxy url");
+    }
+    String host;
+    if (null == hostPort[0] || hostPort[0].equals("") || hostPort[0].equals("0.0.0.0")) {

Review Comment:
   Refactor `null == hostPort[0] || hostPort[0].equals("")` to `StringUtils.isBlank(hostPort[0])`.
   Is that OK?
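
   A self-contained, hedged sketch of the suggested simplification (class and method names are illustrative; note that commons-lang3 `StringUtils.isBlank()` also treats whitespace-only strings as blank, a slight widening of the original check):

   ```java
   import org.apache.commons.lang3.StringUtils;

   public final class BlankHostCheck {
     // Equivalent of: null == host || host.equals("") || host.equals("0.0.0.0")
     static boolean isUnresolvableBindHost(String host) {
       return StringUtils.isBlank(host) || "0.0.0.0".equals(host);
     }

     public static void main(String[] args) {
       System.out.println(isUnresolvableBindHost(null));        // true
       System.out.println(isUnresolvableBindHost(""));          // true
       System.out.println(isUnresolvableBindHost("0.0.0.0"));   // true
       System.out.println(isUnresolvableBindHost("router-1"));  // false
     }
   }
   ```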






[GitHub] [hadoop] zhengchenyu commented on a diff in pull request #5946: YARN-11154. Make router support proxy server.

2023-08-13 Thread via GitHub


zhengchenyu commented on code in PR #5946:
URL: https://github.com/apache/hadoop/pull/5946#discussion_r1292685883


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterClientRMService.java:
##
@@ -623,4 +650,22 @@ public void initUserPipelineMap(Configuration conf) {
 YarnConfiguration.DEFAULT_ROUTER_PIPELINE_CACHE_MAX_SIZE);
 this.userPipelineMap = Collections.synchronizedMap(new LRUCacheHashMap<>(maxCacheSize, true));
   }
+
+  private URL getRedirectURL() throws Exception {
+    Configuration conf = getConfig();
+    String webAppAddress = WebAppUtils.getWebAppBindURL(conf, YarnConfiguration.ROUTER_BIND_HOST,
+        WebAppUtils.getRouterWebAppURLWithoutScheme(conf));
+    String[] hostPort = StringUtils.split(webAppAddress, ':');
+    if (hostPort.length != 2) {
+      throw new YarnRuntimeException("Router can't get valid redirect proxy url");
+    }
+    String host;
+    if (null == hostPort[0] || hostPort[0].equals("") || hostPort[0].equals("0.0.0.0")) {

Review Comment:
   Refactor `null == hostPort[0] || hostPort[0].equals("")` to `StringUtils.isNotBlank(hostPort[0])`.
   Is that OK?






[GitHub] [hadoop] zhengchenyu commented on a diff in pull request #5946: YARN-11154. Make router support proxy server.

2023-08-13 Thread via GitHub


zhengchenyu commented on code in PR #5946:
URL: https://github.com/apache/hadoop/pull/5946#discussion_r1292685574


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterClientRMService.java:
##
@@ -318,7 +329,23 @@ public GetClusterNodeLabelsResponse getClusterNodeLabels(
   public GetApplicationReportResponse getApplicationReport(
   GetApplicationReportRequest request) throws YarnException, IOException {
 RequestInterceptorChainWrapper pipeline = getInterceptorChain();
-return pipeline.getRootInterceptor().getApplicationReport(request);
+GetApplicationReportResponse response = pipeline.getRootInterceptor()
+.getApplicationReport(request);
+if (getConfig().getBoolean(YarnConfiguration.ROUTER_WEBAPP_PROXY_ENABLE,

Review Comment:
   I will extract a new method.
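
   As a hedged illustration of what the extracted method might look like inside RouterClientRMService, which already provides `getConfig()` (the method name is hypothetical; the constants come from the quoted diff):

   ```java
   // Hypothetical extraction of the repeated configuration check.
   private boolean isRouterWebProxyEnabled() {
     return getConfig().getBoolean(YarnConfiguration.ROUTER_WEBAPP_PROXY_ENABLE,
         YarnConfiguration.DEFAULT_ROUTER_WEBAPP_PROXY_ENABLE);
   }
   ```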






[GitHub] [hadoop] zhengchenyu commented on a diff in pull request #5946: YARN-11154. Make router support proxy server.

2023-08-13 Thread via GitHub


zhengchenyu commented on code in PR #5946:
URL: https://github.com/apache/hadoop/pull/5946#discussion_r1292685527


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/clientrm/RouterClientRMService.java:
##
@@ -157,6 +163,11 @@ protected void serviceStart() throws Exception {
 YarnConfiguration.DEFAULT_ROUTER_CLIENTRM_ADDRESS,
 YarnConfiguration.DEFAULT_ROUTER_CLIENTRM_PORT);
 
+if (getConfig().getBoolean(YarnConfiguration.ROUTER_WEBAPP_PROXY_ENABLE,

Review Comment:
   I will fix it.






[GitHub] [hadoop] zhengchenyu commented on a diff in pull request #5946: YARN-11154. Make router support proxy server.

2023-08-13 Thread via GitHub


zhengchenyu commented on code in PR #5946:
URL: https://github.com/apache/hadoop/pull/5946#discussion_r1292674937


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router/src/main/java/org/apache/hadoop/yarn/server/router/Router.java:
##
@@ -209,9 +215,30 @@ public void startWepApp() {
 
 Builder builder =
 WebApps.$for("cluster", null, null, "ws").with(conf).at(webAppAddress);
+if(conf.getBoolean(YarnConfiguration.ROUTER_WEBAPP_PROXY_ENABLE,
+YarnConfiguration.DEFAULT_ROUTER_WEBAPP_PROXY_ENABLE)) {
+  fetcher = new FedAppReportFetcher(conf);
+  builder.withServlet(ProxyUriUtils.PROXY_SERVLET_NAME, ProxyUriUtils.PROXY_PATH_SPEC,
+  WebAppProxyServlet.class);
+  builder.withAttribute(WebAppProxy.FETCHER_ATTRIBUTE, fetcher);
+  String proxyHostAndPort = getProxyHostAndPort(conf);
+  String[] proxyParts = proxyHostAndPort.split(":");
+  builder.withAttribute(WebAppProxy.PROXY_HOST_ATTRIBUTE, proxyParts[0]);
+}
 webApp = builder.start(new RouterWebApp(this));
   }
 
+  public static String getProxyHostAndPort(Configuration conf) {
+String addr = conf.get(YarnConfiguration.PROXY_ADDRESS);

Review Comment:
   We do not need a default value; in this method, the router web address is itself the default.
   If PROXY_ADDRESS is configured, it means a standalone proxy server is deployed.
   If PROXY_ADDRESS is not configured, we treat the router web app as the proxy server.
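
   A hedged sketch of the fallback described above (the helper name is illustrative; the real method in the PR is `getProxyHostAndPort`): prefer an explicitly configured `yarn.web-proxy.address`, otherwise fall back to the router's own web app address.

   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.yarn.conf.YarnConfiguration;
   import org.apache.hadoop.yarn.webapp.util.WebAppUtils;

   public final class RouterProxyAddressSketch {
     // Hypothetical: resolve the proxy host:port, treating the router web app as the
     // default when no standalone proxy server (yarn.web-proxy.address) is configured.
     public static String resolveProxyHostAndPort(Configuration conf) {
       String addr = conf.get(YarnConfiguration.PROXY_ADDRESS);
       if (addr == null || addr.isEmpty()) {
         // No standalone proxy server deployed: the router itself serves the proxy pages.
         addr = WebAppUtils.getRouterWebAppURLWithoutScheme(conf);
       }
       return addr;
     }
   }
   ```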


