Re: [PR] HDFS-17218. NameNode should process time out excess redundancy blocks [hadoop]

2023-11-06 Thread via GitHub


haiyang1987 commented on code in PR #6176:
URL: https://github.com/apache/hadoop/pull/6176#discussion_r1384392878


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##
@@ -3040,6 +3058,98 @@ void rescanPostponedMisreplicatedBlocks() {
   (Time.monotonicNow() - startTime), endSize, (startSize - endSize));
 }
   }
+
+  /**
+   * Sets the timeout (in seconds) for excess redundancy blocks, if the provided timeout is
+   * less than or equal to 0, the default value is used (converted to milliseconds).
+   * @param timeOut The time (in seconds) to set as the excess redundancy block timeout.
+   */
+  public void setExcessRedundancyTimeout(long timeOut) {
+    if (timeOut <= 0) {
+      this.excessRedundancyTimeout = DFS_NAMENODE_EXCESS_REDUNDANCY_TIMEOUT_SEC * 1000L;
+    } else {
+      this.excessRedundancyTimeout = timeOut * 1000L;
+    }
+  }
+
+  /**
+   * Sets the limit number of blocks for checking excess redundancy timeout.
+   * If the provided limit is less than or equal to 0, the default limit is used.
+   *
+   * @param limit The limit number of blocks used to check for excess redundancy timeout.
+   */
+  public void setExcessRedundancyTimeoutCheckLimit(long limit) {
+    if (excessRedundancyTimeoutCheckLimit <= 0) {
+      this.excessRedundancyTimeoutCheckLimit =
+          DFS_NAMENODE_EXCESS_REDUNDANCY_TIMEOUT_CHECK_LIMIT_DEFAULT;
+    } else {
+      this.excessRedundancyTimeoutCheckLimit = limit;
+    }
+  }
+
+  /**
+   * Process timed-out blocks in the excess redundancy map.
+   */
+  void processTimedOutExcessBlocks() {
+    if (excessRedundancyMap.size() == 0) {
+      return;
+    }
+    namesystem.writeLock();
+    long now = Time.monotonicNow();
+    int processed = 0;
+    try {
+      Iterator>> iter =
+          excessRedundancyMap.getExcessRedundancyMap().entrySet().iterator();
+      while (iter.hasNext() && processed < excessRedundancyTimeoutCheckLimit) {

Review Comment:
   Updated the PR.
   Hi @zhangshuyan0, please help me review it again when you have free time. Thanks~
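
The quoted hunk above is truncated inside the scan loop, and the archive has stripped the map's generic types, so for orientation here is a minimal, self-contained sketch of the general pattern the new method follows: a bounded pass over a map that handles entries older than a timeout. The map, the field names `timeoutMs`/`checkLimit`, and the removal step are illustrative assumptions, not the actual patch.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Illustrative sketch only: a bounded scan over a map that handles entries
 * older than a timeout, mirroring the shape of processTimedOutExcessBlocks().
 * The map, field names and the removal step are assumptions, not the patch.
 */
public class TimedOutScanSketch {
  /** Key -> time (ms) at which the entry was added; insertion-ordered. */
  private final Map<String, Long> addedTimeMs = new LinkedHashMap<>();
  /** Analogous to excessRedundancyTimeout (already in milliseconds). */
  private final long timeoutMs = 30 * 60 * 1000L;
  /** Analogous to excessRedundancyTimeoutCheckLimit: bounds work per pass. */
  private final long checkLimit = 1000;

  void add(String key, long nowMs) {
    addedTimeMs.put(key, nowMs);
  }

  /** Scan at most checkLimit entries and drop those older than timeoutMs. */
  void processTimedOut(long nowMs) {
    int processed = 0;
    Iterator<Map.Entry<String, Long>> iter = addedTimeMs.entrySet().iterator();
    while (iter.hasNext() && processed < checkLimit) {
      Map.Entry<String, Long> entry = iter.next();
      if (nowMs - entry.getValue() > timeoutMs) {
        // In the real patch this is where a timed-out excess block would be
        // handed back for reprocessing; the sketch simply drops it.
        iter.remove();
      }
      processed++;
    }
  }
}
```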






Re: [PR] HDFS-17218. NameNode should process time out excess redundancy blocks [hadoop]

2023-11-06 Thread via GitHub


haiyang1987 commented on code in PR #6176:
URL: https://github.com/apache/hadoop/pull/6176#discussion_r1384389728


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java:
##
@@ -3040,6 +3058,98 @@ void rescanPostponedMisreplicatedBlocks() {
   (Time.monotonicNow() - startTime), endSize, (startSize - endSize));
 }
   }
+
+  /**
+   * Sets the timeout (in seconds) for excess redundancy blocks, if the provided timeout is
+   * less than or equal to 0, the default value is used (converted to milliseconds).
+   * @param timeOut The time (in seconds) to set as the excess redundancy block timeout.
+   */
+  public void setExcessRedundancyTimeout(long timeOut) {
+    if (timeOut <= 0) {
+      this.excessRedundancyTimeout = DFS_NAMENODE_EXCESS_REDUNDANCY_TIMEOUT_SEC * 1000L;
+    } else {
+      this.excessRedundancyTimeout = timeOut * 1000L;
+    }
+  }
+
+  /**
+   * Sets the limit number of blocks for checking excess redundancy timeout.
+   * If the provided limit is less than or equal to 0, the default limit is used.
+   *
+   * @param limit The limit number of blocks used to check for excess redundancy timeout.
+   */
+  public void setExcessRedundancyTimeoutCheckLimit(long limit) {
+    if (excessRedundancyTimeoutCheckLimit <= 0) {
+      this.excessRedundancyTimeoutCheckLimit =
+          DFS_NAMENODE_EXCESS_REDUNDANCY_TIMEOUT_CHECK_LIMIT_DEFAULT;
+    } else {
+      this.excessRedundancyTimeoutCheckLimit = limit;
+    }
+  }
+
+  /**
+   * Process timed-out blocks in the excess redundancy map.
+   */
+  void processTimedOutExcessBlocks() {
+    if (excessRedundancyMap.size() == 0) {
+      return;
+    }
+    namesystem.writeLock();
+    long now = Time.monotonicNow();
+    int processed = 0;
+    try {
+      Iterator>> iter =
+          excessRedundancyMap.getExcessRedundancyMap().entrySet().iterator();
+      while (iter.hasNext() && processed < excessRedundancyTimeoutCheckLimit) {

Review Comment:
   Got it, I will update it later.
   
   Thanks @zhangshuyan0 for your comment.
   
   






[jira] [Commented] (HADOOP-18910) ABFS: Adding Support for MD5 Hash based integrity verification of the request content during transport

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783473#comment-17783473
 ] 

ASF GitHub Bot commented on HADOOP-18910:
-

saxenapranav commented on code in PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#discussion_r1384375876


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -861,6 +874,16 @@ private boolean checkUserError(int responseStatusCode) {
 && responseStatusCode < HttpURLConnection.HTTP_INTERNAL_ERROR);
   }
 
+  /**
+   * To check if the failure exception returned by server is due to MD5 Mismatch
+   * @param e Exception returned by AbfsRestOperation
+   * @return boolean whether exception is due to MD5Mismatch or not
+   */
+  protected boolean isMd5ChecksumError(final AzureBlobFileSystemException e) {

Review Comment:
   Let's make this private.





> ABFS: Adding Support for MD5 Hash based integrity verification of the request 
> content during transport 
> ---
>
> Key: HADOOP-18910
> URL: https://issues.apache.org/jira/browse/HADOOP-18910
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Azure Storage Supports Content-MD5 Request Headers in Both Read and Append 
> APIs.
> Read: [Path - Read - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/read]
> Append: [Path - Update - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/update]
> This change is to make client-side changes to support them. In Read request, 
> we will send the appropriate header in response to which server will return 
> the MD5 Hash of the data it sends back. On Client we will tally this with the 
> MD5 hash computed from the data received.
> In Append request, we will compute the MD5 Hash of the data that we are 
> sending to the server and specify that in appropriate header. Server on 
> finding that header will tally this with the MD5 hash it will compute on the 
> data received. 
> This whole Checksum Validation Support is guarded behind a config, Config is 
> by default disabled because with the use of "https" integrity of data is 
> preserved anyways. This is introduced as an additional data integrity check 
> which will have a performance impact as well.
> Users can decide if they want to enable this or not by setting the following 
> config to *"true"* or *"false"* respectively. *Config: 
> "fs.azure.enable.checksum.validation"*






Re: [PR] HADOOP-18910: [ABFS] Adding Support for MD5 Hash based integrity verification of the request content during transport [hadoop]

2023-11-06 Thread via GitHub


saxenapranav commented on code in PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#discussion_r1384375876


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -861,6 +874,16 @@ private boolean checkUserError(int responseStatusCode) {
 && responseStatusCode < HttpURLConnection.HTTP_INTERNAL_ERROR);
   }
 
+  /**
+   * To check if the failure exception returned by server is due to MD5 Mismatch
+   * @param e Exception returned by AbfsRestOperation
+   * @return boolean whether exception is due to MD5Mismatch or not
+   */
+  protected boolean isMd5ChecksumError(final AzureBlobFileSystemException e) {

Review Comment:
   Let's make this private.






[jira] [Commented] (HADOOP-18910) ABFS: Adding Support for MD5 Hash based integrity verification of the request content during transport

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783472#comment-17783472
 ] 

ASF GitHub Bot commented on HADOOP-18910:
-

saxenapranav commented on code in PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#discussion_r1384373412


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -879,9 +880,8 @@ private boolean checkUserError(int responseStatusCode) {
* @return boolean whether exception is due to MD5Mismatch or not
*/
   protected boolean isMd5ChecksumError(final AzureBlobFileSystemException e) {
-    return ((AbfsRestOperationException) e).getStatusCode()
-        == HttpURLConnection.HTTP_BAD_REQUEST
-        && e.getMessage().contains(MD5_ERROR_SERVER_MESSAGE);
+    AzureServiceErrorCode storageErrorCode = ((AbfsRestOperationException) e).getErrorCode();

Review Comment:
   Check whether it can be the case that `e` is not an instanceof AbfsRestOperationException.
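
Taken together with the earlier suggestion to make the method private, the guarded check being asked for might look like the sketch below. The two nested exception classes and the message constant are stand-ins so the sketch compiles on its own; the status-code/message test mirrors the lines removed in the quoted diff, and none of this is the actual patch.

```java
import java.net.HttpURLConnection;

/**
 * Self-contained sketch of the guarded (and private) check suggested above.
 * The nested exception classes and the constant are stand-ins for the real
 * hadoop-azure types.
 */
public class Md5ChecksumErrorCheckSketch {
  static final String MD5_ERROR_SERVER_MESSAGE = "Md5Mismatch"; // stand-in value

  static class AzureBlobFileSystemException extends Exception {
    AzureBlobFileSystemException(String message) { super(message); }
  }

  static class AbfsRestOperationException extends AzureBlobFileSystemException {
    private final int statusCode;
    AbfsRestOperationException(int statusCode, String message) {
      super(message);
      this.statusCode = statusCode;
    }
    int getStatusCode() { return statusCode; }
  }

  private static boolean isMd5ChecksumError(AzureBlobFileSystemException e) {
    // Guard the cast first, per the review comment: e may not be a rest-operation exception.
    if (!(e instanceof AbfsRestOperationException)) {
      return false;
    }
    // Mirrors the removed lines in the quoted diff: 400 status plus the MD5 marker in the message.
    return ((AbfsRestOperationException) e).getStatusCode() == HttpURLConnection.HTTP_BAD_REQUEST
        && e.getMessage().contains(MD5_ERROR_SERVER_MESSAGE);
  }

  public static void main(String[] args) {
    System.out.println(isMd5ChecksumError(new AzureBlobFileSystemException("Md5Mismatch")));    // false: not castable
    System.out.println(isMd5ChecksumError(new AbfsRestOperationException(400, "Md5Mismatch"))); // true
  }
}
```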





> ABFS: Adding Support for MD5 Hash based integrity verification of the request 
> content during transport 
> ---
>
> Key: HADOOP-18910
> URL: https://issues.apache.org/jira/browse/HADOOP-18910
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Azure Storage Supports Content-MD5 Request Headers in Both Read and Append 
> APIs.
> Read: [Path - Read - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/read]
> Append: [Path - Update - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/update]
> This change is to make client-side changes to support them. In Read request, 
> we will send the appropriate header in response to which server will return 
> the MD5 Hash of the data it sends back. On Client we will tally this with the 
> MD5 hash computed from the data received.
> In Append request, we will compute the MD5 Hash of the data that we are 
> sending to the server and specify that in appropriate header. Server on 
> finding that header will tally this with the MD5 hash it will compute on the 
> data received. 
> This whole Checksum Validation Support is guarded behind a config, Config is 
> by default disabled because with the use of "https" integrity of data is 
> preserved anyways. This is introduced as an additional data integrity check 
> which will have a performance impact as well.
> Users can decide if they want to enable this or not by setting the following 
> config to *"true"* or *"false"* respectively. *Config: 
> "fs.azure.enable.checksum.validation"*






Re: [PR] HADOOP-18910: [ABFS] Adding Support for MD5 Hash based integrity verification of the request content during transport [hadoop]

2023-11-06 Thread via GitHub


saxenapranav commented on code in PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#discussion_r1384373412


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -879,9 +880,8 @@ private boolean checkUserError(int responseStatusCode) {
* @return boolean whether exception is due to MD5Mismatch or not
*/
   protected boolean isMd5ChecksumError(final AzureBlobFileSystemException e) {
-    return ((AbfsRestOperationException) e).getStatusCode()
-        == HttpURLConnection.HTTP_BAD_REQUEST
-        && e.getMessage().contains(MD5_ERROR_SERVER_MESSAGE);
+    AzureServiceErrorCode storageErrorCode = ((AbfsRestOperationException) e).getErrorCode();

Review Comment:
   Check whether it can be the case that `e` is not an instanceof AbfsRestOperationException.






Re: [PR] YARN-11548. [Federation] Router Supports Format FederationStateStore. [hadoop]

2023-11-06 Thread via GitHub


hadoop-yetus commented on PR #6116:
URL: https://github.com/apache/hadoop/pull/6116#issuecomment-1797702495

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   2m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   2m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   4m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 51s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   4m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  35m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m 22s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  | 101m  8s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 33s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 263m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6116/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6116 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0cb80af38dd0 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / cd940bce55ed121a5698cc1cb102cf6f267d7565 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6116/8/testReport/ |
   | Max. process+thread count | 949 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router 
U: hadoop-yarn-project/hadoop-yarn/hado

Re: [PR] YARN-11011. Make YARN Router throw Exception to client clearly. [hadoop]

2023-11-06 Thread via GitHub


hadoop-yetus commented on PR #6211:
URL: https://github.com/apache/hadoop/pull/6211#issuecomment-1797092861

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   8m 29s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 58s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 44s |  |  trunk passed  |
   | -1 :x: |  shadedclient  |   5m  4s |  |  branch has errors when building 
and testing our client artifacts.  |
   | -0 :warning: |  patch  |   5m 20s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 23s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  75m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6211/11/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6211 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 01025f6d826f 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3fee020fc17dd4f9643c9c4a13536c5e3bcd458c |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6211/11/testReport/ |
   | Max. process+thread count | 553 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6211/11/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



Re: [PR] YARN-11483. [Federation] Router AdminCLI Supports Clean Finish Apps. [hadoop]

2023-11-06 Thread via GitHub


hadoop-yetus commented on PR #6251:
URL: https://github.com/apache/hadoop/pull/6251#issuecomment-1796515129

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 17s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   7m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   7m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m  4s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   5m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   9m 38s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  cc  |   7m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   7m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  cc  |   7m  5s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   7m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 51s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6251/4/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 6 new + 66 unchanged - 
0 fixed = 72 total (was 66)  |
   | +1 :green_heart: |  mvnsite  |   4m 43s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   4m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | -1 :x: |  spotbugs  |   2m 54s | 
[/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6251/4/artifact/out/new-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common.html)
 |  hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common generated 2 new + 0 
unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  40m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  7s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 34s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 28s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  | 105m 57s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | -1 :x: |  unit  |  28m 19s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6251/4/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client.txt)
 |  hadoop-yarn-client in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 38s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 370m 19s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-yarn-project/hadoo

[jira] [Commented] (HADOOP-18925) S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable CopyFromLocalOperation

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783379#comment-17783379
 ] 

ASF GitHub Bot commented on HADOOP-18925:
-

steveloughran commented on PR #6259:
URL: https://github.com/apache/hadoop/pull/6259#issuecomment-1796422348

   checkstyle to fix before I merge
   ```
   
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3926: final String key = pathToKey(to);: 'block' child has incorrect indentation level 12, expected level should be 8. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3927: final ObjectMetadata om = newObjectMetadata(file.length());: 'block' child has incorrect indentation level 12, expected level should be 8. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3928: Progressable progress = null;: 'block' child has incorrect indentation level 12, expected level should be 8. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3929: PutObjectRequest putObjectRequest = newPutObjectRequest(key, om, file);: 'block' child has incorrect indentation level 12, expected level should be 8. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3930: S3AFileSystem.this.invoker.retry(: 'block' child has incorrect indentation level 12, expected level should be 8. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3935: return null;: 'block' child has incorrect indentation level 12, expected level should be 8. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3936: });: 'block rcurly' has incorrect indentation level 10, expected level should be 6. [Indentation]
   ```
   




> S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable 
> CopyFromLocalOperation
> -
>
> Key: HADOOP-18925
> URL: https://issues.apache.org/jira/browse/HADOOP-18925
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> reported failure of CopyFromLocalOperation.getFinalPath() during job 
> submission with s3a declared as cluster fs.
> add an emergency option to disable this optimised uploader and revert to the 
> superclass implementation
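
From the user side, the emergency switch would presumably be a one-line config change; a minimal sketch assuming a standard Hadoop `Configuration` and the option name from the issue title:

```java
import org.apache.hadoop.conf.Configuration;

public class DisableS3ACopyFromLocal {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Emergency switch: turn off the optimised CopyFromLocalOperation and
    // fall back to the generic superclass implementation.
    conf.setBoolean("fs.s3a.copy.from.local.enabled", false);
    System.out.println(conf.get("fs.s3a.copy.from.local.enabled"));
  }
}
```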






Re: [PR] HADOOP-18925. S3A: option to enable/disable CopyFromLocalOperation (#6163) [hadoop]

2023-11-06 Thread via GitHub


steveloughran commented on PR #6259:
URL: https://github.com/apache/hadoop/pull/6259#issuecomment-1796422348

   checkstyle to fix before I merge
   ```
   
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3926: final String key = pathToKey(to);: 'block' child has incorrect indentation level 12, expected level should be 8. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3927: final ObjectMetadata om = newObjectMetadata(file.length());: 'block' child has incorrect indentation level 12, expected level should be 8. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3928: Progressable progress = null;: 'block' child has incorrect indentation level 12, expected level should be 8. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3929: PutObjectRequest putObjectRequest = newPutObjectRequest(key, om, file);: 'block' child has incorrect indentation level 12, expected level should be 8. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3930: S3AFileSystem.this.invoker.retry(: 'block' child has incorrect indentation level 12, expected level should be 8. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3935: return null;: 'block' child has incorrect indentation level 12, expected level should be 8. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:3936: });: 'block rcurly' has incorrect indentation level 10, expected level should be 6. [Indentation]
   ```
   





[jira] [Commented] (HADOOP-18874) ABFS: Adding Server returned request id in Exception method thrown to caller.

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783374#comment-17783374
 ] 

ASF GitHub Bot commented on HADOOP-18874:
-

steveloughran merged PR #6004:
URL: https://github.com/apache/hadoop/pull/6004




> ABFS: Adding Server returned request id in Exception method thrown to caller.
> -
>
> Key: HADOOP-18874
> URL: https://issues.apache.org/jira/browse/HADOOP-18874
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Each request made to Azure server has its unique ActivityId (rid) which is 
> returned in response of the request whether is succeed or fails.
> When a HDFS call fails due to an error from Azure service, An 
> ABFSRestOperationException is throws to the caller. This task is to add a 
> server returned activity id (rid) in the exception message which can be used 
> to investigate the failure on service side.






Re: [PR] HADOOP-18874: [ABFS] Adding Server returned request id in Exception Message thrown to caller. [hadoop]

2023-11-06 Thread via GitHub


steveloughran merged PR #6004:
URL: https://github.com/apache/hadoop/pull/6004





[jira] [Commented] (HADOOP-18925) S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable CopyFromLocalOperation

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783366#comment-17783366
 ] 

ASF GitHub Bot commented on HADOOP-18925:
-

hadoop-yetus commented on PR #6259:
URL: https://github.com/apache/hadoop/pull/6259#issuecomment-1796394151

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  10m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 33s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  40m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6259/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 7 new + 3 unchanged - 0 fixed 
= 10 total (was 3)  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 35s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 157m 38s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6259/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6259 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux dc28b253f50e 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 1f0f1864ab89d0447b6f1a48323c8d3ed724f7f5 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6259/1/testReport/ |
   | Max. process+thread count | 575 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6259/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable 
> CopyFromLocalOperation
> -
>
> Key: HADOOP-18925
> URL: https://issues.apache.org/jira/browse/HADOOP-18925
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> reported failure of CopyFromLocalOperation.getFinalPath() during job 

Re: [PR] HADOOP-18925. S3A: option to enable/disable CopyFromLocalOperation (#6163) [hadoop]

2023-11-06 Thread via GitHub


hadoop-yetus commented on PR #6259:
URL: https://github.com/apache/hadoop/pull/6259#issuecomment-1796394151

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  10m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 33s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  40m 23s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6259/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 7 new + 3 unchanged - 0 fixed 
= 10 total (was 3)  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 35s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 157m 38s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6259/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6259 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux dc28b253f50e 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 1f0f1864ab89d0447b6f1a48323c8d3ed724f7f5 |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6259/1/testReport/ |
   | Max. process+thread count | 575 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6259/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   





Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]

2023-11-06 Thread via GitHub


hadoop-yetus commented on PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#issuecomment-1796080821

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   9m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m  3s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   3m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   2m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   2m 51s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/2/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 41s | 
[/results-checkstyle-hadoop-hdfs-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/2/artifact/out/results-checkstyle-hadoop-hdfs-project.txt)
 |  hadoop-hdfs-project: The patch generated 1 new + 45 unchanged - 0 fixed = 
46 total (was 45)  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   3m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 56s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 192m 14s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 315m 36s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDFSUtil |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5829/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5829 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux a7218a7ce8bd 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ff4b113ef084a1e1843518d19f0726ca2994b63a |
   | Defa

[jira] [Commented] (HADOOP-18874) ABFS: Adding Server returned request id in Exception method thrown to caller.

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783325#comment-17783325
 ] 

ASF GitHub Bot commented on HADOOP-18874:
-

hadoop-yetus commented on PR #6004:
URL: https://github.com/apache/hadoop/pull/6004#issuecomment-1795849410

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6004/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  32m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 15s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 135m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6004/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6004 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 01581b5c665e 5.15.0-86-generic #96-Ubuntu SMP Wed Sep 20 
08:23:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6bb403de1677533209964edaa5fa535ee77f8022 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6004/3/testReport/ |
   | Max. process+thread count | 555 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6004/3/co

Re: [PR] HADOOP-18874: [ABFS] Adding Server returned request id in Exception Message thrown to caller. [hadoop]

2023-11-06 Thread via GitHub


hadoop-yetus commented on PR #6004:
URL: https://github.com/apache/hadoop/pull/6004#issuecomment-1795849410

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m  2s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6004/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 0 unchanged - 0 
fixed = 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  32m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 15s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 135m 44s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6004/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6004 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 01581b5c665e 5.15.0-86-generic #96-Ubuntu SMP Wed Sep 20 
08:23:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6bb403de1677533209964edaa5fa535ee77f8022 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6004/3/testReport/ |
   | Max. process+thread count | 555 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6004/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To resp

[jira] [Commented] (HADOOP-18925) S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable CopyFromLocalOperation

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783324#comment-17783324
 ] 

ASF GitHub Bot commented on HADOOP-18925:
-

steveloughran commented on PR #6259:
URL: https://github.com/apache/hadoop/pull/6259#issuecomment-1795836882

   full test run s3 london,  -Dparallel-tests -DtestsThreadCount=8 .. 11 
minutes!




> S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable 
> CopyFromLocalOperation
> -
>
> Key: HADOOP-18925
> URL: https://issues.apache.org/jira/browse/HADOOP-18925
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> Reported failure of CopyFromLocalOperation.getFinalPath() during job 
> submission with s3a declared as the cluster fs.
> Add an emergency option to disable this optimised uploader and revert to the 
> superclass implementation



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18925. S3A: option to enable/disable CopyFromLocalOperation (#6163) [hadoop]

2023-11-06 Thread via GitHub


steveloughran commented on PR #6259:
URL: https://github.com/apache/hadoop/pull/6259#issuecomment-1795836882

   full test run s3 london,  -Dparallel-tests -DtestsThreadCount=8 .. 11 
minutes!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-18925. S3A: option to enable/disable CopyFromLocalOperation (#6163) [hadoop]

2023-11-06 Thread via GitHub


steveloughran opened a new pull request, #6259:
URL: https://github.com/apache/hadoop/pull/6259

   
   backport of #6163 
   
   Add a new option:
   fs.s3a.optimized.copy.from.local.enabled
   
   This will enable (default) or disable the
   optimized CopyFromLocalOperation upload operation
   when copyFromLocalFile() is invoked.
   
   When false the superclass implementation is used; duration statistics are 
still collected, though audit span entries in logs will be for the individual 
fs operations, not the overall operation.
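   
   For illustration, a minimal usage sketch of the new option, assuming the 
option name above and the standard Hadoop Configuration/FileSystem APIs; the 
bucket and local path below are made up:
   
      import java.net.URI;
      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.fs.FileSystem;
      import org.apache.hadoop.fs.Path;
   
      public class CopyFromLocalFallbackExample {
        public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          // false -> revert to the generic FileSystem upload path
          conf.setBoolean("fs.s3a.optimized.copy.from.local.enabled", false);
   
          FileSystem s3a = FileSystem.get(new URI("s3a://example-bucket/"), conf);
          // the copy still succeeds; only the upload strategy changes
          s3a.copyFromLocalFile(new Path("file:///tmp/data.csv"),
              new Path("s3a://example-bucket/landing/data.csv"));
        }
      }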
   
   Contributed by Steve Loughran
   
   
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [x] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18925) S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable CopyFromLocalOperation

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783312#comment-17783312
 ] 

ASF GitHub Bot commented on HADOOP-18925:
-

steveloughran opened a new pull request, #6259:
URL: https://github.com/apache/hadoop/pull/6259

   
   backport of #6163 
   
   Add a new option:
   fs.s3a.optimized.copy.from.local.enabled
   
   This will enable (default) or disable the
   optimized CopyFromLocalOperation upload operation
   when copyFromLocalFile() is invoked.
   
   When false the superclass implementation is used; duration statistics are 
still collected, though audit span entries in logs will be for the individual 
fs operations, not the overall operation.
   
   Contributed by Steve Loughran
   
   
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [x] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable 
> CopyFromLocalOperation
> -
>
> Key: HADOOP-18925
> URL: https://issues.apache.org/jira/browse/HADOOP-18925
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> Reported failure of CopyFromLocalOperation.getFinalPath() during job 
> submission with s3a declared as the cluster fs.
> Add an emergency option to disable this optimised uploader and revert to the 
> superclass implementation



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18487) Make protobuf 2.5 an optional runtime dependency.

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783308#comment-17783308
 ] 

ASF GitHub Bot commented on HADOOP-18487:
-

steveloughran opened a new pull request, #6258:
URL: https://github.com/apache/hadoop/pull/6258

   
   backport of trunk pr #6185 
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Make protobuf 2.5 an optional runtime dependency.
> -
>
> Key: HADOOP-18487
> URL: https://issues.apache.org/jira/browse/HADOOP-18487
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, ipc
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Uses of protobuf 2.5 and RpcEngine have been deprecated since 3.3.0 in 
> HADOOP-17046, while still keeping those files around (for a long time...).
> How about we make the protobuf 2.5.0 export of hadoop-common and hadoop-hdfs 
> *provided*, rather than *compile*?
> That way, if apps want it for their own APIs, they have to explicitly ask for 
> it, but at least our own scans don't break.
> I have no idea what will happen to the rest of the stack at this point; it 
> will be "interesting" to see



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-18487. Protobuf 2.5 removal part 2: stop exporting protobuf-2.5 (#6185) [hadoop]

2023-11-06 Thread via GitHub


steveloughran opened a new pull request, #6258:
URL: https://github.com/apache/hadoop/pull/6258

   
   backport of trunk pr #6185 
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18487) Make protobuf 2.5 an optional runtime dependency.

2023-11-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18487:

Fix Version/s: 3.3.9

> Make protobuf 2.5 an optional runtime dependency.
> -
>
> Key: HADOOP-18487
> URL: https://issues.apache.org/jira/browse/HADOOP-18487
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, ipc
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Uses of protobuf 2.5 and RpcEngine have been deprecated since 3.3.0 in 
> HADOOP-17046, while still keeping those files around (for a long time...).
> How about we make the protobuf 2.5.0 export of hadoop-common and hadoop-hdfs 
> *provided*, rather than *compile*?
> That way, if apps want it for their own APIs, they have to explicitly ask for 
> it, but at least our own scans don't break.
> I have no idea what will happen to the rest of the stack at this point; it 
> will be "interesting" to see



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18487) Make protobuf 2.5 an optional runtime dependency.

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783305#comment-17783305
 ] 

ASF GitHub Bot commented on HADOOP-18487:
-

steveloughran merged PR #6185:
URL: https://github.com/apache/hadoop/pull/6185




> Make protobuf 2.5 an optional runtime dependency.
> -
>
> Key: HADOOP-18487
> URL: https://issues.apache.org/jira/browse/HADOOP-18487
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, ipc
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> Uses of protobuf 2.5 and RpcEngine have been deprecated since 3.3.0 in 
> HADOOP-17046, while still keeping those files around (for a long time...).
> How about we make the protobuf 2.5.0 export of hadoop-common and hadoop-hdfs 
> *provided*, rather than *compile*?
> That way, if apps want it for their own APIs, they have to explicitly ask for 
> it, but at least our own scans don't break.
> I have no idea what will happen to the rest of the stack at this point; it 
> will be "interesting" to see



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18487. Protobuf 2.5 removal part 2: stop exporting protobuf-2.5" [hadoop]

2023-11-06 Thread via GitHub


steveloughran merged PR #6185:
URL: https://github.com/apache/hadoop/pull/6185


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18257) Analyzing S3A Audit Logs

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783304#comment-17783304
 ] 

ASF GitHub Bot commented on HADOOP-18257:
-

steveloughran commented on code in PR #6000:
URL: https://github.com/apache/hadoop/pull/6000#discussion_r1383731616


##
hadoop-tools/hadoop-aws/src/test/resources/TestAuditLogs/sampleLog1:
##
@@ -0,0 +1,36 @@
+183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a400 bucket-london 
[13/May/2021:11:26:06 +] 109.157.171.174 arn:aws:iam::152813717700:user/dev 
M7ZB7C4RTKXJKTM9 REST.PUT.OBJECT fork-0001/test/testParseBrokenCSVFile "PUT 
/fork-0001/test/testParseBrokenCSVFile HTTP/1.1" 200 - - 794 55 17 
"https://audit.example.org/hadoop/1/op_create/e8ede3c7-8506-4a43-8268-fe8fcbb510a4-0278/?op=op_create&p1=fork-0001/test/testParseBrokenCSVFile&pr=alice&ps=2eac5a04-2153-48db-896a-09bc9a2fd132&id=e8ede3c7-8506-4a43-8268-fe8fcbb510a4-0278&t0=154&fs=e8ede3c7-8506-4a43-8268-fe8fcbb510a4&t1=156&ts=1620905165700";
 "Hadoop 3.4.0-SNAPSHOT, java/1.8.0_282 vendor/AdoptOpenJDK" - 
TrIqtEYGWAwvu0h1N9WJKyoqM0TyHUaY+ZZBwP2yNf2qQp1Z/0= SigV4 
ECDHE-RSA-AES128-GCM-SHA256 AuthHeader bucket-london.s3.eu-west-2.amazonaws.com 
TLSv1.2";

Review Comment:
   Add an excludes entry in the apache-rat-plugin configuration for this sample log file.



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestS3AAuditLogMergerAndParser.java:
##
@@ -0,0 +1,273 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.util.Map;
+
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
+import org.apache.hadoop.fs.s3a.audit.mapreduce.S3AAuditLogMergerAndParser;
+
+/**
+ * This will implement different tests on S3AAuditLogMergerAndParser class.
+ */
+public class TestS3AAuditLogMergerAndParser extends AbstractS3ATestBase {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestS3AAuditLogMergerAndParser.class);
+
+  /**
+   * A real log entry.
+   * This is derived from a real log entry on a test run.
+   * If this needs to be updated, please do it from a real log.
+   * Splitting this up across lines has a tendency to break things, so
+   * be careful making changes.
+   */
+  static final String SAMPLE_LOG_ENTRY =
+  "183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a400"
+  + " bucket-london"
+  + " [13/May/2021:11:26:06 +]"
+  + " 109.157.171.174"
+  + " arn:aws:iam::152813717700:user/dev"
+  + " M7ZB7C4RTKXJKTM9"
+  + " REST.PUT.OBJECT"
+  + " fork-0001/test/testParseBrokenCSVFile"
+  + " \"PUT /fork-0001/test/testParseBrokenCSVFile HTTP/1.1\""
+  + " 200"
+  + " -"
+  + " -"
+  + " 794"
+  + " 55"
+  + " 17"
+  + " \"https://audit.example.org/hadoop/1/op_create/";
+  + "e8ede3c7-8506-4a43-8268-fe8fcbb510a4-0278/"
+  + "?op=op_create"
+  + "&p1=fork-0001/test/testParseBrokenCSVFile"
+  + "&pr=alice"
+  + "&ps=2eac5a04-2153-48db-896a-09bc9a2fd132"
+  + "&id=e8ede3c7-8506-4a43-8268-fe8fcbb510a4-0278&t0=154"
+  + "&fs=e8ede3c7-8506-4a43-8268-fe8fcbb510a4&t1=156&"
+  + "ts=1620905165700\""
+  + " \"Hadoop 3.4.0-SNAPSHOT, java/1.8.0_282 vendor/AdoptOpenJDK\""
+  + " -"
+  + " TrIqtEYGWAwvu0h1N9WJKyoqM0TyHUaY+ZZBwP2yNf2qQp1Z/0="
+  + " SigV4"
+  + " ECDHE-RSA-AES128-GCM-SHA256"
+  + " AuthHeader"
+  + " bucket-london.s3.eu-west-2.amazonaws.com"
+  + " TLSv1.2" + "\n";
+
+  static final String SAMPLE_LOG_ENTRY_1 =
+  "01234567890123456789"
+  + " bucket-london1"
+  + " [13/May/2021:11:26:06 +]"
+  + " 109.157.171.174"
+  + " arn:aws:iam::152813717700:use

Re: [PR] HADOOP-18257. Merging and Parsing S3A audit logs into Avro format for analysis. [hadoop]

2023-11-06 Thread via GitHub


steveloughran commented on code in PR #6000:
URL: https://github.com/apache/hadoop/pull/6000#discussion_r1383731616


##
hadoop-tools/hadoop-aws/src/test/resources/TestAuditLogs/sampleLog1:
##
@@ -0,0 +1,36 @@
+183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a400 bucket-london 
[13/May/2021:11:26:06 +] 109.157.171.174 arn:aws:iam::152813717700:user/dev 
M7ZB7C4RTKXJKTM9 REST.PUT.OBJECT fork-0001/test/testParseBrokenCSVFile "PUT 
/fork-0001/test/testParseBrokenCSVFile HTTP/1.1" 200 - - 794 55 17 
"https://audit.example.org/hadoop/1/op_create/e8ede3c7-8506-4a43-8268-fe8fcbb510a4-0278/?op=op_create&p1=fork-0001/test/testParseBrokenCSVFile&pr=alice&ps=2eac5a04-2153-48db-896a-09bc9a2fd132&id=e8ede3c7-8506-4a43-8268-fe8fcbb510a4-0278&t0=154&fs=e8ede3c7-8506-4a43-8268-fe8fcbb510a4&t1=156&ts=1620905165700";
 "Hadoop 3.4.0-SNAPSHOT, java/1.8.0_282 vendor/AdoptOpenJDK" - 
TrIqtEYGWAwvu0h1N9WJKyoqM0TyHUaY+ZZBwP2yNf2qQp1Z/0= SigV4 
ECDHE-RSA-AES128-GCM-SHA256 AuthHeader bucket-london.s3.eu-west-2.amazonaws.com 
TLSv1.2";

Review Comment:
   Add an excludes entry in the apache-rat-plugin configuration for this sample log file.



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/audit/TestS3AAuditLogMergerAndParser.java:
##
@@ -0,0 +1,273 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a.audit;
+
+import java.io.File;
+import java.io.FileWriter;
+import java.io.IOException;
+import java.nio.file.Files;
+import java.util.Map;
+
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.AbstractS3ATestBase;
+import org.apache.hadoop.fs.s3a.audit.mapreduce.S3AAuditLogMergerAndParser;
+
+/**
+ * This will implement different tests on S3AAuditLogMergerAndParser class.
+ */
+public class TestS3AAuditLogMergerAndParser extends AbstractS3ATestBase {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(TestS3AAuditLogMergerAndParser.class);
+
+  /**
+   * A real log entry.
+   * This is derived from a real log entry on a test run.
+   * If this needs to be updated, please do it from a real log.
+   * Splitting this up across lines has a tendency to break things, so
+   * be careful making changes.
+   */
+  static final String SAMPLE_LOG_ENTRY =
+  "183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a400"
+  + " bucket-london"
+  + " [13/May/2021:11:26:06 +]"
+  + " 109.157.171.174"
+  + " arn:aws:iam::152813717700:user/dev"
+  + " M7ZB7C4RTKXJKTM9"
+  + " REST.PUT.OBJECT"
+  + " fork-0001/test/testParseBrokenCSVFile"
+  + " \"PUT /fork-0001/test/testParseBrokenCSVFile HTTP/1.1\""
+  + " 200"
+  + " -"
+  + " -"
+  + " 794"
+  + " 55"
+  + " 17"
+  + " \"https://audit.example.org/hadoop/1/op_create/";
+  + "e8ede3c7-8506-4a43-8268-fe8fcbb510a4-0278/"
+  + "?op=op_create"
+  + "&p1=fork-0001/test/testParseBrokenCSVFile"
+  + "&pr=alice"
+  + "&ps=2eac5a04-2153-48db-896a-09bc9a2fd132"
+  + "&id=e8ede3c7-8506-4a43-8268-fe8fcbb510a4-0278&t0=154"
+  + "&fs=e8ede3c7-8506-4a43-8268-fe8fcbb510a4&t1=156&"
+  + "ts=1620905165700\""
+  + " \"Hadoop 3.4.0-SNAPSHOT, java/1.8.0_282 vendor/AdoptOpenJDK\""
+  + " -"
+  + " TrIqtEYGWAwvu0h1N9WJKyoqM0TyHUaY+ZZBwP2yNf2qQp1Z/0="
+  + " SigV4"
+  + " ECDHE-RSA-AES128-GCM-SHA256"
+  + " AuthHeader"
+  + " bucket-london.s3.eu-west-2.amazonaws.com"
+  + " TLSv1.2" + "\n";
+
+  static final String SAMPLE_LOG_ENTRY_1 =
+  "01234567890123456789"
+  + " bucket-london1"
+  + " [13/May/2021:11:26:06 +]"
+  + " 109.157.171.174"
+  + " arn:aws:iam::152813717700:user/dev"
+  + " M7ZB7C4RTKXJKTM9"
+  + " REST.PUT.OBJECT"
+  + " fork-0001/test/testParseBrokenCSVFile"
+  + " \"PUT /fork-0001/test/testParseBrokenCSVFile HTTP/1.1\""
+  + " 200"
+  + " -"
+  + " -"
+  

Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-06 Thread via GitHub


steveloughran commented on code in PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#discussion_r1383702079


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientTestUtil.java:
##
@@ -0,0 +1,147 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.assertj.core.api.Assertions;
+import org.mockito.Mockito;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+
+import org.apache.hadoop.fs.azurebfs.utils.TracingContext;
+import org.apache.hadoop.util.functional.FunctionRaisingIOE;
+
+import static java.net.HttpURLConnection.HTTP_OK;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.services.AuthType.OAuth;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.eq;
+import static org.mockito.ArgumentMatchers.nullable;
+
+/**
+ * Utility class to help defining mock behavior on AbfsClient and 
AbfsRestOperation
+ * objects which are protected inside services package.
+ */
+public final class AbfsClientTestUtil {
+
+  private AbfsClientTestUtil() {
+
+  }
+
+  public static void setMockAbfsRestOperationForListPathOperation(
+  final AbfsClient spiedClient,
+  FunctionRaisingIOE 
functionRaisingIOE)
+  throws Exception {
+ExponentialRetryPolicy retryPolicy = 
Mockito.mock(ExponentialRetryPolicy.class);
+AbfsHttpOperation httpOperation = Mockito.mock(AbfsHttpOperation.class);
+AbfsRestOperation abfsRestOperation = Mockito.spy(new AbfsRestOperation(
+AbfsRestOperationType.ListPaths,
+spiedClient,
+HTTP_METHOD_GET,
+null,
+new ArrayList<>()
+));
+
+Mockito.doReturn(abfsRestOperation).when(spiedClient).getAbfsRestOperation(
+eq(AbfsRestOperationType.ListPaths), any(), any(), any());
+
+addMockBehaviourToAbfsClient(spiedClient, retryPolicy);
+addMockBehaviourToRestOpAndHttpOp(abfsRestOperation, httpOperation);
+
+functionRaisingIOE.apply(httpOperation);
+  }
+
+  public static void addMockBehaviourToRestOpAndHttpOp(final AbfsRestOperation 
abfsRestOperation,

Review Comment:
   The title isn't descriptive enough: what mock behaviour? Add some javadocs 
at least.



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java:
##
@@ -62,34 +80,101 @@ public ITestAzureBlobFileSystemListStatus() throws 
Exception {
   public void testListPath() throws Exception {
 Configuration config = new Configuration(this.getRawConfiguration());
 config.set(AZURE_LIST_MAX_RESULTS, "5000");
-final AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem
-.newInstance(getFileSystem().getUri(), config);
-final List> tasks = new ArrayList<>();
-
-ExecutorService es = Executors.newFixedThreadPool(10);
-for (int i = 0; i < TEST_FILES_NUMBER; i++) {
-  final Path fileName = new Path("/test" + i);
-  Callable callable = new Callable() {
-@Override
-public Void call() throws Exception {
-  touch(fileName);
-  return null;
-}
-  };
-
-  tasks.add(es.submit(callable));
-}
-
-for (Future task : tasks) {
-  task.get();
+try (final AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem
+.newInstance(getFileSystem().getUri(), config)) {
+  final List> tasks = new ArrayList<>();
+
+  ExecutorService es = Executors.newFixedThreadPool(10);
+  for (int i = 0; i < TEST_FILES_NUMBER; i++) {
+final Path fileName = new Path("/test" + i);
+Callable callable = new Callable() {
+  @Override
+  public Void call() throws Exception {
+touch(fileName);
+return null;
+  }
+};
+
+tasks.add(es.submit(callable));
+  }
+
+  for (Future task : task

[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783302#comment-17783302
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

steveloughran commented on code in PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#discussion_r1383702079


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/AbfsClientTestUtil.java:
##
@@ -0,0 +1,147 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+import java.net.HttpURLConnection;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.locks.ReentrantLock;
+
+import org.assertj.core.api.Assertions;
+import org.mockito.Mockito;
+import org.mockito.invocation.InvocationOnMock;
+import org.mockito.stubbing.Answer;
+
+import org.apache.hadoop.fs.azurebfs.utils.TracingContext;
+import org.apache.hadoop.util.functional.FunctionRaisingIOE;
+
+import static java.net.HttpURLConnection.HTTP_OK;
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_METHOD_GET;
+import static org.apache.hadoop.fs.azurebfs.services.AuthType.OAuth;
+import static org.mockito.ArgumentMatchers.any;
+import static org.mockito.ArgumentMatchers.eq;
+import static org.mockito.ArgumentMatchers.nullable;
+
+/**
+ * Utility class to help defining mock behavior on AbfsClient and 
AbfsRestOperation
+ * objects which are protected inside services package.
+ */
+public final class AbfsClientTestUtil {
+
+  private AbfsClientTestUtil() {
+
+  }
+
+  public static void setMockAbfsRestOperationForListPathOperation(
+  final AbfsClient spiedClient,
+  FunctionRaisingIOE 
functionRaisingIOE)
+  throws Exception {
+ExponentialRetryPolicy retryPolicy = 
Mockito.mock(ExponentialRetryPolicy.class);
+AbfsHttpOperation httpOperation = Mockito.mock(AbfsHttpOperation.class);
+AbfsRestOperation abfsRestOperation = Mockito.spy(new AbfsRestOperation(
+AbfsRestOperationType.ListPaths,
+spiedClient,
+HTTP_METHOD_GET,
+null,
+new ArrayList<>()
+));
+
+Mockito.doReturn(abfsRestOperation).when(spiedClient).getAbfsRestOperation(
+eq(AbfsRestOperationType.ListPaths), any(), any(), any());
+
+addMockBehaviourToAbfsClient(spiedClient, retryPolicy);
+addMockBehaviourToRestOpAndHttpOp(abfsRestOperation, httpOperation);
+
+functionRaisingIOE.apply(httpOperation);
+  }
+
+  public static void addMockBehaviourToRestOpAndHttpOp(final AbfsRestOperation 
abfsRestOperation,

Review Comment:
   The title isn't descriptive enough: what mock behaviour? Add some javadocs 
at least.



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemListStatus.java:
##
@@ -62,34 +80,101 @@ public ITestAzureBlobFileSystemListStatus() throws 
Exception {
   public void testListPath() throws Exception {
 Configuration config = new Configuration(this.getRawConfiguration());
 config.set(AZURE_LIST_MAX_RESULTS, "5000");
-final AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem
-.newInstance(getFileSystem().getUri(), config);
-final List> tasks = new ArrayList<>();
-
-ExecutorService es = Executors.newFixedThreadPool(10);
-for (int i = 0; i < TEST_FILES_NUMBER; i++) {
-  final Path fileName = new Path("/test" + i);
-  Callable callable = new Callable() {
-@Override
-public Void call() throws Exception {
-  touch(fileName);
-  return null;
-}
-  };
-
-  tasks.add(es.submit(callable));
-}
-
-for (Future task : tasks) {
-  task.get();
+try (final AzureBlobFileSystem fs = (AzureBlobFileSystem) FileSystem
+.newInstance(getFileSystem().getUri(), config)) {
+  final List> tasks = new ArrayList<>();
+
+  ExecutorService es = Executors.newFixedThreadPool(10);
+  for (int i = 0; i < TEST_FILES_NUMBER; i++) {
+final Path fileName = new Path("/test" + i);
+Callable ca

[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783297#comment-17783297
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

hadoop-yetus commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1795580186

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 25s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 4 unchanged - 0 
fixed = 5 total (was 4)  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 53s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  85m 38s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6019 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 827c40174939 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 25cc6d9bc5f4b96757b71f77078f27c68837b26b |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/3/testReport/ |
   | Max. process+thread count | 557 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/3/

Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-06 Thread via GitHub


hadoop-yetus commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1795580186

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 25s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 4 unchanged - 0 
fixed = 5 total (was 4)  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 53s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  85m 38s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6019 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 827c40174939 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 25cc6d9bc5f4b96757b71f77078f27c68837b26b |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/3/testReport/ |
   | Max. process+thread count | 557 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6019/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To re

[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783291#comment-17783291
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

steveloughran commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1795548241

   Reviewing.
   @anujmodi2021 can you avoid doing a rebase/forced push when we are in those 
final review stages? As long as there are no merge problems, it helps reviewers, 
as we can easily review the changes since the last review. Thanks.




> ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations
> -
>
> Key: HADOOP-18872
> URL: https://issues.apache.org/jira/browse/HADOOP-18872
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Anmol Asrani
>Assignee: Anuj Modi
>Priority: Major
>  Labels: Bug, pull-request-available
>
> A bug was identified where the retry count in the client correlation id was 
> wrongly reported for sub-sequential and parallel operations triggered by a 
> single file system call. This was due to reusing the same tracing context for 
> all such calls.
> We create a new tracing context as soon as an HDFS call comes in, and we keep 
> passing that same tracing context to all the client calls it triggers.
> For instance, when we get a createFile call, we first make metadata 
> operations. If those metadata operations succeeded after a few 
> retries, the tracing context will carry that retry count. When the 
> actual create call is then made, the same retry count will be used to construct 
> the headers (clientCorrelationId). Although the create operation never failed, 
> we will still see the retry count from the previous request.
> The fix is to use a new tracing context object for each network call made. 
> All the sub-sequential and parallel operations will share the same primary request 
> id to correlate them, yet each will track its own retry count.
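
A hypothetical sketch of the approach described above; the class and field names 
are illustrative stand-ins, not the actual ABFS TracingContext API:

    import java.util.UUID;
    import java.util.concurrent.atomic.AtomicInteger;

    class RetryTraceSketch {
      /** One instance per network call; retries are counted per call. */
      static final class CallTrace {
        final String primaryRequestId;                       // shared across the operation
        final AtomicInteger retryCount = new AtomicInteger(0); // this call's retries only

        CallTrace(String primaryRequestId) {
          this.primaryRequestId = primaryRequestId;
        }

        String correlationHeader() {
          // the header reports only this call's retry count
          return primaryRequestId + ":" + retryCount.get();
        }
      }

      public static void main(String[] args) {
        // one primary request id for the whole createFile() operation...
        String primaryRequestId = UUID.randomUUID().toString();

        // ...but a fresh trace (and retry counter) for each network call
        CallTrace getStatus = new CallTrace(primaryRequestId);
        getStatus.retryCount.incrementAndGet();             // metadata call retried once
        CallTrace create = new CallTrace(primaryRequestId); // create call starts at 0

        System.out.println(getStatus.correlationHeader());  // ends with :1
        System.out.println(create.correlationHeader());     // ends with :0, not inherited
      }
    }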



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-06 Thread via GitHub


steveloughran commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1795548241

   Reviewing.
   @anujmodi2021 can you avoid doing a rebase/forced push when we are in those 
final review stages? As long as there are no merge problems, it helps reviewers, 
as we can easily review the changes since the last review. Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. [hadoop]

2023-11-06 Thread via GitHub


hadoop-yetus commented on PR #5881:
URL: https://github.com/apache/hadoop/pull/5881#issuecomment-1795509471

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 13 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m 32s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 14s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/19/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 3 new + 6 unchanged - 0 
fixed = 9 total (was 6)  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 56s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  87m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5881 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 403f119f 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4dbfb2c8954fce305be00b23591a6cd1af98e88b |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/19/testReport/ |
   | Max. process+thread count | 554 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/19/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   |

[jira] [Commented] (HADOOP-18874) ABFS: Adding Server returned request id in Exception method thrown to caller.

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783279#comment-17783279
 ] 

ASF GitHub Bot commented on HADOOP-18874:
-

anujmodi2021 commented on PR #6004:
URL: https://github.com/apache/hadoop/pull/6004#issuecomment-1795467672

   Recent Test Run
   
   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testUpdateDeepDirectoryStructureToRemote:259->AbstractContractDistCpTest.distCpUpdateDeepDirectoryStructure:334->AbstractContractDistCpTest.assertCounterInRange:294->Assert.assertTrue:42->Assert.fail:89
 Files Copied value 2 above maximum 1
   [INFO] 
   [ERROR] Tests run: 339, Failures: 1, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 11
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 591, Failures: 0, Errors: 0, Skipped: 274
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 586, Failures: 0, Errors: 0, Skipped: 52
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   Time taken: 27 mins 20 secs.
   




> ABFS: Adding Server returned request id in Exception method thrown to caller.
> -
>
> Key: HADOOP-18874
> URL: https://issues.apache.org/jira/browse/HADOOP-18874
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Each request made to the Azure server has a unique ActivityId (rid) which is 
> returned in the response whether the request succeeds or fails.
> When an HDFS call fails due to an error from the Azure service, an 
> ABFSRestOperationException is thrown to the caller. This task is to add the 
> server-returned activity id (rid) to the exception message, which can be used 
> to investigate the failure on the service side.
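
For illustration only, here is a minimal Java sketch of the message shape once an "rId" field is appended, together with the kind of assertion discussed in the review comments in this thread. The field order and the sample values are assumptions, not the actual AbfsRestOperationException implementation.

```java
// Illustrative sketch only -- not the actual AbfsRestOperationException code.
// It shows an error message with the server-returned ActivityId ("rId") appended
// as an extra comma-separated field, and how a test can index into the fields.
public class RidMessageSketch {
  public static void main(String[] args) {
    String errorMessage = "Operation failed: \"The specified path does not exist.\""
        + ", 404"
        + ", HEAD"
        + ", https://account.dfs.core.windows.net/container/path"
        + ", rId: 0123abcd-0000-0000-0000-000000000000";  // assumed sample ActivityId
    String[] errorFields = errorMessage.split(",");
    // Mirrors the new assertion style: the fifth field carries the rId.
    System.out.println(errorFields[4].trim().startsWith("rId:"));  // prints: true
  }
}
```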



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18874: [ABFS] Adding Server returned request id in Exception Message thrown to caller. [hadoop]

2023-11-06 Thread via GitHub


anujmodi2021 commented on PR #6004:
URL: https://github.com/apache/hadoop/pull/6004#issuecomment-1795467672

   Recent Test Run
   
   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testUpdateDeepDirectoryStructureToRemote:259->AbstractContractDistCpTest.distCpUpdateDeepDirectoryStructure:334->AbstractContractDistCpTest.assertCounterInRange:294->Assert.assertTrue:42->Assert.fail:89
 Files Copied value 2 above maximum 1
   [INFO] 
   [ERROR] Tests run: 339, Failures: 1, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 11
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 591, Failures: 0, Errors: 0, Skipped: 274
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 141, Failures: 0, Errors: 0, Skipped: 5
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 586, Failures: 0, Errors: 0, Skipped: 52
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   Time taken: 27 mins 20 secs.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18874) ABFS: Adding Server returned request id in Exception method thrown to caller.

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783267#comment-17783267
 ] 

ASF GitHub Bot commented on HADOOP-18874:
-

anujmodi2021 commented on PR #6004:
URL: https://github.com/apache/hadoop/pull/6004#issuecomment-1795295793

   Thanks for the review @steveloughran 
   Added all the assertions as per your advice.
   Checked for any reference not getting closed.




> ABFS: Adding Server returned request id in Exception method thrown to caller.
> -
>
> Key: HADOOP-18874
> URL: https://issues.apache.org/jira/browse/HADOOP-18874
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Each request made to the Azure server has a unique ActivityId (rid) which is 
> returned in the response whether the request succeeds or fails.
> When an HDFS call fails due to an error from the Azure service, an 
> ABFSRestOperationException is thrown to the caller. This task is to add the 
> server-returned activity id (rid) to the exception message, which can be used 
> to investigate the failure on the service side.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18874) ABFS: Adding Server returned request id in Exception method thrown to caller.

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783266#comment-17783266
 ] 

ASF GitHub Bot commented on HADOOP-18874:
-

anujmodi2021 commented on code in PR #6004:
URL: https://github.com/apache/hadoop/pull/6004#discussion_r1383581822


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsRestOperationException.java:
##
@@ -76,16 +78,17 @@ public void testAbfsRestOperationExceptionFormat() throws 
IOException {
   String[] errorFields = errorMessage.split(",");
   Assertions.assertThat(errorFields)
   .describedAs("fields in exception of %s", ex)
-  .hasSize(6);
+  .hasSize(7);
   // Check status message, status code, HTTP Request Type and URL.
   Assert.assertEquals("Operation failed: \"The specified path does not 
exist.\"", errorFields[0].trim());
   Assert.assertEquals("404", errorFields[1].trim());
   Assert.assertEquals("GET", errorFields[2].trim());
   Assert.assertTrue(errorFields[3].trim().startsWith("http"));
+  Assert.assertTrue(errorFields[4].trim().startsWith("rId:"));

Review Comment:
   Taken





> ABFS: Adding Server returned request id in Exception method thrown to caller.
> -
>
> Key: HADOOP-18874
> URL: https://issues.apache.org/jira/browse/HADOOP-18874
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Each request made to the Azure server has a unique ActivityId (rid) which is 
> returned in the response whether the request succeeds or fails.
> When an HDFS call fails due to an error from the Azure service, an 
> ABFSRestOperationException is thrown to the caller. This task is to add the 
> server-returned activity id (rid) to the exception message, which can be used 
> to investigate the failure on the service side.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18874: [ABFS] Adding Server returned request id in Exception Message thrown to caller. [hadoop]

2023-11-06 Thread via GitHub


anujmodi2021 commented on PR #6004:
URL: https://github.com/apache/hadoop/pull/6004#issuecomment-1795295793

   Thanks for the review @steveloughran 
   Added all the assertions as per your advice.
   Checked for any reference not getting closed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18874: [ABFS] Adding Server returned request id in Exception Message thrown to caller. [hadoop]

2023-11-06 Thread via GitHub


anujmodi2021 commented on code in PR #6004:
URL: https://github.com/apache/hadoop/pull/6004#discussion_r1383581475


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsRestOperationException.java:
##
@@ -60,12 +60,14 @@ public void testAbfsRestOperationExceptionFormat() throws 
IOException {
   String errorMessage = ex.getLocalizedMessage();
   String[] errorFields = errorMessage.split(",");
 
-  Assert.assertEquals(4, errorFields.length);
+  // Expected Fields are: Message, StatusCode, Method, URL, ActivityId(rId)
+  Assert.assertEquals(5, errorFields.length);
   // Check status message, status code, HTTP Request Type and URL.
   Assert.assertEquals("Operation failed: \"The specified path does not 
exist.\"", errorFields[0].trim());
   Assert.assertEquals("404", errorFields[1].trim());
   Assert.assertEquals("HEAD", errorFields[2].trim());
   Assert.assertTrue(errorFields[3].trim().startsWith("http"));
+  Assert.assertTrue(errorFields[4].trim().startsWith("rId:"));

Review Comment:
   Taken



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsRestOperationException.java:
##
@@ -76,16 +78,17 @@ public void testAbfsRestOperationExceptionFormat() throws 
IOException {
   String[] errorFields = errorMessage.split(",");
   Assertions.assertThat(errorFields)
   .describedAs("fields in exception of %s", ex)
-  .hasSize(6);
+  .hasSize(7);
   // Check status message, status code, HTTP Request Type and URL.
   Assert.assertEquals("Operation failed: \"The specified path does not 
exist.\"", errorFields[0].trim());
   Assert.assertEquals("404", errorFields[1].trim());
   Assert.assertEquals("GET", errorFields[2].trim());
   Assert.assertTrue(errorFields[3].trim().startsWith("http"));
+  Assert.assertTrue(errorFields[4].trim().startsWith("rId:"));
   // Check storage error code and storage error message.
-  Assert.assertEquals("PathNotFound", errorFields[4].trim());
-  Assert.assertTrue(errorFields[5].contains("RequestId")
-  && errorFields[5].contains("Time"));
+  Assert.assertEquals("PathNotFound", errorFields[5].trim());
+  Assert.assertTrue(errorFields[6].contains("RequestId")

Review Comment:
   Taken



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18874: [ABFS] Adding Server returned request id in Exception Message thrown to caller. [hadoop]

2023-11-06 Thread via GitHub


anujmodi2021 commented on code in PR #6004:
URL: https://github.com/apache/hadoop/pull/6004#discussion_r1383581822


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsRestOperationException.java:
##
@@ -76,16 +78,17 @@ public void testAbfsRestOperationExceptionFormat() throws 
IOException {
   String[] errorFields = errorMessage.split(",");
   Assertions.assertThat(errorFields)
   .describedAs("fields in exception of %s", ex)
-  .hasSize(6);
+  .hasSize(7);
   // Check status message, status code, HTTP Request Type and URL.
   Assert.assertEquals("Operation failed: \"The specified path does not 
exist.\"", errorFields[0].trim());
   Assert.assertEquals("404", errorFields[1].trim());
   Assert.assertEquals("GET", errorFields[2].trim());
   Assert.assertTrue(errorFields[3].trim().startsWith("http"));
+  Assert.assertTrue(errorFields[4].trim().startsWith("rId:"));

Review Comment:
   Taken



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18874) ABFS: Adding Server returned request id in Exception method thrown to caller.

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783265#comment-17783265
 ] 

ASF GitHub Bot commented on HADOOP-18874:
-

anujmodi2021 commented on code in PR #6004:
URL: https://github.com/apache/hadoop/pull/6004#discussion_r1383581475


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsRestOperationException.java:
##
@@ -60,12 +60,14 @@ public void testAbfsRestOperationExceptionFormat() throws 
IOException {
   String errorMessage = ex.getLocalizedMessage();
   String[] errorFields = errorMessage.split(",");
 
-  Assert.assertEquals(4, errorFields.length);
+  // Expected Fields are: Message, StatusCode, Method, URL, ActivityId(rId)
+  Assert.assertEquals(5, errorFields.length);
   // Check status message, status code, HTTP Request Type and URL.
   Assert.assertEquals("Operation failed: \"The specified path does not 
exist.\"", errorFields[0].trim());
   Assert.assertEquals("404", errorFields[1].trim());
   Assert.assertEquals("HEAD", errorFields[2].trim());
   Assert.assertTrue(errorFields[3].trim().startsWith("http"));
+  Assert.assertTrue(errorFields[4].trim().startsWith("rId:"));

Review Comment:
   Taken



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsRestOperationException.java:
##
@@ -76,16 +78,17 @@ public void testAbfsRestOperationExceptionFormat() throws 
IOException {
   String[] errorFields = errorMessage.split(",");
   Assertions.assertThat(errorFields)
   .describedAs("fields in exception of %s", ex)
-  .hasSize(6);
+  .hasSize(7);
   // Check status message, status code, HTTP Request Type and URL.
   Assert.assertEquals("Operation failed: \"The specified path does not 
exist.\"", errorFields[0].trim());
   Assert.assertEquals("404", errorFields[1].trim());
   Assert.assertEquals("GET", errorFields[2].trim());
   Assert.assertTrue(errorFields[3].trim().startsWith("http"));
+  Assert.assertTrue(errorFields[4].trim().startsWith("rId:"));
   // Check storage error code and storage error message.
-  Assert.assertEquals("PathNotFound", errorFields[4].trim());
-  Assert.assertTrue(errorFields[5].contains("RequestId")
-  && errorFields[5].contains("Time"));
+  Assert.assertEquals("PathNotFound", errorFields[5].trim());
+  Assert.assertTrue(errorFields[6].contains("RequestId")

Review Comment:
   Taken





> ABFS: Adding Server returned request id in Exception method thrown to caller.
> -
>
> Key: HADOOP-18874
> URL: https://issues.apache.org/jira/browse/HADOOP-18874
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Each request made to the Azure server has a unique ActivityId (rid) which is 
> returned in the response whether the request succeeds or fails.
> When an HDFS call fails due to an error from the Azure service, an 
> ABFSRestOperationException is thrown to the caller. This task is to add the 
> server-returned activity id (rid) to the exception message, which can be used 
> to investigate the failure on the service side.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11611. Remove json-io to 4.14.1 due to CVE-2023-34610 [hadoop]

2023-11-06 Thread via GitHub


hadoop-yetus commented on PR #6257:
URL: https://github.com/apache/hadoop/pull/6257#issuecomment-1795280112

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  docker  |   9m 29s |  |  Docker failed to build run-specific 
yetus/hadoop:tp-11834}.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/6257 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6257/1/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18925) S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable CopyFromLocalOperation

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783263#comment-17783263
 ] 

ASF GitHub Bot commented on HADOOP-18925:
-

steveloughran merged PR #6163:
URL: https://github.com/apache/hadoop/pull/6163




> S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable 
> CopyFromLocalOperation
> -
>
> Key: HADOOP-18925
> URL: https://issues.apache.org/jira/browse/HADOOP-18925
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> reported failure of CopyFromLocalOperation.getFinalPath() during job 
> submission with s3a declared as cluster fs.
> add an emergency option to disable this optimised uploader and revert to the 
> superclass implementation
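
As a hedged usage sketch (not taken from the patch itself): the property name below follows the PR title in this thread, while the Jira title uses a shorter spelling; treating it as enabled by default is an assumption.

```java
import org.apache.hadoop.conf.Configuration;

public class DisableOptimizedCopyFromLocal {
  public static Configuration withFallbackCopy() {
    Configuration conf = new Configuration();
    // Property name per the PR title; setting it to false is intended to revert
    // copyFromLocal to the generic FileSystem implementation (assumption).
    conf.setBoolean("fs.s3a.optimized.copy.from.local.enabled", false);
    return conf;
  }
}
```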



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18925) S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable CopyFromLocalOperation

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783262#comment-17783262
 ] 

ASF GitHub Bot commented on HADOOP-18925:
-

steveloughran commented on PR #6163:
URL: https://github.com/apache/hadoop/pull/6163#issuecomment-1795212288

   thanks. I'll backport it as needed. 




> S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable 
> CopyFromLocalOperation
> -
>
> Key: HADOOP-18925
> URL: https://issues.apache.org/jira/browse/HADOOP-18925
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> reported failure of CopyFromLocalOperation.getFinalPath() during job 
> submission with s3a declared as cluster fs.
> add an emergency option to disable this optimised uploader and revert to the 
> superclass implementation



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18925. S3A: option "fs.s3a.optimized.copy.from.local.enabled" to control CopyFromLocalOperation [hadoop]

2023-11-06 Thread via GitHub


steveloughran merged PR #6163:
URL: https://github.com/apache/hadoop/pull/6163


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18925. S3A: option "fs.s3a.optimized.copy.from.local.enabled" to control CopyFromLocalOperation [hadoop]

2023-11-06 Thread via GitHub


steveloughran commented on PR #6163:
URL: https://github.com/apache/hadoop/pull/6163#issuecomment-1795212288

   thanks. I'll backport it as needed. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783261#comment-17783261
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

anujmodi2021 commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1795209651

   Thanks for the review @steveloughran 
   Added fs.close() wherever applicable or enclosed FS creation inside a try.
   




> ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations
> -
>
> Key: HADOOP-18872
> URL: https://issues.apache.org/jira/browse/HADOOP-18872
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Anmol Asrani
>Assignee: Anuj Modi
>Priority: Major
>  Labels: Bug, pull-request-available
>
> There was a bug identified where the retry count in the client correlation id 
> was wrongly reported for sub-sequential and parallel operations triggered by a 
> single file system call. This was due to reusing the same tracing context for 
> all such calls.
> We create a new tracing context as soon as an HDFS call comes in, and we keep 
> passing that same TC to all the client calls.
> For instance, when we get a createFile call, we first call metadata 
> operations. If those metadata operations succeeded only after a few retries, 
> the tracing context will carry that retry count. Now, when the actual call for 
> create is made, the same retry count will be used to construct the headers 
> (clientCorrelationId). Although the create operation never failed, we will 
> still see the retry count from the previous request.
> The fix is to use a new tracing context object for each network call made. 
> All the sub-sequential and parallel operations will have the same primary 
> request Id to correlate them, yet each will track its own retry count.
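
A hypothetical sketch of the pattern described above; the class and member names are illustrative and do not match the real hadoop-azure TracingContext API.

```java
// Sketch only: each network call gets its own context (with its own retry counter)
// while all calls of one filesystem operation share a primaryRequestId.
public class TracingSketch {
  static final class Ctx {
    final String primaryRequestId;  // shared by all calls of one filesystem operation
    int retryCount;                 // retries of this single network call only
    Ctx(String primaryRequestId) { this.primaryRequestId = primaryRequestId; }
    Ctx(Ctx other) { this(other.primaryRequestId); }  // keep correlation id, reset retries
  }

  void createFile(Ctx fsCallCtx) {
    // Buggy pattern: pass fsCallCtx to every network call, so retries from the
    // metadata probe leak into the headers of the later create call.
    // Fixed pattern: clone the context per network call, as below.
    Ctx probeCtx = new Ctx(fsCallCtx);   // metadata probe keeps its retries local
    Ctx createCtx = new Ctx(fsCallCtx);  // create starts again from retryCount == 0
    // getPathStatus(probeCtx); createPath(createCtx);  -- hypothetical calls
  }
}
```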



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-06 Thread via GitHub


anujmodi2021 commented on code in PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#discussion_r1383567232


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -82,6 +82,11 @@ public class AbfsRestOperation {
*/
   private String failureReason;
 
+  /**
+   * This variable stores the tracing context used for last Rest Operation

Review Comment:
   Taken



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-06 Thread via GitHub


anujmodi2021 commented on PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#issuecomment-1795209651

   Thanks for the review @steveloughran 
   Added fs.close() wherever applicable or enclosed FS creation inside a try.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783260#comment-17783260
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

anujmodi2021 commented on code in PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#discussion_r1383567232


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsRestOperation.java:
##
@@ -82,6 +82,11 @@ public class AbfsRestOperation {
*/
   private String failureReason;
 
+  /**
+   * This variable stores the tracing context used for last Rest Operation

Review Comment:
   Taken





> ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations
> -
>
> Key: HADOOP-18872
> URL: https://issues.apache.org/jira/browse/HADOOP-18872
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Anmol Asrani
>Assignee: Anuj Modi
>Priority: Major
>  Labels: Bug, pull-request-available
>
> There was a bug identified where the retry count in the client correlation id 
> was wrongly reported for sub-sequential and parallel operations triggered by a 
> single file system call. This was due to reusing the same tracing context for 
> all such calls.
> We create a new tracing context as soon as an HDFS call comes in, and we keep 
> passing that same TC to all the client calls.
> For instance, when we get a createFile call, we first call metadata 
> operations. If those metadata operations succeeded only after a few retries, 
> the tracing context will carry that retry count. Now, when the actual call for 
> create is made, the same retry count will be used to construct the headers 
> (clientCorrelationId). Although the create operation never failed, we will 
> still see the retry count from the previous request.
> The fix is to use a new tracing context object for each network call made. 
> All the sub-sequential and parallel operations will have the same primary 
> request Id to correlate them, yet each will track its own retry count.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18872) ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783258#comment-17783258
 ] 

ASF GitHub Bot commented on HADOOP-18872:
-

anujmodi2021 commented on code in PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#discussion_r1383559946


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelete.java:
##
@@ -283,4 +288,24 @@ public void testDeleteIdempotencyTriggerHttp404() throws 
Exception {
 mockStore.delete(new Path("/NonExistingPath"), false, 
getTestTracingContext(fs, false));
   }
 
+  @Test
+  public void deleteBlobDirParallelThreadToDeleteOnDifferentTracingContext()
+  throws Exception {
+Configuration configuration = getRawConfiguration();
+AzureBlobFileSystem fs = Mockito.spy(
+(AzureBlobFileSystem) FileSystem.newInstance(configuration));
+AzureBlobFileSystemStore spiedStore = Mockito.spy(fs.getAbfsStore());
+AbfsClient spiedClient = Mockito.spy(fs.getAbfsClient());
+
+Mockito.doReturn(spiedStore).when(fs).getAbfsStore();
+spiedStore.setClient(spiedClient);
+
+fs.mkdirs(new Path("/testDir"));
+fs.create(new Path("/testDir/file1"));

Review Comment:
   Added fs.close()





> ABFS: Misreporting Retry Count for Sub-sequential and Parallel Operations
> -
>
> Key: HADOOP-18872
> URL: https://issues.apache.org/jira/browse/HADOOP-18872
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Anmol Asrani
>Assignee: Anuj Modi
>Priority: Major
>  Labels: Bug, pull-request-available
>
> There was a bug identified where the retry count in the client correlation id 
> was wrongly reported for sub-sequential and parallel operations triggered by a 
> single file system call. This was due to reusing the same tracing context for 
> all such calls.
> We create a new tracing context as soon as an HDFS call comes in, and we keep 
> passing that same TC to all the client calls.
> For instance, when we get a createFile call, we first call metadata 
> operations. If those metadata operations succeeded only after a few retries, 
> the tracing context will carry that retry count. Now, when the actual call for 
> create is made, the same retry count will be used to construct the headers 
> (clientCorrelationId). Although the create operation never failed, we will 
> still see the retry count from the previous request.
> The fix is to use a new tracing context object for each network call made. 
> All the sub-sequential and parallel operations will have the same primary 
> request Id to correlate them, yet each will track its own retry count.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18872: [ABFS] [BugFix] Misreporting Retry Count for Sub-sequential and Parallel Operations [hadoop]

2023-11-06 Thread via GitHub


anujmodi2021 commented on code in PR #6019:
URL: https://github.com/apache/hadoop/pull/6019#discussion_r1383559946


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemDelete.java:
##
@@ -283,4 +288,24 @@ public void testDeleteIdempotencyTriggerHttp404() throws 
Exception {
 mockStore.delete(new Path("/NonExistingPath"), false, 
getTestTracingContext(fs, false));
   }
 
+  @Test
+  public void deleteBlobDirParallelThreadToDeleteOnDifferentTracingContext()
+  throws Exception {
+Configuration configuration = getRawConfiguration();
+AzureBlobFileSystem fs = Mockito.spy(
+(AzureBlobFileSystem) FileSystem.newInstance(configuration));
+AzureBlobFileSystemStore spiedStore = Mockito.spy(fs.getAbfsStore());
+AbfsClient spiedClient = Mockito.spy(fs.getAbfsClient());
+
+Mockito.doReturn(spiedStore).when(fs).getAbfsStore();
+spiedStore.setClient(spiedClient);
+
+fs.mkdirs(new Path("/testDir"));
+fs.create(new Path("/testDir/file1"));

Review Comment:
   Added fs.close()
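
A minimal sketch of the cleanup pattern referred to here, under the assumption that the test can use try-with-resources for the FileSystem instance:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CloseFsSketch {
  void run(Configuration configuration) throws Exception {
    // The FileSystem is closed automatically even if an assertion in the body fails.
    try (FileSystem fs = FileSystem.newInstance(configuration)) {
      fs.mkdirs(new Path("/testDir"));
      fs.create(new Path("/testDir/file1")).close();  // also close the output stream
    }
  }
}
```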



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] YARN-11611. Remove json-io to 4.14.1 due to CVE-2023-34610 [hadoop]

2023-11-06 Thread via GitHub


brumi1024 opened a new pull request, #6257:
URL: https://github.com/apache/hadoop/pull/6257

   
   
   ### Description of PR
   
   de.ruedigermoeller:fst only imports java-util (and through that json-io) as 
a test dependency, so I think we can safely add an exclusion for it.
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. [hadoop]

2023-11-06 Thread via GitHub


anujmodi2021 commented on PR #5881:
URL: https://github.com/apache/hadoop/pull/5881#issuecomment-1795147719

   Test Results:
   
    AGGREGATED TEST RESULT 
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 9
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 603, Failures: 0, Errors: 0, Skipped: 276
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 44
   
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 137, Failures: 0, Errors: 0, Skipped: 3
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 339, Failures: 0, Errors: 0, Skipped: 41
   
   Time taken: 26 mins 5 secs.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11484. [Federation] Router Supports Yarn Client CLI Cmds. [hadoop]

2023-11-06 Thread via GitHub


slfan1989 commented on PR #6132:
URL: https://github.com/apache/hadoop/pull/6132#issuecomment-1795065263

   @goiri Can you help review this PR again? Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17249. Fix TestDFSUtil.testIsValidName() unit test failure [hadoop]

2023-11-06 Thread via GitHub


GauthamBanasandra commented on PR #6249:
URL: https://github.com/apache/hadoop/pull/6249#issuecomment-1794988248

   Seems like the run was aborted due to a restart - 
   
   ```
   15:18:17  Elapsed:   8m 13s
   15:25:25  cd 
/c/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
   15:25:25  mvn --batch-mode -Ptest-patch -Pdist -Dtar -Pnative-win 
-Dhttps.protocols=TLSv1.2 -Drequire.openssl -Drequire.test.libhadoop 
-Dshell-executable=/c/Git/bin/bash.exe 
-Dopenssl.prefix=/c/vcpkg/installed/x64-windows 
-Dcmake.prefix.path=/c/vcpkg/installed/x64-windows 
-Dwindows.cmake.toolchain.file=/c/vcpkg/scripts/buildsystems/vcpkg.cmake 
-Dwindows.cmake.build.type=RelWithDebInfo -Dwindows.build.hdfspp.dll=off 
-Dwindows.no.sasl=on -Duse.platformToolsetVersion=v142 
-Dsurefire.rerunFailingTestsCount=2 -Pparallel-tests -Pshelltest 
-Drequire.test.libhadoop -Drequire.snappy -Pdist -Dtar -Pnative-win 
-Dhttps.protocols=TLSv1.2 -Drequire.openssl -Drequire.test.libhadoop 
-Dshell-executable=/c/Git/bin/bash.exe 
-Dopenssl.prefix=/c/vcpkg/installed/x64-windows 
-Dcmake.prefix.path=/c/vcpkg/installed/x64-windows 
-Dwindows.cmake.toolchain.file=/c/vcpkg/scripts/buildsystems/vcpkg.cmake 
-Dwindows.cmake.build.type=RelWithDebInfo -Dwindows.build.hdfspp.dll=off 
-Dwindows.no.sasl=on -Duse.platform
 ToolsetVersion=v142 clean test -fae > 
/c/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 2>&1
   Resuming build at Sun Nov 05 14:23:08 UTC 2023 after Jenkins restart
   Waiting to resume part of Apache Hadoop Nightly Build on Windows 10 #306: 
Finished waiting
   Waiting to resume part of Apache Hadoop Nightly Build on Windows 10 #306: 
Finished waiting
   Waiting to resume part of Apache Hadoop Nightly Build on Windows 10 #306: 
Jenkins is about to shut down
   ```
   
   
   I've started a new run - 
https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-win10-x86_64/310/console


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-15413. add dfs.client.read.striped.datanode.max.attempts to fix read ecfile timeout [hadoop]

2023-11-06 Thread via GitHub


Neilxzn commented on PR #5829:
URL: https://github.com/apache/hadoop/pull/5829#issuecomment-1794890337

   > Please also check the checkstyle and blanks reported by Yetus. Thanks. 
@Neilxzn
   
   Fixed the checkstyle issues and added a unit test. Please review it again. Thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18925) S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable CopyFromLocalOperation

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783221#comment-17783221
 ] 

ASF GitHub Bot commented on HADOOP-18925:
-

steveloughran commented on PR #6163:
URL: https://github.com/apache/hadoop/pull/6163#issuecomment-1794849506

   @ahmarsuhail could i get a quick look at this? thanks




> S3A: add option "fs.s3a.copy.from.local.enabled" to enable/disable 
> CopyFromLocalOperation
> -
>
> Key: HADOOP-18925
> URL: https://issues.apache.org/jira/browse/HADOOP-18925
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.3.6
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> reported failure of CopyFromLocalOperation.getFinalPath() during job 
> submission with s3a declared as cluster fs.
> add an emergency option to disable this optimised uploader and revert to the 
> superclass implementation



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18925. S3A: option "fs.s3a.optimized.copy.from.local.enabled" to control CopyFromLocalOperation [hadoop]

2023-11-06 Thread via GitHub


steveloughran commented on PR #6163:
URL: https://github.com/apache/hadoop/pull/6163#issuecomment-1794849506

   @ahmarsuhail could i get a quick look at this? thanks


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-18883) Expect-100 JDK bug resolution: prevent multiple server calls

2023-11-06 Thread Pranav Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783216#comment-17783216
 ] 

Pranav Saxena edited comment on HADOOP-18883 at 11/6/23 1:16 PM:
-

Hi [~ste...@apache.org].   Requesting you to kindly review the PR please. This 
is PR to prevent day-0 JDK bug around expect-100 in abfs. Would be really 
awesome to get your feedback on this. Thank you so much.


was (Author: pranavsaxena):
Hi [~ste...@apache.org]   Requesting you to kindly review the PR please. This 
is PR to prevent day-0 JDK bug around expect-100 in abfs. Would be really 
awesome to get your feedback on this. Thank you so much.

> Expect-100 JDK bug resolution: prevent multiple server calls
> 
>
> Key: HADOOP-18883
> URL: https://issues.apache.org/jira/browse/HADOOP-18883
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> This is in line with JDK bug: [https://bugs.openjdk.org/browse/JDK-8314978].
>  
> With the current implementation of HttpURLConnection, if the server rejects 
> the “Expect 100-continue” handshake, a ‘java.net.ProtocolException’ is thrown 
> from the 'expect100Continue()' method.
> After the exception is thrown, if we call any other method on the same 
> instance (e.g. getHeaderField() or getHeaderFields()), it will internally call 
> getOutputStream(), which invokes writeRequests(), which makes the actual 
> server call. 
> In AbfsHttpOperation, after sendRequest() we call the processResponse() method 
> from AbfsRestOperation. Even if conn.getOutputStream() fails due to the 
> expect-100 error, we consume the exception and let the code go ahead. So 
> getHeaderField() / getHeaderFields() / getHeaderFieldLong() can be triggered 
> after getOutputStream() has failed, and these invocations will lead to server 
> calls.
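
A minimal sketch of the guard idea described above, with assumed member names; this is not the actual AbfsHttpOperation code.

```java
// Sketch only: remember that the request body was never sent because the server
// rejected "Expect: 100-continue", and avoid any HttpURLConnection accessor that
// would otherwise re-issue the request.
class Expect100Sketch {
  private boolean connectionDisconnectedOnError = false;

  void sendRequest(java.net.HttpURLConnection conn, byte[] body) throws java.io.IOException {
    try (java.io.OutputStream out = conn.getOutputStream()) {
      out.write(body);
    } catch (java.net.ProtocolException e) {
      // Server rejected the 100-continue handshake; swallow and mark the connection.
      connectionDisconnectedOnError = true;
    }
  }

  String safeHeader(java.net.HttpURLConnection conn, String name) {
    // getHeaderField() would internally call getOutputStream()/writeRequests() and
    // make a second server call, so skip it once the handshake has already failed.
    return connectionDisconnectedOnError ? null : conn.getHeaderField(name);
  }
}
```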



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18883) Expect-100 JDK bug resolution: prevent multiple server calls

2023-11-06 Thread Pranav Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783216#comment-17783216
 ] 

Pranav Saxena commented on HADOOP-18883:


Hi [~ste...@apache.org]   Requesting you to kindly review the PR please. This 
is PR to prevent day-0 JDK bug around expect-100 in abfs. Would be really 
awesome to get your feedback on this. Thank you so much.

> Expect-100 JDK bug resolution: prevent multiple server calls
> 
>
> Key: HADOOP-18883
> URL: https://issues.apache.org/jira/browse/HADOOP-18883
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> This is in line with JDK bug: [https://bugs.openjdk.org/browse/JDK-8314978].
>  
> With the current implementation of HttpURLConnection, if the server rejects 
> the “Expect 100-continue” handshake, a ‘java.net.ProtocolException’ is thrown 
> from the 'expect100Continue()' method.
> After the exception is thrown, if we call any other method on the same 
> instance (e.g. getHeaderField() or getHeaderFields()), it will internally call 
> getOutputStream(), which invokes writeRequests(), which makes the actual 
> server call. 
> In AbfsHttpOperation, after sendRequest() we call the processResponse() method 
> from AbfsRestOperation. Even if conn.getOutputStream() fails due to the 
> expect-100 error, we consume the exception and let the code go ahead. So 
> getHeaderField() / getHeaderFields() / getHeaderFieldLong() can be triggered 
> after getOutputStream() has failed, and these invocations will lead to server 
> calls.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18883. [ABFS]: Expect-100 JDK bug resolution: prevent multiple server calls [hadoop]

2023-11-06 Thread via GitHub


saxenapranav commented on PR #6022:
URL: https://github.com/apache/hadoop/pull/6022#issuecomment-1794802932

   Hi @steveloughran @mehakmeet. Requesting you to kindly review the PR 
please. This is a PR to prevent a day-0 JDK bug around expect-100 in abfs. Would be 
really awesome to get your feedback on this. Thank you so much.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18883) Expect-100 JDK bug resolution: prevent multiple server calls

2023-11-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17783215#comment-17783215
 ] 

ASF GitHub Bot commented on HADOOP-18883:
-

saxenapranav commented on PR #6022:
URL: https://github.com/apache/hadoop/pull/6022#issuecomment-1794802932

   Hi @steveloughran @mehakmeet. Requesting you to kindly review the PR 
please. This is a PR to prevent a day-0 JDK bug around expect-100 in abfs. Would be 
really awesome to get your feedback on this. Thank you so much.




> Expect-100 JDK bug resolution: prevent multiple server calls
> 
>
> Key: HADOOP-18883
> URL: https://issues.apache.org/jira/browse/HADOOP-18883
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> This is in line with JDK bug: [https://bugs.openjdk.org/browse/JDK-8314978].
>  
> With the current implementation of HttpURLConnection, if the server rejects 
> the “Expect 100-continue” handshake, a ‘java.net.ProtocolException’ is thrown 
> from the 'expect100Continue()' method.
> After the exception is thrown, if we call any other method on the same 
> instance (e.g. getHeaderField() or getHeaderFields()), it will internally call 
> getOutputStream(), which invokes writeRequests(), which makes the actual 
> server call. 
> In AbfsHttpOperation, after sendRequest() we call the processResponse() method 
> from AbfsRestOperation. Even if conn.getOutputStream() fails due to the 
> expect-100 error, we consume the exception and let the code go ahead. So 
> getHeaderField() / getHeaderFields() / getHeaderFieldLong() can be triggered 
> after getOutputStream() has failed, and these invocations will lead to server 
> calls.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. [hadoop]

2023-11-06 Thread via GitHub


hadoop-yetus commented on PR #5881:
URL: https://github.com/apache/hadoop/pull/5881#issuecomment-1794767195

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 27s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 21s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  20m 45s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/18/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 3 new + 6 unchanged - 0 
fixed = 9 total (was 6)  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m 51s | 
[/patch-unit-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/18/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  86m 13s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.fs.azurebfs.services.TestAbfsRestOperationMockFailures |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/18/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5881 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8a4781e4af6e 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f0634363932ec0df158287fc217936281cce349f |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/18/testReport/

Re: [PR] HADOOP-18954. Filter NaN values from JMX json interface [hadoop]

2023-11-06 Thread via GitHub


hadoop-yetus commented on PR #6229:
URL: https://github.com/apache/hadoop/pull/6229#issuecomment-1794740228

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |  16m 49s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   2m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |  17m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |  16m 54s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6229/4/artifact/out/blanks-eol.txt)
 |  The patch has 4 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   1m 10s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6229/4/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 2 new + 174 
unchanged - 0 fixed = 176 total (was 174)  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   2m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m  2s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 260m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6229/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6229 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux ffb428c5a6fa 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1449778f91ca8ca0ff249fa724848f6be3e101f9 |
   | Default Java | Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6229/4/testReport/ |
   | Max. process+thread count | 1253 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output 

Re: [PR] Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. [hadoop]

2023-11-06 Thread via GitHub


hadoop-yetus commented on PR #5881:
URL: https://github.com/apache/hadoop/pull/5881#issuecomment-1794656946

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 28s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  39m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  20m 44s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 15s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/17/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 11 new + 6 unchanged - 0 
fixed = 17 total (was 6)  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 20s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/17/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 generated 2 new + 15 
unchanged - 0 fixed = 17 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 19s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/17/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt)
 |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05 
with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05 generated 2 new + 15 
unchanged - 0 fixed = 17 total (was 15)  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 57s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  94m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.43 ServerAPI=1.43 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/17/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5881 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 35bf4c03d870 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 
13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux |
   

Re: [PR] Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. [hadoop]

2023-11-06 Thread via GitHub


anujmodi2021 commented on PR #5881:
URL: https://github.com/apache/hadoop/pull/5881#issuecomment-1794640767

   @steveloughran 
   There was a PR from @snvijaya that had been pending for a long time, and this 
PR has a dependency on it.
   Sneha's pending PR: https://github.com/apache/hadoop/pull/3662
   That PR made the connection timeout, along with the read timeout, 
configurable.
   
   My PR needs that functionality because the default connection timeout of 30 
seconds is very high; it should be configurable, with a sensible default value.
   
   We have done some scale workload testing and observed that, when a static 
retry policy is used for connection timeouts, decreasing the default connection 
timeout from 30s to 2s improves performance.
   Also, waiting 30s to establish a connection when something is actually wrong 
with the network doesn't make sense. It's better to fail faster and retry 
faster.
   
   Let me know if you have any thoughts on this default value.
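   
   For context, here is a minimal, self-contained Java sketch of the idea under 
discussion: connection-setup timeouts are retried after a short, fixed 
("static") interval, while other failures keep an exponential backoff. All 
class and method names below are illustrative assumptions only; they are not 
the classes introduced by this PR or shipped in hadoop-azure.
   
   ```java
   // Hypothetical sketch, not the PR's implementation: contrast a fixed-interval
   // ("static") retry policy for connection-setup timeouts with an exponential
   // backoff used for other transient failures (throttling, 5xx, ...).
   public class StaticVsExponentialRetrySketch {
   
     /** Fixed-interval policy: every retry waits the same short delay. */
     static final class StaticRetryPolicy {
       private final int maxRetries;
       private final long intervalMs;
   
       StaticRetryPolicy(int maxRetries, long intervalMs) {
         this.maxRetries = maxRetries;
         this.intervalMs = intervalMs;
       }
   
       boolean shouldRetry(int retryCount) {
         return retryCount < maxRetries;
       }
   
       long getRetryIntervalMs(int retryCount) {
         return intervalMs; // same delay on every attempt
       }
     }
   
     /** Exponential backoff: delay doubles with each retry. */
     static final class ExponentialRetryPolicy {
       private final int maxRetries;
       private final long baseIntervalMs;
   
       ExponentialRetryPolicy(int maxRetries, long baseIntervalMs) {
         this.maxRetries = maxRetries;
         this.baseIntervalMs = baseIntervalMs;
       }
   
       boolean shouldRetry(int retryCount) {
         return retryCount < maxRetries;
       }
   
       long getRetryIntervalMs(int retryCount) {
         return baseIntervalMs << retryCount; // 500 ms, 1 s, 2 s, 4 s, ...
       }
     }
   
     public static void main(String[] args) {
       StaticRetryPolicy connectTimeoutPolicy = new StaticRetryPolicy(3, 1000L);
       ExponentialRetryPolicy defaultPolicy = new ExponentialRetryPolicy(3, 500L);
       for (int attempt = 0; connectTimeoutPolicy.shouldRetry(attempt); attempt++) {
         System.out.printf("retry %d: connect-timeout wait %d ms, default wait %d ms%n",
             attempt,
             connectTimeoutPolicy.getRetryIntervalMs(attempt),
             defaultPolicy.getRetryIntervalMs(attempt));
       }
     }
   }
   ```
   
   Under these assumptions, a 2s connection timeout combined with a short fixed 
retry interval means a transient network blip costs a few seconds before the 
next attempt, instead of a 30s wait per attempt.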


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hadoop-18759: [ABFS][Backoff-Optimization] Have a Static retry policy for connection timeout. [hadoop]

2023-11-06 Thread via GitHub


hadoop-yetus commented on PR #5881:
URL: https://github.com/apache/hadoop/pull/5881#issuecomment-1794472911

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  49m  3s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05  |
   | +1 :green_heart: |  spotbugs  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m 11s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 20s | 
[/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/16/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 22s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/16/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  javac  |   0m 22s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/16/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.  |
   | -1 :x: |  compile  |   0m 20s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/16/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05.  |
   | -1 :x: |  javac  |   0m 20s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/16/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_382-8u382-ga-1~20.04.1-b05.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 14s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/16/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 8 new + 6 unchanged - 0 
fixed = 14 total (was 6)  |
   | -1 :x: |  mvnsite  |   0m 18s | 
[/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/16/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | -1 :x: |  javadoc  |   0m 19s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5881/16/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04.txt)
 |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 
with JDK Ubuntu-11.0.20.1+1-post-Ubuntu-0ubuntu120.04 generated 2 new

Re: [PR] YARN-11608. Fix QueueCapacityVectorInfo NPE when accessible labels config is used. [hadoop]

2023-11-06 Thread via GitHub


brumi1024 merged PR #6250:
URL: https://github.com/apache/hadoop/pull/6250


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org