[jira] [Updated] (HADOOP-17775) Remove JavaScript package from Docker environment

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17775:

Labels: pull-request-available  (was: )

> Remove JavaScript package from Docker environment
> -
>
> Key: HADOOP-17775
> URL: https://issues.apache.org/jira/browse/HADOOP-17775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As described in the [README of 
> yarn-ui|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md],
> the required JavaScript modules are automatically pulled by the 
> frontend-maven-plugin, so we can leverage them for local testing too.
> While both hadoop-yarn-ui and hadoop-yarn-applications-catalog-webapp use 
> Node.js, their Node.js versions do not match, so the JavaScript-related 
> packages installed in the Docker environment are not guaranteed to work.
> * 
> https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml#L170-L212
> * 
> https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml#L264-L290
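
For illustration, the mechanism the description relies on can be sketched as
follows: frontend-maven-plugin downloads a project-pinned Node.js and runs npm
during the Maven build, so nothing Node.js-related needs to be installed in the
Docker image. This is only a sketch; the node version below is an assumed
placeholder, not the value in the linked pom.xml files.

```xml
<plugin>
  <groupId>com.github.eirslett</groupId>
  <artifactId>frontend-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>install node and npm</id>
      <goals>
        <goal>install-node-and-npm</goal>
      </goals>
      <configuration>
        <!-- pinned per module; assumed placeholder version -->
        <nodeVersion>v10.23.1</nodeVersion>
      </configuration>
    </execution>
    <execution>
      <id>npm install</id>
      <goals>
        <goal>npm</goal>
      </goals>
      <configuration>
        <arguments>install</arguments>
      </configuration>
    </execution>
  </executions>
</plugin>
```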



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17775) Remove JavaScript package from Docker environment

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17775?focusedWorklogId=614337&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614337
 ]

ASF GitHub Bot logged work on HADOOP-17775:
---

Author: ASF GitHub Bot
Created on: 24/Jun/21 05:58
Start Date: 24/Jun/21 05:58
Worklog Time Spent: 10m 
  Work Description: iwasakims opened a new pull request #3137:
URL: https://github.com/apache/hadoop/pull/3137


   https://issues.apache.org/jira/browse/HADOOP-17775
   
   As described in the [README of 
yarn-ui](https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md),
 the required JavaScript modules are automatically pulled by the 
frontend-maven-plugin, so we can leverage them for local testing too.
   
   While both hadoop-yarn-ui and hadoop-yarn-applications-catalog-webapp use 
Node.js, their Node.js versions do not match, so the JavaScript-related 
packages installed in the Docker environment are not guaranteed to work.
   
   * 
https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml#L170-L212
   * 
https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml#L264-L290
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614337)
Remaining Estimate: 0h
Time Spent: 10m

> Remove JavaScript package from Docker environment
> -
>
> Key: HADOOP-17775
> URL: https://issues.apache.org/jira/browse/HADOOP-17775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As described in the [README of 
> yarn-ui|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md],
> the required JavaScript modules are automatically pulled by the 
> frontend-maven-plugin, so we can leverage them for local testing too.
> While both hadoop-yarn-ui and hadoop-yarn-applications-catalog-webapp use 
> Node.js, their Node.js versions do not match, so the JavaScript-related 
> packages installed in the Docker environment are not guaranteed to work.
> * 
> https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml#L170-L212
> * 
> https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml#L264-L290






[GitHub] [hadoop] iwasakims opened a new pull request #3137: HADOOP-17775. Remove JavaScript package from Docker environment.

2021-06-23 Thread GitBox


iwasakims opened a new pull request #3137:
URL: https://github.com/apache/hadoop/pull/3137


   https://issues.apache.org/jira/browse/HADOOP-17775
   
   As described in the [README of 
yarn-ui](https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md),
 the required JavaScript modules are automatically pulled by the 
frontend-maven-plugin, so we can leverage them for local testing too.
   
   While both hadoop-yarn-ui and hadoop-yarn-applications-catalog-webapp use 
Node.js, their Node.js versions do not match, so the JavaScript-related 
packages installed in the Docker environment are not guaranteed to work.
   
   * 
https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml#L170-L212
   * 
https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml#L264-L290
   





[jira] [Updated] (HADOOP-17775) Remove JavaScript package from Docker environment

2021-06-23 Thread Masatake Iwasaki (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-17775:
--
Description: 
As described in the [README of 
yarn-ui|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md],
 the required JavaScript modules are automatically pulled by the 
frontend-maven-plugin, so we can leverage them for local testing too.

While both hadoop-yarn-ui and hadoop-yarn-applications-catalog-webapp use 
Node.js, their Node.js versions do not match, so the JavaScript-related 
packages installed in the Docker environment are not guaranteed to work.

* 
https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml#L170-L212
* 
https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml#L264-L290

  was:
As described in the [README of 
yarn-ui|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md],
 the required JavaScript modules are automatically pulled by the 
frontend-maven-plugin, so we can leverage them for local testing too.

While both hadoop-yarn-ui and hadoop-yarn-applications-catalog-webapp use 
Node.js, their Node.js versions do not match, so the JavaScript-related 
packages installed in the Docker environment are not guaranteed to work.

* 
https://github.com/apache/hadoop/blob/538ce9c35403f0c8b595f42e835cc70c91c66621/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml#L170-L212
* 
https://github.com/apache/hadoop/blob/538ce9c35403f0c8b595f42e835cc70c91c66621/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml#L264-L290


> Remove JavaScript package from Docker environment
> -
>
> Key: HADOOP-17775
> URL: https://issues.apache.org/jira/browse/HADOOP-17775
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>
> As described in the [README of 
> yarn-ui|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md],
> the required JavaScript modules are automatically pulled by the 
> frontend-maven-plugin, so we can leverage them for local testing too.
> While both hadoop-yarn-ui and hadoop-yarn-applications-catalog-webapp use 
> Node.js, their Node.js versions do not match, so the JavaScript-related 
> packages installed in the Docker environment are not guaranteed to work.
> * 
> https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml#L170-L212
> * 
> https://github.com/apache/hadoop/blob/fdef2b4ccacb8753aac0f5625505181c9b4dc154/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml#L264-L290






[jira] [Created] (HADOOP-17775) Remove JavaScript package from Docker environment

2021-06-23 Thread Masatake Iwasaki (Jira)
Masatake Iwasaki created HADOOP-17775:
-

 Summary: Remove JavaScript package from Docker environment
 Key: HADOOP-17775
 URL: https://issues.apache.org/jira/browse/HADOOP-17775
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki


As described in the [README of 
yarn-ui|https://github.com/apache/hadoop/blob/trunk/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/README.md],
 the required JavaScript modules are automatically pulled by the 
frontend-maven-plugin, so we can leverage them for local testing too.

While both hadoop-yarn-ui and hadoop-yarn-applications-catalog-webapp use 
Node.js, their Node.js versions do not match, so the JavaScript-related 
packages installed in the Docker environment are not guaranteed to work.

* 
https://github.com/apache/hadoop/blob/538ce9c35403f0c8b595f42e835cc70c91c66621/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui/pom.xml#L170-L212
* 
https://github.com/apache/hadoop/blob/538ce9c35403f0c8b595f42e835cc70c91c66621/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-catalog/hadoop-yarn-applications-catalog-webapp/pom.xml#L264-L290






[jira] [Work logged] (HADOOP-17290) ABFS: Add Identifiers to Client Request Header

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17290?focusedWorklogId=614331&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614331
 ]

ASF GitHub Bot logged work on HADOOP-17290:
---

Author: ASF GitHub Bot
Created on: 24/Jun/21 05:39
Start Date: 24/Jun/21 05:39
Worklog Time Spent: 10m 
  Work Description: anoopsjohn commented on a change in pull request #2520:
URL: https://github.com/apache/hadoop/pull/2520#discussion_r657346332



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
##
@@ -1071,7 +1155,10 @@ private boolean fileSystemExists() throws IOException {
 LOG.debug(
 "AzureBlobFileSystem.fileSystemExists uri: {}", uri);
 try {
-  abfsStore.getFilesystemProperties();
+  TracingContext tracingContext = new TracingContext(clientCorrelationID,
+  fileSystemID, HdfsOperationConstants.GET_FILESTATUS,

Review comment:
   GET_FILESTATUS op?

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
##
@@ -264,6 +266,10 @@
   DefaultValue = DEFAULT_VALUE_UNKNOWN)
   private String clusterType;
 
+  @StringConfigurationValidatorAnnotation(ConfigurationKey = FS_AZURE_CLIENT_CORRELATIONID,
+  DefaultValue = EMPTY_STRING)
+  private String clientCorrelationID;

Review comment:
   clientCorrelationId?  To be consistent with 'userAgentId' etc.?  And the getter as well.

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
##
@@ -335,7 +361,10 @@ public boolean rename(final Path src, final Path dst) throws IOException {
 }
 
 // Non-HNS account need to check dst status on driver side.
-if (!abfsStore.getIsNamespaceEnabled() && dstFileStatus == null) {
+TracingContext tracingContext = new TracingContext(clientCorrelationID,
+fileSystemID, HdfsOperationConstants.RENAME, true, tracingContextFormat,
+listener);
+if (!abfsStore.getIsNamespaceEnabled(tracingContext) && dstFileStatus == null) {

Review comment:
   Within tryGetFileStatus() there is a call to getFileStatus(). We should be using the context created here.
   tryGetFileStatus() is also called by the createNonRecursive API.
   We have to handle these cases.
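
   A self-contained sketch of the pattern this comment asks for (one tracing
   context per public operation, reused by nested helpers) is below; the names
   are invented for illustration and are not the actual ABFS classes:

   ```java
   // Illustrative only: create one context per operation and thread the same
   // instance through nested helpers, so every request made on behalf of one
   // rename() correlates under the same operation.
   final class OpContext {
     final String opName;
     OpContext(String opName) { this.opName = opName; }
   }

   public class ContextThreading {
     static boolean tryGetStatus(String path, OpContext ctx) {
       // the nested helper reuses the caller's context instead of building its own
       System.out.println(ctx.opName + ": HEAD " + path);
       return false;
     }

     static void rename(String src, String dst) {
       OpContext ctx = new OpContext("RENAME"); // one context per operation
       if (!tryGetStatus(dst, ctx)) {
         System.out.println(ctx.opName + ": " + src + " -> " + dst);
       }
     }

     public static void main(String[] args) {
       rename("/a", "/b");
     }
   }
   ```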

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
##
@@ -1049,8 +1130,11 @@ public boolean exists(Path f) throws IOException {
   throws IOException {
 LOG.debug("AzureBlobFileSystem.listStatusIterator path : {}", path);
 if (abfsStore.getAbfsConfiguration().enableAbfsListIterator()) {
+  TracingContext tracingContext = new TracingContext(clientCorrelationID,
+  fileSystemID, HdfsOperationConstants.LISTSTATUS, true,
+  tracingContextFormat, listener);
   AbfsListStatusRemoteIterator abfsLsItr =
-  new AbfsListStatusRemoteIterator(getFileStatus(path), abfsStore);
+  new AbfsListStatusRemoteIterator(getFileStatus(path), abfsStore, tracingContext);

Review comment:
   Here there is a call to getFileStatus(Path), but it is part of the LISTSTATUS op, so we should use the context created above during getFileStatus as well, right?

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##
@@ -451,15 +472,15 @@ private int readInternal(final long position, final byte[] b, final int offset,
   }
 
   // got nothing from read-ahead, do our own read now
-  receivedBytes = readRemote(position, b, offset, length);
+  receivedBytes = readRemote(position, b, offset, length, new TracingContext(tracingContext));
   return receivedBytes;
 } else {
   LOG.debug("read ahead disabled, reading remote");
-  return readRemote(position, b, offset, length);
+  return readRemote(position, b, offset, length, new TracingContext(tracingContext));
 }
   }
 
-  int readRemote(long position, byte[] b, int offset, int length) throws IOException {
+  int readRemote(long position, byte[] b, int offset, int length, TracingContext tracingContext) throws IOException {

Review comment:
   Why pass 'tracingContext' when it is set as an instance member?

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
##
@@ -109,6 +109,12 @@
*  Default value of this config is true. **/
   public static final String FS_AZURE_DISABLE_OUTPUTSTREAM_FLUSH = "fs.azure.disable.outputstream.flush";
   public static final String FS_AZURE_USER_AGENT_PREFIX_KEY = "fs.azure.user.agent.prefix";
+  /**
+   * The client correlation ID provided over config that will be added to
+   * x-ms-client-request-Id header. Defaults to empty string if the length and
+   * character constraints are not satisfied. **/
+  public static final String FS_AZURE_CLIENT_CORRELATIONID = "fs.azure.client.correlationid";
+  public static final String FS_AZURE_TRACINGCONTEXT_FORMAT = "fs.azure.tracingcontext.format";

Review comment:
   This is the tracing header format, right? Would that be a better name?
[GitHub] [hadoop] anoopsjohn commented on a change in pull request #2520: HADOOP-17290. ABFS: Add Identifiers to Client Request Header

2021-06-23 Thread GitBox


anoopsjohn commented on a change in pull request #2520:
URL: https://github.com/apache/hadoop/pull/2520#discussion_r657346332



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
##
@@ -1071,7 +1155,10 @@ private boolean fileSystemExists() throws IOException {
 LOG.debug(
 "AzureBlobFileSystem.fileSystemExists uri: {}", uri);
 try {
-  abfsStore.getFilesystemProperties();
+  TracingContext tracingContext = new TracingContext(clientCorrelationID,
+  fileSystemID, HdfsOperationConstants.GET_FILESTATUS,

Review comment:
   GET_FILESTATUS op?

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
##
@@ -264,6 +266,10 @@
   DefaultValue = DEFAULT_VALUE_UNKNOWN)
   private String clusterType;
 
+  @StringConfigurationValidatorAnnotation(ConfigurationKey = FS_AZURE_CLIENT_CORRELATIONID,
+  DefaultValue = EMPTY_STRING)
+  private String clientCorrelationID;

Review comment:
   clientCorrelationId?  To be consistent with 'userAgentId' etc.?  And the getter as well.

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
##
@@ -335,7 +361,10 @@ public boolean rename(final Path src, final Path dst) throws IOException {
 }
 
 // Non-HNS account need to check dst status on driver side.
-if (!abfsStore.getIsNamespaceEnabled() && dstFileStatus == null) {
+TracingContext tracingContext = new TracingContext(clientCorrelationID,
+fileSystemID, HdfsOperationConstants.RENAME, true, tracingContextFormat,
+listener);
+if (!abfsStore.getIsNamespaceEnabled(tracingContext) && dstFileStatus == null) {

Review comment:
   Within tryGetFileStatus() there is a call to getFileStatus(). We should be using the context created here.
   tryGetFileStatus() is also called by the createNonRecursive API.
   We have to handle these cases.

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java
##
@@ -1049,8 +1130,11 @@ public boolean exists(Path f) throws IOException {
   throws IOException {
 LOG.debug("AzureBlobFileSystem.listStatusIterator path : {}", path);
 if (abfsStore.getAbfsConfiguration().enableAbfsListIterator()) {
+  TracingContext tracingContext = new TracingContext(clientCorrelationID,
+  fileSystemID, HdfsOperationConstants.LISTSTATUS, true,
+  tracingContextFormat, listener);
   AbfsListStatusRemoteIterator abfsLsItr =
-  new AbfsListStatusRemoteIterator(getFileStatus(path), abfsStore);
+  new AbfsListStatusRemoteIterator(getFileStatus(path), abfsStore, tracingContext);

Review comment:
   Here there is a call to getFileStatus(Path), but it is part of the LISTSTATUS op, so we should use the context created above during getFileStatus as well, right?

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsInputStream.java
##
@@ -451,15 +472,15 @@ private int readInternal(final long position, final byte[] b, final int offset,
   }
 
   // got nothing from read-ahead, do our own read now
-  receivedBytes = readRemote(position, b, offset, length);
+  receivedBytes = readRemote(position, b, offset, length, new TracingContext(tracingContext));
   return receivedBytes;
 } else {
   LOG.debug("read ahead disabled, reading remote");
-  return readRemote(position, b, offset, length);
+  return readRemote(position, b, offset, length, new TracingContext(tracingContext));
 }
   }
 
-  int readRemote(long position, byte[] b, int offset, int length) throws IOException {
+  int readRemote(long position, byte[] b, int offset, int length, TracingContext tracingContext) throws IOException {

Review comment:
   Why pass 'tracingContext' when it is set as an instance member?

##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
##
@@ -109,6 +109,12 @@
*  Default value of this config is true. **/
   public static final String FS_AZURE_DISABLE_OUTPUTSTREAM_FLUSH = "fs.azure.disable.outputstream.flush";
   public static final String FS_AZURE_USER_AGENT_PREFIX_KEY = "fs.azure.user.agent.prefix";
+  /**
+   * The client correlation ID provided over config that will be added to
+   * x-ms-client-request-Id header. Defaults to empty string if the length and
+   * character constraints are not satisfied. **/
+  public static final String FS_AZURE_CLIENT_CORRELATIONID = "fs.azure.client.correlationid";
+  public static final String FS_AZURE_TRACINGCONTEXT_FORMAT = "fs.azure.tracingcontext.format";

Review comment:
   This is the tracing header format, right? Would that be a better name?


[GitHub] [hadoop] akshatb1 commented on pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor

2021-06-23 Thread GitBox


akshatb1 commented on pull request #3135:
URL: https://github.com/apache/hadoop/pull/3135#issuecomment-867331271


   @goiri @bibinchundatt Could you kindly help in reviewing this PR? Thanks.





[GitHub] [hadoop] tomscut commented on pull request #3117: HDFS-16076. Avoid using slow DataNodes for reading by sorting locations

2021-06-23 Thread GitBox


tomscut commented on pull request #3117:
URL: https://github.com/apache/hadoop/pull/3117#issuecomment-867285100


   > Merged it. Thanks for your contribution, @tomscut.
   
   Thanks @tasanuma .





[GitHub] [hadoop] tasanuma commented on pull request #3117: HDFS-16076. Avoid using slow DataNodes for reading by sorting locations

2021-06-23 Thread GitBox


tasanuma commented on pull request #3117:
URL: https://github.com/apache/hadoop/pull/3117#issuecomment-867284477


   Merged it. Thanks for your contribution, @tomscut.





[GitHub] [hadoop] tasanuma merged pull request #3117: HDFS-16076. Avoid using slow DataNodes for reading by sorting locations

2021-06-23 Thread GitBox


tasanuma merged pull request #3117:
URL: https://github.com/apache/hadoop/pull/3117


   





[GitHub] [hadoop] tomscut opened a new pull request #3136: HDFS-16086. Add volume information to datanode log for tracing

2021-06-23 Thread GitBox


tomscut opened a new pull request #3136:
URL: https://github.com/apache/hadoop/pull/3136


   JIRA: [HDFS-16086](https://issues.apache.org/jira/browse/HDFS-16086)
   
   To keep track of which volume a block is stored on, we can add the volume 
information to the datanode log.
   





[GitHub] [hadoop] hadoop-yetus commented on pull request #3133: S3AFS creation fails "Unable to find a region via the region provider chain."

2021-06-23 Thread GitBox


hadoop-yetus commented on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-867160860


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  4s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  72m 39s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3133 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 79ac4e35adbe 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 34525f1d7b429ff71f76c3d1ded5ea4441120239 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/4/testReport/ |
   | Max. process+thread count | 546 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hadoop] hadoop-yetus commented on pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor

2021-06-23 Thread GitBox


hadoop-yetus commented on pull request #3135:
URL: https://github.com/apache/hadoop/pull/3135#issuecomment-867156221


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 49s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   8m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   8m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 53s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   7m 53s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 41s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/2/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 33 new + 215 unchanged 
- 0 fixed = 248 total (was 215)  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  6s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 45s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 124m 51s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3135 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux a27bd591f02b 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8e3e39229d5c926abcdf9191e3b56a809e12 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/2/testReport/ |
   | Max. process+thread count | 900 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn |
   | Console output | 

[jira] [Work logged] (HADOOP-17745) ADLS client can throw an IOException when it should throw an InterruptedIOException

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17745?focusedWorklogId=614214&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614214
 ]

ASF GitHub Bot logged work on HADOOP-17745:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 20:35
Start Date: 23/Jun/21 20:35
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3076:
URL: https://github.com/apache/hadoop/pull/3076#issuecomment-867139175


   > Hi @steveloughran, what tests should I run in order to make this change?
   
   all the abfs tests in hadoop-azure.  Ideally with as many auth options as 
you can. 
   `hadoop-tools/hadoop-azure/dev-support/testrun-scripts/runtests.sh` helps 
there.
   
   At the very least:
   ```
   mvn clean verify  -Dparallel-tests=abfs -DtestsThreadCount=5 -Dscale
   ```
   
   




Issue Time Tracking
---

Worklog Id: (was: 614214)
Time Spent: 1.5h  (was: 1h 20m)

> ADLS client can throw an IOException when it should throw an 
> InterruptedIOException
> ---
>
> Key: HADOOP-17745
> URL: https://issues.apache.org/jira/browse/HADOOP-17745
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Maynard
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> The Azure client sometimes throws an IOException with an InterruptedException 
> cause which can be converted to an InterruptedIOException. This is important 
> for downstream consumers that rely on an InterruptedIOException to gracefully 
> close.
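
A minimal, self-contained sketch of the conversion described above (not the
actual Hadoop or ADLS code) might look like:

```java
import java.io.IOException;
import java.io.InterruptedIOException;

// Sketch only: rethrow an IOException whose cause is an InterruptedException
// as an InterruptedIOException, so callers that watch for interruption can
// close gracefully.
public class WrapInterrupt {
  static IOException maybeAsInterrupted(IOException e) {
    if (e.getCause() instanceof InterruptedException
        && !(e instanceof InterruptedIOException)) {
      InterruptedIOException iioe = new InterruptedIOException(e.getMessage());
      iioe.initCause(e);
      return iioe;
    }
    return e;
  }

  public static void main(String[] args) {
    IOException raw = new IOException("request aborted",
        new InterruptedException("interrupted"));
    // prints InterruptedIOException
    System.out.println(maybeAsInterrupted(raw).getClass().getSimpleName());
  }
}
```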






[GitHub] [hadoop] steveloughran commented on pull request #3076: HADOOP-17745. Wrap IOException with InterruptedException cause properly

2021-06-23 Thread GitBox


steveloughran commented on pull request #3076:
URL: https://github.com/apache/hadoop/pull/3076#issuecomment-867139175


   > Hi @steveloughran, what tests should I run in order to make this change?
   
   all the abfs tests in hadoop-azure.  Ideally with as many auth options as 
you can. 
   `hadoop-tools/hadoop-azure/dev-support/testrun-scripts/runtests.sh` helps 
there.
   
   At the very least:
   ```
   mvn clean verify  -Dparallel-tests=abfs -DtestsThreadCount=5 -Dscale
   ```
   
   





[jira] [Work logged] (HADOOP-17745) ADLS client can throw an IOException when it should throw an InterruptedIOException

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17745?focusedWorklogId=614213&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614213
 ]

ASF GitHub Bot logged work on HADOOP-17745:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 20:32
Start Date: 23/Jun/21 20:32
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#3076:
URL: https://github.com/apache/hadoop/pull/3076#discussion_r657439098



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtilsWrapExceptionSuite.java
##
@@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io;
+
+import java.io.IOException;
+import java.io.InterruptedIOException;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.assertj.core.api.Assertions;
+import org.junit.Assert;
+import org.junit.Test;
+
+public class TestIOUtilsWrapExceptionSuite extends AbstractHadoopTestBase {
+@Test
+public void testWrapExceptionWithInterruptedException() throws Exception {
+InterruptedIOException inputException = new InterruptedIOException("message");
+NullPointerException causeException = new NullPointerException("cause");
+inputException.initCause(causeException);
+Exception outputException = IOUtils.wrapException("path", "methodName", inputException);
+
+// The new exception should retain the input message, cause, and type
+Assertions.assertThat(outputException).isInstanceOf(InterruptedIOException.class);
+Assertions.assertThat(outputException.getCause()).isInstanceOf(NullPointerException.class);

Review comment:
   can you add a .describedAs("inner cause")

##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtilsWrapExceptionSuite.java
##
@@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io;
+
+import java.io.IOException;
+import java.io.InterruptedIOException;
+
+import static junit.framework.TestCase.assertEquals;

Review comment:
   nit: can you copy the import ordering of (most) of the hadoop code.
   
   java*
   
   non-org.apache (though now that we use the hadoop-shaded guava, it is in there too)
   
   org.apache.*
   
   static *
   
   thanks
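
   For instance, the imports used in this patch, reordered per that
   convention, would read (sketch):

   ```java
   import java.io.IOException;
   import java.io.InterruptedIOException;

   import org.assertj.core.api.Assertions;
   import org.junit.Assert;
   import org.junit.Test;

   import org.apache.hadoop.test.AbstractHadoopTestBase;

   import static junit.framework.TestCase.assertEquals;
   import static junit.framework.TestCase.assertTrue;
   ```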

##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtilsWrapExceptionSuite.java
##
@@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */

[GitHub] [hadoop] steveloughran commented on a change in pull request #3076: HADOOP-17745. Wrap IOException with InterruptedException cause properly

2021-06-23 Thread GitBox


steveloughran commented on a change in pull request #3076:
URL: https://github.com/apache/hadoop/pull/3076#discussion_r657439098



##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtilsWrapExceptionSuite.java
##
@@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io;
+
+import java.io.IOException;
+import java.io.InterruptedIOException;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.assertj.core.api.Assertions;
+import org.junit.Assert;
+import org.junit.Test;
+
+public class TestIOUtilsWrapExceptionSuite extends AbstractHadoopTestBase {
+@Test
+public void testWrapExceptionWithInterruptedException() throws Exception {
+InterruptedIOException inputException = new InterruptedIOException("message");
+NullPointerException causeException = new NullPointerException("cause");
+inputException.initCause(causeException);
+Exception outputException = IOUtils.wrapException("path", "methodName", inputException);
+
+// The new exception should retain the input message, cause, and type
+Assertions.assertThat(outputException).isInstanceOf(InterruptedIOException.class);
+Assertions.assertThat(outputException.getCause()).isInstanceOf(NullPointerException.class);

Review comment:
   can you add a .describedAs("inner cause")

##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtilsWrapExceptionSuite.java
##
@@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io;
+
+import java.io.IOException;
+import java.io.InterruptedIOException;
+
+import static junit.framework.TestCase.assertEquals;

Review comment:
   nit: can you copy the import ordering of (most) of the hadoop code.
   
   java*
   
   non-org.apache (though now that we use the hadoop-shaded guava, it is in there too)
   
   org.apache.*
   
   static *
   
   thanks

##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestIOUtilsWrapExceptionSuite.java
##
@@ -0,0 +1,60 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.io;
+
+import java.io.IOException;
+import java.io.InterruptedIOException;
+
+import static junit.framework.TestCase.assertEquals;
+import static junit.framework.TestCase.assertTrue;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.assertj.core.api.Assertions;
+import org.junit.Assert;

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2971: MAPREDUCE-7341. Intermediate Manifest Committer

2021-06-23 Thread GitBox


hadoop-yetus removed a comment on pull request #2971:
URL: https://github.com/apache/hadoop/pull/2971#issuecomment-861003926


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  2s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 21 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  5s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  4s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 41s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  14m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  20m 21s | 
[/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/15/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 2 new + 1983 unchanged - 1 
fixed = 1985 total (was 1984)  |
   | +1 :green_heart: |  compile  |  18m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  18m  7s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/15/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 2 new + 1858 
unchanged - 1 fixed = 1860 total (was 1859)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/15/artifact/out/blanks-eol.txt)
 |  The patch has 7 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 46s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/15/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 59 new + 0 unchanged - 0 fixed = 59 total (was 0) 
 |
   | +1 :green_heart: |  mvnsite  |   4m  7s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  7s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   3m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 40s |  |  hadoop-project has no data from 
spotbugs  |
   | -1 :x: |  spotbugs  |   1m 43s | 
[/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2971/15/artifact/out/new-spotbugs-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.html)
 |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  15m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 37s |  |  

[GitHub] [hadoop] steveloughran commented on pull request #3133: S3AFS creation fails "Unable to find a region via the region provider chain."

2021-06-23 Thread GitBox


steveloughran commented on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-867117923


   Tests in progress, s3 london, endpoint and region unset, `-Dparallel-tests 
-DtestsThreadCount=7 -Dmarkers=keep`





[GitHub] [hadoop] steveloughran commented on pull request #3133: S3AFS creation fails "Unable to find a region via the region provider chain."

2021-06-23 Thread GitBox


steveloughran commented on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-867117275


   Latest patch warns the user on fallback, using the LogExactlyOnce class to stop 
it being over-noisy if someone really, really wants to use this "feature".
   
   Also the latest stack trace is in, as well as the hadoop-3.3.1 one. I've 
also added the workaround info to the JIRA description, as it'll probably be the 
first entry Google will find for this.
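
   The warn-exactly-once idea can be sketched generically like this
   (illustrative only, not the actual LogExactlyOnce implementation):

   ```java
   import java.util.concurrent.atomic.AtomicBoolean;

   // Sketch of a warn-once guard: the message is emitted on the first call
   // and suppressed afterwards. Real code would use an SLF4J logger.
   public class WarnOnce {
     private final AtomicBoolean warned = new AtomicBoolean(false);

     void warn(String msg) {
       if (warned.compareAndSet(false, true)) {
         System.err.println("WARN " + msg);
       }
     }

     public static void main(String[] args) {
       WarnOnce w = new WarnOnce();
       for (int i = 0; i < 3; i++) {
         w.warn("falling back; consider setting an explicit S3 region");
       }
       // the warning prints exactly once
     }
   }
   ```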





[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3133: S3AFS creation fails "Unable to find a region via the region provider chain."

2021-06-23 Thread GitBox


hadoop-yetus removed a comment on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-867078722


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/3/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  4s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  72m 57s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3133 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 994fb767e1cf 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ac0a6104c97d948f004cac78bea24b8dba36b605 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/3/testReport/ |
   | Max. process+thread count | 720 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Work logged] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-17764?focusedWorklogId=614191&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614191 ]

ASF GitHub Bot logged work on HADOOP-17764:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 19:52
Start Date: 23/Jun/21 19:52
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#3109:
URL: https://github.com/apache/hadoop/pull/3109#discussion_r657412668



##
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AInputStreamRetry.java
##
@@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import javax.net.ssl.SSLException;
+import java.io.IOException;
+import java.net.SocketException;
+import java.nio.charset.Charset;
+
+import com.amazonaws.services.s3.model.GetObjectRequest;
+import com.amazonaws.services.s3.model.ObjectMetadata;
+import com.amazonaws.services.s3.model.S3Object;
+import com.amazonaws.services.s3.model.S3ObjectInputStream;
+import org.junit.Test;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.audit.impl.NoopSpan;
+import org.apache.hadoop.fs.s3a.auth.delegation.EncryptionSecrets;
+import org.apache.hadoop.fs.s3a.impl.ChangeDetectionPolicy;
+
+import static java.lang.Math.min;
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Tests S3AInputStream retry behavior on read failure.
+ * These tests are for validating expected behavior of retrying the S3AInputStream
+ * read() and read(b, off, len), it tests that the read should reopen the input stream and retry
+ * the read when IOException is thrown during the read process.
+ */
+public class TestS3AInputStreamRetry extends AbstractS3AMockTest {
+
+  String input = "ab";
+
+  @Test
+  public void testInputStreamReadRetryForException() throws IOException {
+    S3AInputStream s3AInputStream = getMockedS3AInputStream();
+
+    assertEquals("'a' from the test input stream 'ab' should be the first character being read",
+        input.charAt(0), s3AInputStream.read());
+    assertEquals("'b' from the test input stream 'ab' should be the second character being read",
+        input.charAt(1), s3AInputStream.read());
+  }
+
+  @Test
+  public void testInputStreamReadRetryLengthForException() throws IOException {
+    byte[] result = new byte[input.length()];
+    S3AInputStream s3AInputStream = getMockedS3AInputStream();
+    s3AInputStream.read(result, 0, input.length());
+
+    assertArrayEquals("The read result should equals to the test input stream content",
+        input.getBytes(), result);
+  }
+
+  private S3AInputStream getMockedS3AInputStream() {
+    Path path = new Path("test-path");
+    String eTag = "test-etag";
+    String versionId = "test-version-id";
+    String owner = "test-owner";
+
+    S3AFileStatus s3AFileStatus = new S3AFileStatus(
+        input.length(), 0, path, input.length(), owner, eTag, versionId);
+
+    S3ObjectAttributes s3ObjectAttributes = new S3ObjectAttributes(
+        fs.getBucket(), path, fs.pathToKey(path), fs.getServerSideEncryptionAlgorithm(),
+        new EncryptionSecrets().getEncryptionKey(), eTag, versionId, input.length());
+
+    S3AReadOpContext s3AReadOpContext = fs.createReadContext(s3AFileStatus, S3AInputPolicy.Normal,
+        ChangeDetectionPolicy.getPolicy(fs.getConf()), 100, NoopSpan.INSTANCE);
+
+    return new S3AInputStream(s3AReadOpContext, s3ObjectAttributes, getMockedInputStreamCallback());
+  }
+
+  // Get mocked InputStreamCallbacks where we return mocked S3Object
+  private S3AInputStream.InputStreamCallbacks getMockedInputStreamCallback() {
+    return new S3AInputStream.InputStreamCallbacks() {
+
+      final S3Object mockedS3Object = getMockedS3Object();
+
+      @Override
+      public S3Object getObject(GetObjectRequest request) {
+        // Set s3 client to return mocked s3object with already defined read behavior
+        return mockedS3Object;
+      }
+
+      @Override
+      public 

[GitHub] [hadoop] steveloughran commented on a change in pull request #3109: HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt

2021-06-23 Thread GitBox


steveloughran commented on a change in pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109#discussion_r657412668



##
File path: hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AInputStreamRetry.java
##
@@ -0,0 +1,167 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import javax.net.ssl.SSLException;
+import java.io.IOException;
+import java.net.SocketException;
+import java.nio.charset.Charset;
+
+import com.amazonaws.services.s3.model.GetObjectRequest;
+import com.amazonaws.services.s3.model.ObjectMetadata;
+import com.amazonaws.services.s3.model.S3Object;
+import com.amazonaws.services.s3.model.S3ObjectInputStream;
+import org.junit.Test;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.s3a.audit.impl.NoopSpan;
+import org.apache.hadoop.fs.s3a.auth.delegation.EncryptionSecrets;
+import org.apache.hadoop.fs.s3a.impl.ChangeDetectionPolicy;
+
+import static java.lang.Math.min;
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+
+/**
+ * Tests S3AInputStream retry behavior on read failure.
+ * These tests are for validating expected behavior of retrying the S3AInputStream
+ * read() and read(b, off, len), it tests that the read should reopen the input stream and retry
+ * the read when IOException is thrown during the read process.
+ */
+public class TestS3AInputStreamRetry extends AbstractS3AMockTest {
+
+  String input = "ab";
+
+  @Test
+  public void testInputStreamReadRetryForException() throws IOException {
+    S3AInputStream s3AInputStream = getMockedS3AInputStream();
+
+    assertEquals("'a' from the test input stream 'ab' should be the first character being read",
+        input.charAt(0), s3AInputStream.read());
+    assertEquals("'b' from the test input stream 'ab' should be the second character being read",
+        input.charAt(1), s3AInputStream.read());
+  }
+
+  @Test
+  public void testInputStreamReadRetryLengthForException() throws IOException {
+    byte[] result = new byte[input.length()];
+    S3AInputStream s3AInputStream = getMockedS3AInputStream();
+    s3AInputStream.read(result, 0, input.length());
+
+    assertArrayEquals("The read result should equals to the test input stream content",
+        input.getBytes(), result);
+  }
+
+  private S3AInputStream getMockedS3AInputStream() {
+    Path path = new Path("test-path");
+    String eTag = "test-etag";
+    String versionId = "test-version-id";
+    String owner = "test-owner";
+
+    S3AFileStatus s3AFileStatus = new S3AFileStatus(
+        input.length(), 0, path, input.length(), owner, eTag, versionId);
+
+    S3ObjectAttributes s3ObjectAttributes = new S3ObjectAttributes(
+        fs.getBucket(), path, fs.pathToKey(path), fs.getServerSideEncryptionAlgorithm(),
+        new EncryptionSecrets().getEncryptionKey(), eTag, versionId, input.length());
+
+    S3AReadOpContext s3AReadOpContext = fs.createReadContext(s3AFileStatus, S3AInputPolicy.Normal,
+        ChangeDetectionPolicy.getPolicy(fs.getConf()), 100, NoopSpan.INSTANCE);
+
+    return new S3AInputStream(s3AReadOpContext, s3ObjectAttributes, getMockedInputStreamCallback());
+  }
+
+  // Get mocked InputStreamCallbacks where we return mocked S3Object
+  private S3AInputStream.InputStreamCallbacks getMockedInputStreamCallback() {
+    return new S3AInputStream.InputStreamCallbacks() {
+
+      final S3Object mockedS3Object = getMockedS3Object();
+
+      @Override
+      public S3Object getObject(GetObjectRequest request) {
+        // Set s3 client to return mocked s3object with already defined read behavior
+        return mockedS3Object;
+      }
+
+      @Override
+      public GetObjectRequest newGetRequest(String key) {
+        return new GetObjectRequest(fs.getBucket(), key);
+      }
+
+      @Override
+      public void close() {
+      }
+    };
+  }
+
+  // Get mocked S3Object where return bad input stream on the first couple of getObjectContent calls
+  private S3Object getMockedS3Object() {
+    S3ObjectInputStream objectInputStreamBad1 = getMockedInputStream(true);
+    S3ObjectInputStream 

[jira] [Work logged] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-17764?focusedWorklogId=614187&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614187 ]

ASF GitHub Bot logged work on HADOOP-17764:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 19:41
Start Date: 23/Jun/21 19:41
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109#issuecomment-867108247


   Thanks for the details.
   I agree, these are all unrelated. Some of them we've seen before and I'd say 
"you are distant from your S3 bucket/slow network/overloaded laptop". There's a 
couple of new ones though, both with hints of security/permissions.
   
   > 
org.apache.hadoop.tools.contract.AbstractContractDistCpTest#testDistCpWithIterator
   > org.junit.runners.model.TestTimedOutException: test timed out after 
180 milliseconds
   
   
   probably a variant on [HADOOP-17628](https://issues.apache.org/jira/browse/HADOOP-17628): we need to make the test directory tree smaller. It'd make the test faster for all too. Patches welcome :)
   
   
   > 
org.apache.hadoop.fs.contract.AbstractContractUnbufferTest#testUnbufferOnClosedFile
   > java.lang.AssertionError: failed to read expected number of bytes from 
stream. This may be transient Expected :1024 Actual :605
   
   You aren't alone here; it's read() returning an underfull buffer. We can't switch to readFully() as the test really wants to call read(). Ignore it. Happens when I use many threads in parallel runs.
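   
   (A hedged aside on why that happens: InputStream.read(b, off, len) is allowed to return fewer bytes than requested, so a caller that needs a full buffer must loop, which is what readFully()-style helpers do. An illustrative sketch, not the test code:)
   
   ```java
   import java.io.EOFException;
   import java.io.IOException;
   import java.io.InputStream;
   
   final class ReadFullyUtil {
     // Loop until len bytes arrive or EOF; a single read() may return early.
     static void readFully(InputStream in, byte[] buf, int off, int len)
         throws IOException {
       int read = 0;
       while (read < len) {
         int n = in.read(buf, off + read, len - read);
         if (n < 0) {
           throw new EOFException("EOF after " + read + " of " + len + " bytes");
         }
         read += n;
       }
     }
   }
   ```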
   
   > org.apache.hadoop.fs.contract.s3a.ITestS3AContractUnbuffer
   > java.lang.AssertionError: failed to read expected number of bytes from 
stream. This may be transient Expected :1024 Actual :605
   
   same transient; ignore
   
   > org.apache.hadoop.fs.s3a.ITestS3AInconsistency#testGetFileStatus
   > java.lang.AssertionError: getFileStatus should fail due to delayed 
visibility.
   
   Looks like you are seeing https://issues.apache.org/jira/browse/HADOOP-17457
   Given S3 is now consistent, I'd fix this by removing the entire test suite :)
   
   
   ```
   org.apache.hadoop.fs.s3a.tools.ITestMarkerTool
   java.nio.file.AccessDeniedException: : listObjects: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; 
   ```
   
   This is new. Can you file a JIRA with the stack trace, just so we have a 
history of it. 
   MarkerTool should just be trying to call listObjects under a path in the 
test dir. 
   
   ```
   org.apache.hadoop.fs.s3a.auth.delegation.ITestDelegatedMRJob
   java.nio.file.AccessDeniedException: 
s3a://osm-pds/planet/planet-latest.orc#_partition.lst: getFileStatus on 
s3a://osm-pds/planet/planet-latest.orc#_partition.lst: 
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: A1Y4D90WW452Q8A9; 
S3 Extended Request ID: 
b/IV48OeMEgTaxikC9raP+IiHVPve3rIeoVkCymMc5opNp/70Iyc0tY2WZ0zpixFl0w7WT3bBCQ=; 
Proxy: null), S3 Extended Request ID: 
b/IV48OeMEgTaxikC9raP+IiHVPve3rIeoVkCymMc5opNp/70Iyc0tY2WZ0zpixFl0w7WT3bBCQ=:403
 Forbidden
   ```
   
   This is *very* new, which makes it interesting. If you are seeing this, it 
means it may surface in the wild. I suspect it's because you've got an IAM 
permission set up blocking access to this (public) dataset.
   
   Can you file a JIRA with this too? I'll probably give you some tasks to find 
out more about the cause, but at least there'll be an indexed reference to the 
issue.
   
   




Issue Time Tracking
---

Worklog Id: (was: 614187)
Time Spent: 5h 50m  (was: 5h 40m)

> S3AInputStream read does not re-open the input stream on the second read 
> retry attempt
> --
>
> Key: HADOOP-17764
> URL: https://issues.apache.org/jira/browse/HADOOP-17764
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Zamil Majdy
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>
> *Bug description:*
> The read method in S3AInputStream has the following behaviour when an 
> IOException happens during the read:
>  * {{reopen and read quickly}}: The client after failing in the first attempt 
> of {{read}}, will reopen the stream and try reading again without {{sleep}}.
>  * {{reopen and 

[GitHub] [hadoop] steveloughran commented on pull request #3109: HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt

2021-06-23 Thread GitBox


steveloughran commented on pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109#issuecomment-867108247


   Thanks for the details.
   I agree, these are all unrelated. Some of them we've seen before and I'd say 
"you are distant from your S3 bucket/slow network/overloaded laptop". There's a 
couple of new ones though, both with hints of security/permissions.
   
   > 
org.apache.hadoop.tools.contract.AbstractContractDistCpTest#testDistCpWithIterator
   > org.junit.runners.model.TestTimedOutException: test timed out after 
180 milliseconds
   
   
   probably a variant on [HADOOP-17628](https://issues.apache.org/jira/browse/HADOOP-17628): we need to make the test directory tree smaller. It'd make the test faster for all too. Patches welcome :)
   
   
   > 
org.apache.hadoop.fs.contract.AbstractContractUnbufferTest#testUnbufferOnClosedFile
   > java.lang.AssertionError: failed to read expected number of bytes from 
stream. This may be transient Expected :1024 Actual :605
   
   You aren't alone here; it's read() returning an underfull buffer. We can't switch to readFully() as the test really wants to call read(). Ignore it. Happens when I use many threads in parallel runs.
   
   > org.apache.hadoop.fs.contract.s3a.ITestS3AContractUnbuffer
   > java.lang.AssertionError: failed to read expected number of bytes from 
stream. This may be transient Expected :1024 Actual :605
   
   same transient; ignore
   
   > org.apache.hadoop.fs.s3a.ITestS3AInconsistency#testGetFileStatus
   > java.lang.AssertionError: getFileStatus should fail due to delayed 
visibility.
   
   Looks like you are seeing https://issues.apache.org/jira/browse/HADOOP-17457
   Given S3 is now consistent, I'd fix this by removing the entire test suite :)
   
   
   ```
   org.apache.hadoop.fs.s3a.tools.ITestMarkerTool
   java.nio.file.AccessDeniedException: : listObjects: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; 
   ```
   
   This is new. Can you file a JIRA with the stack trace, just so we have a 
history of it. 
   MarkerTool should just be trying to call listObjects under a path in the 
test dir. 
   
   ```
   org.apache.hadoop.fs.s3a.auth.delegation.ITestDelegatedMRJob
   java.nio.file.AccessDeniedException: 
s3a://osm-pds/planet/planet-latest.orc#_partition.lst: getFileStatus on 
s3a://osm-pds/planet/planet-latest.orc#_partition.lst: 
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: A1Y4D90WW452Q8A9; 
S3 Extended Request ID: 
b/IV48OeMEgTaxikC9raP+IiHVPve3rIeoVkCymMc5opNp/70Iyc0tY2WZ0zpixFl0w7WT3bBCQ=; 
Proxy: null), S3 Extended Request ID: 
b/IV48OeMEgTaxikC9raP+IiHVPve3rIeoVkCymMc5opNp/70Iyc0tY2WZ0zpixFl0w7WT3bBCQ=:403
 Forbidden
   ```
   
   This is *very* new, which makes it interesting. If you are seeing this, it 
means it may surface in the wild. I suspect it's because you've got an IAM 
permission set up blocking access to this (public) dataset.
   
   Can you file a JIRA with this too? I'll probably give you some tasks to find 
out more about the cause, but at least there'll be an indexed reference to the 
issue.
   
   





[jira] [Work logged] (HADOOP-17767) ABFS: Improve test scripts

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ https://issues.apache.org/jira/browse/HADOOP-17767?focusedWorklogId=614183&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614183 ]

ASF GitHub Bot logged work on HADOOP-17767:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 19:17
Start Date: 23/Jun/21 19:17
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3124:
URL: https://github.com/apache/hadoop/pull/3124#issuecomment-867094739


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  14m 14s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  The patch generated 0 new 
+ 0 unchanged - 18 fixed = 0 total (was 18)  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  5s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  71m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3124/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3124 |
   | Optional Tests | dupname asflicense mvnsite unit codespell shellcheck 
shelldocs markdownlint compile javac javadoc mvninstall shadedclient spotbugs 
checkstyle xml |
   | uname | Linux b98a85f5a3b6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b5b86f432470d31ada069b45f68e33cef3ff7211 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3124: HADOOP-17767. ABFS: Updates test scripts

2021-06-23 Thread GitBox


hadoop-yetus commented on pull request #3124:
URL: https://github.com/apache/hadoop/pull/3124#issuecomment-867094739


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  14m 14s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  The patch generated 0 new 
+ 0 unchanged - 18 fixed = 0 total (was 18)  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  5s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  71m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3124/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3124 |
   | Optional Tests | dupname asflicense mvnsite unit codespell shellcheck 
shelldocs markdownlint compile javac javadoc mvninstall shadedclient spotbugs 
checkstyle xml |
   | uname | Linux b98a85f5a3b6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b5b86f432470d31ada069b45f68e33cef3ff7211 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3124/4/testReport/ |
   | Max. process+thread count | 674 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3133: S3AFS creation fails "Unable to find a region via the region provider chain."

2021-06-23 Thread GitBox


hadoop-yetus commented on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-867078722


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/3/artifact/out/blanks-eol.txt)
 |  The patch has 1 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  4s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  72m 57s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3133 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 994fb767e1cf 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ac0a6104c97d948f004cac78bea24b8dba36b605 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/3/testReport/ |
   | Max. process+thread count | 720 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[GitHub] [hadoop] hadoop-yetus commented on pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor

2021-06-23 Thread GitBox


hadoop-yetus commented on pull request #3135:
URL: https://github.com/apache/hadoop/pull/3135#issuecomment-867077320


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 51s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   8m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 49s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   8m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   7m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   7m 51s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/1/artifact/out/blanks-eol.txt)
 |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   1m 40s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 33 new + 215 unchanged 
- 0 fixed = 248 total (was 215)  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m  8s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 47s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 125m 17s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3135 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 52207182bb29 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c10cb6983e71616fde495731d2fed419f23fc83d |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3135/1/testReport/ |
   | Max. process+thread count | 763 (vs. ulimit of 5500) |
   | modules | C: 

[GitHub] [hadoop] steveloughran commented on pull request #3133: S3AFS creation fails "Unable to find a region via the region provider chain."

2021-06-23 Thread GitBox


steveloughran commented on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-867056184


   Latest version does let you switch to the region resolution process if you 
really want to; this actually lets me do a test by setting sysprops to verify 
that the region is picked up that way.
   
   Also the SDK exceptions are being converted to IOEs. 
   
   Tested S3 London, `-Dparallel-tests -DtestsThreadCount=7 -Dmarkers=delete -Dscale`; all good.
   
   I just realised that I'd set the fs.s3a.endpoint property though; I'll have 
to rerun without any endpoint or region set for the test bucket.
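   
   (For anyone wanting to reproduce that sysprop check: a hedged sketch assuming the AWS SDK v1 DefaultAwsRegionProviderChain on the classpath, not the actual test code.)
   
   ```java
   import com.amazonaws.regions.DefaultAwsRegionProviderChain;
   
   public class RegionChainProbe {
     public static void main(String[] args) {
       // aws.region is the system-property link in the SDK's region provider chain
       System.setProperty("aws.region", "eu-west-2");
       String region = new DefaultAwsRegionProviderChain().getRegion();
       System.out.println("resolved region: " + region); // expect eu-west-2
     }
   }
   ```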





[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3133: S3AFS creation fails "Unable to find a region via the region provider chain."

2021-06-23 Thread GitBox


hadoop-yetus removed a comment on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-866939056


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/2/artifact/out/blanks-eol.txt)
 |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 1 new + 2 unchanged - 0 fixed 
= 3 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  5s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  75m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3133 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 2e43df8d71e4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 88a6c0f100b372b3eda775c2b23c0e0bb0a29cf5 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/2/testReport/ |
   | Max. process+thread count | 722 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/2/console |
   | versions | git=2.25.1 maven=3.6.3 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3133: S3AFS creation fails "Unable to find a region via the region provider chain."

2021-06-23 Thread GitBox


hadoop-yetus removed a comment on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-865993418


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 39s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 37s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m  1s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  2s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  72m 26s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3133 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux e50f6390e4ad 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 474d2a9696ad2427a0425da1f3532ab88c56589b |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/1/testReport/ |
   | Max. process+thread count | 714 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Updated] (HADOOP-17771) S3AFS creation fails "Unable to find a region via the region provider chain."

2021-06-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17771:

Description: 
If you don't have {{fs.s3a.endpoint}} set and lack a region set in the
env var {{AWS_REGION_ENV_VAR}}, the system property {{aws.region}} or the file
~/.aws/config, then S3A FS creation fails with the message
"Unable to find a region via the region provider chain."

This is caused by the move to the AWS S3 client builder API in HADOOP-13551.

This is pretty dramatic, and no doubt everyone will be asking "why didn't you notice this?".


But in fact there are some reasons.
# when running in EC2, all is well. Meaning our big test runs were all happy.
# if a developer has fs.s3a.endpoint set for the test bucket, all is well.
   Those of us who work with buckets in the regions tend to do this, not least because it can save a HEAD request every time an FS is created.
# if you have a region set in ~/.aws/config then all is well

Reason #3 is the real surprise and the one which has really caught us out. Even my tests against buckets in usw-2 through central didn't fail because of course I, like my colleagues, have the AWS cli client installed locally. This was sufficient to make the problem go away. It is also why this has been an intermittent problem on test clusters outside AWS infra: it really depended on the VM/docker image whether things worked or not.
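
For illustration, the kind of ~/.aws/config that masks the problem is just a default region entry; any region value there satisfies the provider chain, so the failure never shows up on that machine:

{code}
[default]
region = us-west-2
{code}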

h2. Quick Fix: set {{fs.s3a.endpoint}} to {{s3.amazonaws.com}} 

If you have found this JIRA because you are encountering this problem, you can fix it by explicitly declaring the endpoint in {{core-site.xml}}:

{code}
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.amazonaws.com</value>
</property>
{code}

For Apache Spark, this can be done in {{spark-defaults.conf}}

{code}
spark.hadoop.fs.s3a.endpoint s3.amazonaws.com
{code}

If you know the exact AWS region your data lives in, set the endpoint to be 
that region's endpoint, and so save an HTTPS request to s3.amazonaws.com every 
time an S3A Filesystem instance is created.
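
The same property can also be passed as a generic option for a one-off command (the bucket name here is a placeholder):

{code}
hadoop fs -D fs.s3a.endpoint=s3.amazonaws.com -ls s3a://mybucket/
{code}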



  was:
If you don't have {{fs.s3a.endpoint}} set and lack a region set in
env var {{AWS_REGION_ENV_VAR}}, system property {{aws.region}} or the file  
~/.aws/config
then S3A FS creation fails with  the message
"Unable to find a region via the region provider chain."

This is caused by the move to the AWS S3 client builder API in HADOOP-13551

This is pretty dramatic and no doubt everyone will be asking "why didn't you 
notice this?",


But in fact there are some reasons.
# when running in EC2, all is well. Meaning our big test runs were all happy.
# if a developer has fs.s3a.endpoint set for the test bucket, all is well.
   Those of us who work with buckets in the "regions tend to do this, not least 
because it can save a HEAD request every time an FS is created.
# if you have a region set in ~/.aws/config then all is well

reason #3 is the real surprise and the one which has really caught out. Even my 
tests against buckets in usw-2 through central didn't fail because of course I, 
like my colleagues, have the AWS S3 client installed locally. This was 
sufficient to make the problem go away. It is also why this has been an 
intermittent problem on test clusters outside AWS infra: it really depended on 
the VM/docker image whether things worked or not.

h2. Quick Fix: set {{fs.s3a.endpoint}} to {{s3.amazonaws.com}} 

If you have found this JIRA because you are encountering this problem, you can 
fix it in by explicitly declaring the endpoint in {{core-site.xml}}

{code}
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.amazonaws.com</value>
</property>
{code}

For Apache Spark, this can be done in {{spark-defaults.conf}}

{code}
spark.hadoop.fs.s3a.endpoint s3.amazonaws.com
{code}

If you know the exact AWS region your data lives in, set the endpoint to be 
that region's endpoint, and so save an HTTPS request to s3.amazonaws.com every 
time an S3A Filesystem instance is created.




> S3AFS creation fails "Unable to find a region via the region provider chain."
> -
>
> Key: HADOOP-17771
> URL: https://issues.apache.org/jira/browse/HADOOP-17771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
> Environment: * fs.s3a.endpoint is unset
> * Host outside EC2
> * without the file ~/.aws/config or without a region set in it
> * without the system property aws.region declaring a region
> * without the environment variable AWS_REGION declaring a region.
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> If you don't have {{fs.s3a.endpoint}} set and lack a region set in
> env var {{AWS_REGION_ENV_VAR}}, system property {{aws.region}} or the file  
> 

[jira] [Updated] (HADOOP-17771) S3AFS creation fails "Unable to find a region via the region provider chain."

2021-06-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17771:

Description: 
If you don't have {{fs.s3a.endpoint}} set and lack a region set in the
env var {{AWS_REGION_ENV_VAR}}, the system property {{aws.region}}, or the file
~/.aws/config,
then S3A FS creation fails with the message
"Unable to find a region via the region provider chain."

This is caused by the move to the AWS S3 client builder API in HADOOP-13551.

This is pretty dramatic, and no doubt everyone will be asking "why didn't you 
notice this?"


But in fact there are some reasons.
# when running in EC2, all is well, meaning our big test runs were all happy.
# if a developer has fs.s3a.endpoint set for the test bucket, all is well.
   Those of us who work with buckets in the regions tend to do this, not least 
because it can save a HEAD request every time an FS is created.
# if you have a region set in ~/.aws/config then all is well.

Reason #3 is the real surprise and the one which has really caught us out. Even 
my tests against buckets in usw-2 through central didn't fail because, of course, 
I, like my colleagues, have the AWS CLI client installed locally. This was 
sufficient to make the problem go away. It is also why this has been an 
intermittent problem on test clusters outside AWS infra: it really depended on 
the VM/docker image whether things worked or not.

h2. Quick Fix: set {{fs.s3a.endpoint}} to {{s3.amazonaws.com}} 

If you have found this JIRA because you are encountering this problem, you can 
fix it by explicitly declaring the endpoint in {{core-site.xml}}:

{code}
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.amazonaws.com</value>
</property>
{code}

For Apache Spark, this can be done in {{spark-defaults.conf}}

{code}
spark.hadoop.fs.s3a.endpoint s3.amazonaws.com
{code}

If you know the exact AWS region your data lives in, set the endpoint to be 
that region's endpoint, and so save an HTTPS request to s3.amazonaws.com every 
time an S3A Filesystem instance is created.



  was:
If you don't have {{fs.s3a.endpoint}} set and lack a region set in the
env var {{AWS_REGION_ENV_VAR}}, the system property {{aws.region}}, or the file
~/.aws/config,
then S3A FS creation fails with the message
"Unable to find a region via the region provider chain."

This is caused by the move to the AWS S3 client builder API in HADOOP-13551.

This is pretty dramatic, and no doubt everyone will be asking "why didn't you 
notice this?"


But in fact there are some reasons.
# when running in EC2, all is well, meaning our big test runs were all happy.
# if a developer has fs.s3a.endpoint set for the test bucket, all is well.
   Those of us who work with buckets in the regions tend to do this, not least 
because it can save a HEAD request every time an FS is created.
# if you have a region set in ~/.aws/config then all is well.

Reason #3 is the real surprise and the one which has really caught us out. Even 
my tests against buckets in usw-2 through central didn't fail because, of course, 
I, like my colleagues, have the AWS CLI client installed locally. This was 
sufficient to make the problem go away. It is also why this has been an 
intermittent problem on test clusters outside AWS infra: it really depended on 
the VM/docker image whether things worked or not.





> S3AFS creation fails "Unable to find a region via the region provider chain."
> -
>
> Key: HADOOP-17771
> URL: https://issues.apache.org/jira/browse/HADOOP-17771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
> Environment: * fs.s3a.endpoint is unset
> * Host outside EC2
> * without the file ~/.aws/config or without a region set in it
> * without the system property aws.region declaring a region
> * without the environment variable AWS_REGION declaring a region.
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> If you don't have {{fs.s3a.endpoint}} set and lack a region set in the
> env var {{AWS_REGION_ENV_VAR}}, the system property {{aws.region}} or the file
> ~/.aws/config,
> then S3A FS creation fails with the message
> "Unable to find a region via the region provider chain."
> This is caused by the move to the AWS S3 client builder API in HADOOP-13551.
> This is pretty dramatic, and no doubt everyone will be asking "why didn't you 
> notice this?"
> But in fact there are some reasons.
> # when running in EC2, all is well, meaning our big test runs were all happy.
> # if a developer has fs.s3a.endpoint set for the test bucket, all is well.
>Those of us who work with buckets in the regions tend to do this, not 
> least because it can save a HEAD 

[jira] [Commented] (HADOOP-17766) CI for Debian 10

2021-06-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/HADOOP-17766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17368349#comment-17368349
 ] 

Íñigo Goiri commented on HADOOP-17766:
--

[~gautham] thanks for the patch.
Merged PR 3129 to trunk.

> CI for Debian 10
> 
>
> Key: HADOOP-17766
> URL: https://issues.apache.org/jira/browse/HADOOP-17766
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Need to set up CI for Debian 10. We also need to ensure it runs only if there 
> are any changes to C++ files. Running it for all the PRs would be redundant.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17766) CI for Debian 10

2021-06-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/HADOOP-17766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Íñigo Goiri resolved HADOOP-17766.
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> CI for Debian 10
> 
>
> Key: HADOOP-17766
> URL: https://issues.apache.org/jira/browse/HADOOP-17766
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> Need to set up CI for Debian 10. We also need to ensure it runs only if there 
> are any changes to C++ files. Running it for all the PRs would be redundant.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17766) CI for Debian 10

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17766?focusedWorklogId=614133=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614133
 ]

ASF GitHub Bot logged work on HADOOP-17766:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 17:02
Start Date: 23/Jun/21 17:02
Worklog Time Spent: 10m 
  Work Description: goiri merged pull request #3129:
URL: https://github.com/apache/hadoop/pull/3129


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614133)
Time Spent: 3h 40m  (was: 3.5h)

> CI for Debian 10
> 
>
> Key: HADOOP-17766
> URL: https://issues.apache.org/jira/browse/HADOOP-17766
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0
>Reporter: Gautham Banasandra
>Assignee: Gautham Banasandra
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> Need to set up CI for Debian 10. We also need to ensure it runs only if there 
> are any changes to C++ files. Running it for all the PRs would be redundant.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri merged pull request #3129: HADOOP-17766. CI for Debian 10

2021-06-23 Thread GitBox


goiri merged pull request #3129:
URL: https://github.com/apache/hadoop/pull/3129


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] akshatb1 opened a new pull request #3135: YARN-10829. Support getApplications API in FederationClientInterceptor

2021-06-23 Thread GitBox


akshatb1 opened a new pull request #3135:
URL: https://github.com/apache/hadoop/pull/3135


   Implementing getApplications API in FederationClientInterceptors
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17764?focusedWorklogId=614106=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614106
 ]

ASF GitHub Bot logged work on HADOOP-17764:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 16:24
Start Date: 23/Jun/21 16:24
Worklog Time Spent: 10m 
  Work Description: majdyz edited a comment on pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109#issuecomment-866982708


   Here are the failing tests:
   
   - 
org.apache.hadoop.tools.contract.AbstractContractDistCpTest#testDistCpWithIterator
   `org.junit.runners.model.TestTimedOutException: test timed out after 180 
milliseconds`
   - 
org.apache.hadoop.fs.contract.AbstractContractUnbufferTest#testUnbufferOnClosedFile
   `java.lang.AssertionError: failed to read expected number of bytes from 
stream. This may be transient 
   Expected :1024
   Actual   :605`
   - org.apache.hadoop.fs.s3a.ITestS3AInconsistency#testGetFileStatus
   `java.lang.AssertionError: getFileStatus should fail due to delayed 
visibility.`
   - org.apache.hadoop.fs.contract.s3a.ITestS3AContractUnbuffer
   `java.lang.AssertionError: failed to read expected number of bytes from 
stream. This may be transient 
   Expected :1024
   Actual   :605`
   - org.apache.hadoop.fs.s3a.tools.ITestMarkerTool
   `java.nio.file.AccessDeniedException: : listObjects: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
FN0SQ82F85TGTZPW; S3 Extended Request ID: 
j1bhdYzzKkMVQqSgPDEPW7QQXMkVE+WJeLKP81l/qs7uF0RVx1xcUk2r6Wri4NQFlt/XE9W+FBo=; 
Proxy: null), S3 Extended Request ID: 
j1bhdYzzKkMVQqSgPDEPW7QQXMkVE+WJeLKP81l/qs7uF0RVx1xcUk2r6Wri4NQFlt/XE9W+FBo=:AccessDenied`
   - org.apache.hadoop.fs.s3a.select.ITestS3SelectMRJob
   - org.apache.hadoop.fs.s3a.statistics.ITestAWSStatisticCollection
   - org.apache.hadoop.fs.s3a.auth.delegation.ITestDelegatedMRJob
   `java.nio.file.AccessDeniedException: 
s3a://osm-pds/planet/planet-latest.orc#_partition.lst: getFileStatus on 
s3a://osm-pds/planet/planet-latest.orc#_partition.lst: 
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: A1Y4D90WW452Q8A9; 
S3 Extended Request ID: 
b/IV48OeMEgTaxikC9raP+IiHVPve3rIeoVkCymMc5opNp/70Iyc0tY2WZ0zpixFl0w7WT3bBCQ=; 
Proxy: null), S3 Extended Request ID: 
b/IV48OeMEgTaxikC9raP+IiHVPve3rIeoVkCymMc5opNp/70Iyc0tY2WZ0zpixFl0w7WT3bBCQ=:403
 Forbidden`
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614106)
Time Spent: 5h 40m  (was: 5.5h)

> S3AInputStream read does not re-open the input stream on the second read 
> retry attempt
> --
>
> Key: HADOOP-17764
> URL: https://issues.apache.org/jira/browse/HADOOP-17764
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Zamil Majdy
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> *Bug description:*
> The read method in S3AInputStream has the following behaviour when an 
> IOException happens during the read:
>  * {{reopen and read quickly}}: the client, after failing in the first attempt 
> of {{read}}, will reopen the stream and try reading again without {{sleep}}.
>  * {{reopen and wait for fixed duration}}: the client, after a failed 
> attempt of {{read}}, will reopen the stream, sleep for 
> {{fs.s3a.retry.interval}} milliseconds (defaults to 500 ms), and then try 
> reading from the stream.
> While doing the {{reopen and read quickly}} process, the subsequent read will 
> be retried without reopening the input stream if a second failure 
> happens. This leads to some of the bytes read being skipped, which results in 
> corrupt/less data than required. 
>  
> *Scenario to reproduce:*
>  * Execute S3AInputStream `read()` or `read(b, off, len)`.
>  * The read fails and throws a `Connection Reset` exception after reading some 
> data.
>  * The InputStream is re-opened and another `read()` or `read(b, off, len)` 
> is executed.
>  * The read fails for the second time and throws a `Connection Reset` 
> exception after reading some data.
>  * The InputStream is not re-opened and another `read()` or `read(b, off, 
> len)` is executed after the sleep.
>  * The read succeeds, but it skips the first few bytes that have already been 
> read before the second failure.

[GitHub] [hadoop] majdyz edited a comment on pull request #3109: HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt

2021-06-23 Thread GitBox


majdyz edited a comment on pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109#issuecomment-866982708


   Here are the failing tests:
   
   - 
org.apache.hadoop.tools.contract.AbstractContractDistCpTest#testDistCpWithIterator
   `org.junit.runners.model.TestTimedOutException: test timed out after 180 
milliseconds`
   - 
org.apache.hadoop.fs.contract.AbstractContractUnbufferTest#testUnbufferOnClosedFile
   `java.lang.AssertionError: failed to read expected number of bytes from 
stream. This may be transient 
   Expected :1024
   Actual   :605`
   - org.apache.hadoop.fs.s3a.ITestS3AInconsistency#testGetFileStatus
   `java.lang.AssertionError: getFileStatus should fail due to delayed 
visibility.`
   - org.apache.hadoop.fs.contract.s3a.ITestS3AContractUnbuffer
   `java.lang.AssertionError: failed to read expected number of bytes from 
stream. This may be transient 
   Expected :1024
   Actual   :605`
   - org.apache.hadoop.fs.s3a.tools.ITestMarkerTool
   `java.nio.file.AccessDeniedException: : listObjects: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
FN0SQ82F85TGTZPW; S3 Extended Request ID: 
j1bhdYzzKkMVQqSgPDEPW7QQXMkVE+WJeLKP81l/qs7uF0RVx1xcUk2r6Wri4NQFlt/XE9W+FBo=; 
Proxy: null), S3 Extended Request ID: 
j1bhdYzzKkMVQqSgPDEPW7QQXMkVE+WJeLKP81l/qs7uF0RVx1xcUk2r6Wri4NQFlt/XE9W+FBo=:AccessDenied`
   - org.apache.hadoop.fs.s3a.select.ITestS3SelectMRJob
   - org.apache.hadoop.fs.s3a.statistics.ITestAWSStatisticCollection
   - org.apache.hadoop.fs.s3a.auth.delegation.ITestDelegatedMRJob
   `java.nio.file.AccessDeniedException: 
s3a://osm-pds/planet/planet-latest.orc#_partition.lst: getFileStatus on 
s3a://osm-pds/planet/planet-latest.orc#_partition.lst: 
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: A1Y4D90WW452Q8A9; 
S3 Extended Request ID: 
b/IV48OeMEgTaxikC9raP+IiHVPve3rIeoVkCymMc5opNp/70Iyc0tY2WZ0zpixFl0w7WT3bBCQ=; 
Proxy: null), S3 Extended Request ID: 
b/IV48OeMEgTaxikC9raP+IiHVPve3rIeoVkCymMc5opNp/70Iyc0tY2WZ0zpixFl0w7WT3bBCQ=:403
 Forbidden`
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17764) S3AInputStream read does not re-open the input stream on the second read retry attempt

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17764?focusedWorklogId=614105=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614105
 ]

ASF GitHub Bot logged work on HADOOP-17764:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 16:23
Start Date: 23/Jun/21 16:23
Worklog Time Spent: 10m 
  Work Description: majdyz commented on pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109#issuecomment-866982708


   Here are the failing tests:
   
   - 
org.apache.hadoop.tools.contract.AbstractContractDistCpTest#testDistCpWithIterator
   `org.junit.runners.model.TestTimedOutException: test timed out after 180 
milliseconds`
   - 
org.apache.hadoop.fs.contract.AbstractContractUnbufferTest#testUnbufferOnClosedFile
   `java.lang.AssertionError: failed to read expected number of bytes from 
stream. This may be transient 
   Expected :1024
   Actual   :605`
   - org.apache.hadoop.fs.s3a.ITestS3AInconsistency#testGetFileStatus
   `java.lang.AssertionError: getFileStatus should fail due to delayed 
visibility.`
   - org.apache.hadoop.fs.s3a.tools.ITestMarkerTool
   `java.nio.file.AccessDeniedException: : listObjects: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
FN0SQ82F85TGTZPW; S3 Extended Request ID: 
j1bhdYzzKkMVQqSgPDEPW7QQXMkVE+WJeLKP81l/qs7uF0RVx1xcUk2r6Wri4NQFlt/XE9W+FBo=; 
Proxy: null), S3 Extended Request ID: 
j1bhdYzzKkMVQqSgPDEPW7QQXMkVE+WJeLKP81l/qs7uF0RVx1xcUk2r6Wri4NQFlt/XE9W+FBo=:AccessDenied`
   - org.apache.hadoop.fs.s3a.select.ITestS3SelectMRJob
   - org.apache.hadoop.fs.s3a.statistics.ITestAWSStatisticCollection
   - org.apache.hadoop.fs.s3a.auth.delegation.ITestDelegatedMRJob
   `java.nio.file.AccessDeniedException: 
s3a://osm-pds/planet/planet-latest.orc#_partition.lst: getFileStatus on 
s3a://osm-pds/planet/planet-latest.orc#_partition.lst: 
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: A1Y4D90WW452Q8A9; 
S3 Extended Request ID: 
b/IV48OeMEgTaxikC9raP+IiHVPve3rIeoVkCymMc5opNp/70Iyc0tY2WZ0zpixFl0w7WT3bBCQ=; 
Proxy: null), S3 Extended Request ID: 
b/IV48OeMEgTaxikC9raP+IiHVPve3rIeoVkCymMc5opNp/70Iyc0tY2WZ0zpixFl0w7WT3bBCQ=:403
 Forbidden`
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614105)
Time Spent: 5.5h  (was: 5h 20m)

> S3AInputStream read does not re-open the input stream on the second read 
> retry attempt
> --
>
> Key: HADOOP-17764
> URL: https://issues.apache.org/jira/browse/HADOOP-17764
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Zamil Majdy
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> *Bug description:*
> The read method in S3AInputStream has the following behaviour when an 
> IOException happens during the read:
>  * {{reopen and read quickly}}: the client, after failing in the first attempt 
> of {{read}}, will reopen the stream and try reading again without {{sleep}}.
>  * {{reopen and wait for fixed duration}}: the client, after a failed 
> attempt of {{read}}, will reopen the stream, sleep for 
> {{fs.s3a.retry.interval}} milliseconds (defaults to 500 ms), and then try 
> reading from the stream.
> While doing the {{reopen and read quickly}} process, the subsequent read will 
> be retried without reopening the input stream if a second failure 
> happens. This leads to some of the bytes read being skipped, which results in 
> corrupt/less data than required. 
>  
> *Scenario to reproduce:*
>  * Execute S3AInputStream `read()` or `read(b, off, len)`.
>  * The read fails and throws a `Connection Reset` exception after reading some 
> data.
>  * The InputStream is re-opened and another `read()` or `read(b, off, len)` 
> is executed.
>  * The read fails for the second time and throws a `Connection Reset` 
> exception after reading some data.
>  * The InputStream is not re-opened and another `read()` or `read(b, off, 
> len)` is executed after the sleep.
>  * The read succeeds, but it skips the first few bytes that have already been 
> read before the second failure.
>  
> *Proposed fix:*
> [https://github.com/apache/hadoop/pull/3109]
> Added the test that reproduces the issue along with the fix
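
For illustration, a schematic of the fix pattern (this is a sketch, not the 
actual patch in PR 3109; {{reopen}}, {{wrappedStream}}, {{closeStream}}, 
{{sleepUninterruptibly}}, {{pos}}, {{maxRetries}} and {{retryIntervalMs}} are 
simplified stand-ins for the real S3AInputStream internals):

{code}
// Sketch: reopen the underlying stream before *every* retry attempt, so no
// already-consumed bytes are silently skipped. Helper names are illustrative.
private int readWithRetries(byte[] b, int off, int len) throws IOException {
  IOException lastFailure = null;
  for (int attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      if (attempt > 0) {
        sleepUninterruptibly(retryIntervalMs); // fs.s3a.retry.interval
        reopen("read retry", pos, len);        // rebuild the stream at pos
      }
      return wrappedStream.read(b, off, len);
    } catch (IOException e) {
      lastFailure = e;
      closeStream();                           // drop the broken connection
    }
  }
  throw lastFailure;
}
{code}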



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hadoop] majdyz commented on pull request #3109: HADOOP-17764. S3AInputStream read does not re-open the input stream on the second read retry attempt

2021-06-23 Thread GitBox


majdyz commented on pull request #3109:
URL: https://github.com/apache/hadoop/pull/3109#issuecomment-866982708


   Here are the failing tests:
   
   - 
org.apache.hadoop.tools.contract.AbstractContractDistCpTest#testDistCpWithIterator
   `org.junit.runners.model.TestTimedOutException: test timed out after 180 
milliseconds`
   - 
org.apache.hadoop.fs.contract.AbstractContractUnbufferTest#testUnbufferOnClosedFile
   `java.lang.AssertionError: failed to read expected number of bytes from 
stream. This may be transient 
   Expected :1024
   Actual   :605`
   - org.apache.hadoop.fs.s3a.ITestS3AInconsistency#testGetFileStatus
   `java.lang.AssertionError: getFileStatus should fail due to delayed 
visibility.`
   - org.apache.hadoop.fs.s3a.tools.ITestMarkerTool
   `java.nio.file.AccessDeniedException: : listObjects: 
com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: 
Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: 
FN0SQ82F85TGTZPW; S3 Extended Request ID: 
j1bhdYzzKkMVQqSgPDEPW7QQXMkVE+WJeLKP81l/qs7uF0RVx1xcUk2r6Wri4NQFlt/XE9W+FBo=; 
Proxy: null), S3 Extended Request ID: 
j1bhdYzzKkMVQqSgPDEPW7QQXMkVE+WJeLKP81l/qs7uF0RVx1xcUk2r6Wri4NQFlt/XE9W+FBo=:AccessDenied`
   - org.apache.hadoop.fs.s3a.select.ITestS3SelectMRJob
   - org.apache.hadoop.fs.s3a.statistics.ITestAWSStatisticCollection
   - org.apache.hadoop.fs.s3a.auth.delegation.ITestDelegatedMRJob
   `java.nio.file.AccessDeniedException: 
s3a://osm-pds/planet/planet-latest.orc#_partition.lst: getFileStatus on 
s3a://osm-pds/planet/planet-latest.orc#_partition.lst: 
com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon 
S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: A1Y4D90WW452Q8A9; 
S3 Extended Request ID: 
b/IV48OeMEgTaxikC9raP+IiHVPve3rIeoVkCymMc5opNp/70Iyc0tY2WZ0zpixFl0w7WT3bBCQ=; 
Proxy: null), S3 Extended Request ID: 
b/IV48OeMEgTaxikC9raP+IiHVPve3rIeoVkCymMc5opNp/70Iyc0tY2WZ0zpixFl0w7WT3bBCQ=:403
 Forbidden`
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17772) ABFS: delete() should have timeout option

2021-06-23 Thread Zhuangyu Han (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17368301#comment-17368301
 ] 

Zhuangyu Han commented on HADOOP-17772:
---

We are using a wasbs:// URL.

> ABFS: delete() should have timeout option
> -
>
> Key: HADOOP-17772
> URL: https://issues.apache.org/jira/browse/HADOOP-17772
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Zhuangyu Han
>Priority: Major
>
> The delete() API in AzureBlobFileSystem could potentially get stuck when trying 
> to delete an infinitely leased blob file/directory. We hope that there is a 
> timeout option for this API so that delete() can throw a TimeoutException 
> when the specified timeout limit is reached.
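
Until such an option exists, a client-side bound is possible; a minimal sketch 
(the class and method names are hypothetical, not an ABFS API, and the caller 
supplies the FileSystem and timeout):

{code}
// Illustrative client-side timeout around delete(); not part of the ABFS API.
import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class TimedDelete {
  public static boolean deleteWithTimeout(FileSystem fs, Path path,
      long timeoutSeconds) throws IOException {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    try {
      Future<Boolean> result = pool.submit(() -> fs.delete(path, true));
      return result.get(timeoutSeconds, TimeUnit.SECONDS);
    } catch (TimeoutException e) {
      throw new IOException("delete timed out for " + path, e);
    } catch (ExecutionException | InterruptedException e) {
      throw new IOException("delete failed for " + path, e);
    } finally {
      pool.shutdownNow(); // best effort; a held blob lease may still block it
    }
  }
}
{code}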



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17771) S3AFS creation fails "Unable to find a region via the region provider chain."

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17771?focusedWorklogId=614071=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614071
 ]

ASF GitHub Bot logged work on HADOOP-17771:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 15:30
Start Date: 23/Jun/21 15:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-866939056


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/2/artifact/out/blanks-eol.txt)
 |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 1 new + 2 unchanged - 0 fixed 
= 3 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  5s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  75m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3133 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 2e43df8d71e4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 88a6c0f100b372b3eda775c2b23c0e0bb0a29cf5 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3133: HADOOP-17771. S3AFS creation fails without region set in ~/.aws/config.

2021-06-23 Thread GitBox


hadoop-yetus commented on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-866939056


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/2/artifact/out/blanks-eol.txt)
 |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   0m 21s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 1 new + 2 unchanged - 0 fixed 
= 3 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  5s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  75m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3133 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 2e43df8d71e4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 88a6c0f100b372b3eda775c2b23c0e0bb0a29cf5 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/2/testReport/ |
   | Max. process+thread count | 722 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3133/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
 

[jira] [Commented] (HADOOP-16128) Some S3A tests leak filesystem instances

2021-06-23 Thread Amit Chavan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17368277#comment-17368277
 ] 

Amit  Chavan commented on HADOOP-16128:
---

[~ste...@apache.org]  [~bogthe]  If no one is working on this, can I pick this 
up? I am new to the project and would like to start with small tasks to help me 
ramp up.

> Some S3A tests leak filesystem instances
> 
>
> Key: HADOOP-16128
> URL: https://issues.apache.org/jira/browse/HADOOP-16128
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.1.2
>Reporter: Steve Loughran
>Priority: Minor
>
> There are a few S3A ITests which call filesystem.newInstance() but don't 
> clean up afterwards by closing it. This leaks instances, thread pools, etc.
> * ITestS3AAWSCredentialsProvider.testAnonymousProvider()
> * ITestS3GuardWriteBack
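
The fix pattern for each offending test is small; a sketch (assuming the test's 
setup supplies {{uri}} and {{conf}}):

{code}
// Sketch: close any filesystem created via newInstance(), so its thread
// pools and connections are released even when assertions fail.
FileSystem fs = FileSystem.newInstance(uri, conf);
try {
  // ... test assertions against fs ...
} finally {
  fs.close();
}
{code}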



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #3086: HDFS-16039. RBF: Some indicators of RBFMetrics count inaccurately

2021-06-23 Thread GitBox


goiri commented on a change in pull request #3086:
URL: https://github.com/apache/hadoop/pull/3086#discussion_r657203156



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestRBFMetrics.java
##
@@ -382,4 +366,56 @@ private void testCapacity(FederationMBean bean) throws 
IOException {
 assertNotEquals(availableCapacity,
 BigInteger.valueOf(bean.getRemainingCapacity()));
   }
+
+  @Test
+  public void testDatanodeNumMetrics()
+  throws Exception {
+Configuration routerConf = new RouterConfigBuilder()
+.metrics()
+.http()
+.stateStore()
+.rpc()
+.build();
+MiniRouterDFSCluster cluster = new MiniRouterDFSCluster(false, 1);
+cluster.setNumDatanodesPerNameservice(0);
+cluster.addNamenodeOverrides(routerConf);
+cluster.startCluster();
+routerConf.setTimeDuration(
+RBFConfigKeys.DN_REPORT_CACHE_EXPIRE, 1, TimeUnit.SECONDS);
+cluster.addRouterOverrides(routerConf);
+cluster.startRouters();
+Router router = cluster.getRandomRouter().getRouter();
+// Register and verify all NNs with all routers
+cluster.registerNamenodes();
+cluster.waitNamenodeRegistration();
+RouterRpcServer rpcServer = router.getRpcServer();
+RBFMetrics rbfMetrics = router.getMetrics();
+// Create mock dn
+DatanodeInfo[] dNInfo = new DatanodeInfo[4];
+DatanodeInfo datanodeInfo = new DatanodeInfo.DatanodeInfoBuilder().build();
+datanodeInfo.setDecommissioned();
+dNInfo[0] = datanodeInfo;
+datanodeInfo = new DatanodeInfo.DatanodeInfoBuilder().build();
+datanodeInfo.setInMaintenance();
+dNInfo[1] = datanodeInfo;
+datanodeInfo = new DatanodeInfo.DatanodeInfoBuilder().build();
+datanodeInfo.startMaintenance();
+dNInfo[2] = datanodeInfo;
+datanodeInfo = new DatanodeInfo.DatanodeInfoBuilder().build();
+datanodeInfo.startDecommission();
+dNInfo[3] = datanodeInfo;
+
+rpcServer.getDnCache().put(HdfsConstants.DatanodeReportType.LIVE, dNInfo);

Review comment:
   This is a little unconventional.
   You should mark the getter as VisibleForTesting.
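
For instance, a sketch of that shape (the generic type of the cache here is a 
guess, not the actual RouterRpcServer signature):

```java
// Sketch: keep the cache private in RouterRpcServer and expose the getter
// for tests only; the type parameters below are illustrative.
@VisibleForTesting
public LoadingCache<DatanodeReportType, DatanodeInfo[]> getDnCache() {
  return dnCache;
}
```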

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/metrics/RBFMetrics.java
##
@@ -164,13 +163,13 @@ public RBFMetrics(Router router) throws IOException {
   RouterStore.class);
 }
 
-// Initialize the cache for the DN reports
 Configuration conf = router.getConfig();
-this.timeOut = conf.getTimeDuration(RBFConfigKeys.DN_REPORT_TIME_OUT,
-RBFConfigKeys.DN_REPORT_TIME_OUT_MS_DEFAULT, TimeUnit.MILLISECONDS);
 this.topTokenRealOwners = conf.getInt(
 RBFConfigKeys.DFS_ROUTER_METRICS_TOP_NUM_TOKEN_OWNERS_KEY,
 RBFConfigKeys.DFS_ROUTER_METRICS_TOP_NUM_TOKEN_OWNERS_KEY_DEFAULT);
+
+// Use RpcServer dnCache
+this.dnCache = this.router.getRpcServer().getDnCache();

Review comment:
   Not much benefit in getting this and setting it into an attribute.
   We can just do the get each time we need to access it.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
##
@@ -1757,7 +1757,7 @@ public void testRBFMetricsMethodsRelayOnStateStore() {
 // These methods relays on
 // {@link RBFMetrics#getActiveNamenodeRegistration()}
 assertEquals("{}", metrics.getNameservices());
-assertEquals(0, metrics.getNumLiveNodes());
+assertEquals(NUM_DNS * 2, metrics.getNumLiveNodes());

Review comment:
   Why is this like this now?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #3134: HDFS-16085. Move the getPermissionChecker out of the read lock

2021-06-23 Thread GitBox


tomscut commented on pull request #3134:
URL: https://github.com/apache/hadoop/pull/3134#issuecomment-866913600


   > +1
   
   Thanks @ayushtkn for your review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17769) Upgrade JUnit to 4.13.2

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17769?focusedWorklogId=614047=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614047
 ]

ASF GitHub Bot logged work on HADOOP-17769:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 14:37
Start Date: 23/Jun/21 14:37
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #3130:
URL: https://github.com/apache/hadoop/pull/3130#issuecomment-866895116


   Thanx for the update. Will push once the build completes.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614047)
Time Spent: 2h 50m  (was: 2h 40m)

> Upgrade JUnit to 4.13.2
> ---
>
> Key: HADOOP-17769
> URL: https://issues.apache.org/jira/browse/HADOOP-17769
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> JUnit 4.13.1 has a bug that is reported in JUnit 
> [issue-1652|https://github.com/junit-team/junit4/issues/1652], _Timeout 
> ThreadGroups should not be destroyed_.
> After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} 
> started to fail regularly in branch-3.x and branch-2.10.
> While investigating the failure in branch-2.10 (HDFS-16072), I found out that 
> the bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit 
> timeout would try to close a ThreadGroup that had already been closed, which 
> throws {{java.lang.IllegalThreadStateException}}.
> The bug has been fixed in JUnit 4.13.2.
> For branch-3.x, HDFS-15940 did not address the root cause of the problem. 
> Eventually, splitting {{TestBlockRecovery}} hid the bug, but the upgrade 
> needs to be done so that the problem does not show up in another unit test.
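
For context, a schematic of the affected test shape (not an actual Hadoop 
test): any JUnit 4 test with a timeout runs on a separate thread inside a 
dedicated ThreadGroup, and it is that group which 4.13.1 tried to destroy even 
when it had already been destroyed.

{code}
// Schematic only: @Test(timeout=...) makes JUnit run the body on a new
// thread in its own ThreadGroup; JUnit 4.13.1 could try to destroy that
// group again, raising java.lang.IllegalThreadStateException.
import org.junit.Test;

public class TimeoutShapeExample {
  @Test(timeout = 60000)
  public void testWithTimeout() throws Exception {
    Thread worker = new Thread(() -> { /* work in the timeout group */ });
    worker.start();
    worker.join();
  }
}
{code}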



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on pull request #3130: HADOOP-17769. Upgrade JUnit to 4.13.2. fixes TestBlockRecovery

2021-06-23 Thread GitBox


ayushtkn commented on pull request #3130:
URL: https://github.com/apache/hadoop/pull/3130#issuecomment-866895116


   Thanx for the update. Will push once the build completes.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17769) Upgrade JUnit to 4.13.2

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17769?focusedWorklogId=614045=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614045
 ]

ASF GitHub Bot logged work on HADOOP-17769:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 14:36
Start Date: 23/Jun/21 14:36
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #3131:
URL: https://github.com/apache/hadoop/pull/3131#issuecomment-866893518


   Merged. Thanx Everyone!!!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614045)
Time Spent: 2h 40m  (was: 2.5h)

> Upgrade JUnit to 4.13.2
> ---
>
> Key: HADOOP-17769
> URL: https://issues.apache.org/jira/browse/HADOOP-17769
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> JUnit 4.13.1 has a bug that is reported in JUnit 
> [issue-1652|https://github.com/junit-team/junit4/issues/1652], _Timeout 
> ThreadGroups should not be destroyed_.
> After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} 
> started to fail regularly in branch-3.x and branch-2.10.
> While investigating the failure in branch-2.10 (HDFS-16072), I found out that 
> the bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit 
> timeout would try to close a ThreadGroup that had already been closed, which 
> throws {{java.lang.IllegalThreadStateException}}.
> The bug has been fixed in JUnit 4.13.2.
> For branch-3.x, HDFS-15940 did not address the root cause of the problem. 
> Eventually, splitting {{TestBlockRecovery}} hid the bug, but the upgrade 
> needs to be done so that the problem does not show up in another unit test.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17769) Upgrade JUnit to 4.13.2

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17769?focusedWorklogId=614044=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614044
 ]

ASF GitHub Bot logged work on HADOOP-17769:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 14:35
Start Date: 23/Jun/21 14:35
Worklog Time Spent: 10m 
  Work Description: ayushtkn merged pull request #3131:
URL: https://github.com/apache/hadoop/pull/3131


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614044)
Time Spent: 2.5h  (was: 2h 20m)

> Upgrade JUnit to 4.13.2
> ---
>
> Key: HADOOP-17769
> URL: https://issues.apache.org/jira/browse/HADOOP-17769
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> JUnit 4.13.1 has a bug that is reported in JUnit 
> [issue-1652|https://github.com/junit-team/junit4/issues/1652], _Timeout 
> ThreadGroups should not be destroyed_.
> After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} 
> started to fail regularly in branch-3.x and branch-2.10.
> While investigating the failure in branch-2.10 (HDFS-16072), I found out that 
> the bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit 
> timeout would try to close a ThreadGroup that had already been closed, which 
> throws {{java.lang.IllegalThreadStateException}}.
> The bug has been fixed in JUnit 4.13.2.
> For branch-3.x, HDFS-15940 did not address the root cause of the problem. 
> Eventually, splitting {{TestBlockRecovery}} hid the bug, but the upgrade 
> needs to be done so that the problem does not show up in another unit test.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on pull request #3131: HADOOP-17769. Upgrade JUnit to 4.13.2. fixes TestBlockRecovery

2021-06-23 Thread GitBox


ayushtkn commented on pull request #3131:
URL: https://github.com/apache/hadoop/pull/3131#issuecomment-866893518


   Merged. Thanx Everyone!!!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn merged pull request #3131: HADOOP-17769. Upgrade JUnit to 4.13.2. fixes TestBlockRecovery

2021-06-23 Thread GitBox


ayushtkn merged pull request #3131:
URL: https://github.com/apache/hadoop/pull/3131


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17769) Upgrade JUnit to 4.13.2

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17769?focusedWorklogId=614041=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614041
 ]

ASF GitHub Bot logged work on HADOOP-17769:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 14:12
Start Date: 23/Jun/21 14:12
Worklog Time Spent: 10m 
  Work Description: amahussein commented on pull request #3131:
URL: https://github.com/apache/hadoop/pull/3131#issuecomment-866871531


   Thanks @jojochuang , @aajisaka , @ayushtkn and @ferhui for the reviews and 
feedback.
   This PR is ready to be merged.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614041)
Time Spent: 2h 20m  (was: 2h 10m)

> Upgrade JUnit to 4.13.2
> ---
>
> Key: HADOOP-17769
> URL: https://issues.apache.org/jira/browse/HADOOP-17769
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> JUnit 4.13.1 has a bug that is reported in JUnit 
> [issue-1652|https://github.com/junit-team/junit4/issues/1652], _Timeout 
> ThreadGroups should not be destroyed_.
> After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} 
> started to fail regularly in branch-3.x and branch-2.10.
> While investigating the failure in branch-2.10 (HDFS-16072), I found out that 
> the bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit 
> timeout would try to close a ThreadGroup that had already been closed, which 
> throws {{java.lang.IllegalThreadStateException}}.
> The bug has been fixed in JUnit 4.13.2.
> For branch-3.x, HDFS-15940 did not address the root cause of the problem. 
> Eventually, splitting {{TestBlockRecovery}} hid the bug, but the upgrade 
> needs to be done so that the problem does not show up in another unit test.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein commented on pull request #3131: HADOOP-17769. Upgrade JUnit to 4.13.2. fixes TestBlockRecovery

2021-06-23 Thread GitBox


amahussein commented on pull request #3131:
URL: https://github.com/apache/hadoop/pull/3131#issuecomment-866871531


   Thanks @jojochuang , @aajisaka , @ayushtkn and @ferhui for the reviews and 
feedback.
   This PR is ready to be merged.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17769) Upgrade JUnit to 4.13.2

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17769?focusedWorklogId=614040=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614040
 ]

ASF GitHub Bot logged work on HADOOP-17769:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 14:03
Start Date: 23/Jun/21 14:03
Worklog Time Spent: 10m 
  Work Description: amahussein commented on pull request #3130:
URL: https://github.com/apache/hadoop/pull/3130#issuecomment-866864214


   > @amahussein can you update this file, as suggested on 2.10 PR
   > 
https://github.com/apache/hadoop/blob/10b79a26fe0677b266acf237e8458e93743424a6/LICENSE-binary#L505
   
   Thanks @ayushtkn and @jojochuang for the feedback.
   I pushed another commit fixing the LICENSE-binary. Apparently, I missed it 
because I was searching for 4.13.1 while it was 4.12 in the LICENSE-binary.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614040)
Time Spent: 2h 10m  (was: 2h)

> Upgrade JUnit to 4.13.2
> ---
>
> Key: HADOOP-17769
> URL: https://issues.apache.org/jira/browse/HADOOP-17769
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0, 2.10.2, 3.2.3
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> JUnit 4.13.1 has a bug, reported in JUnit 
> [issue-1652|https://github.com/junit-team/junit4/issues/1652]: _Timeout 
> ThreadGroups should not be destroyed_.
> After upgrading JUnit to 4.13.1 in HADOOP-17602, {{TestBlockRecovery}} 
> started to fail regularly in branch-3.x and branch-2.10.
> While investigating the branch-2.10 failure in HDFS-16072, I found out that 
> this bug is the main reason {{TestBlockRecovery}} started to fail: the JUnit 
> timeout would try to destroy a ThreadGroup that had already been destroyed, 
> which throws {{java.lang.IllegalThreadStateException}}.
> The bug has been fixed in JUnit 4.13.2.
> For branch-3.x, HDFS-15940 did not address the root cause of the problem; 
> splitting {{TestBlockRecovery}} merely hid the bug. The upgrade still needs 
> to be done so that the problem does not show up in another unit test.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein commented on pull request #3130: HADOOP-17769. Upgrade JUnit to 4.13.2. fixes TestBlockRecovery

2021-06-23 Thread GitBox


amahussein commented on pull request #3130:
URL: https://github.com/apache/hadoop/pull/3130#issuecomment-866864214


   > @amahussein can you update this file, as suggested on 2.10 PR
   > 
https://github.com/apache/hadoop/blob/10b79a26fe0677b266acf237e8458e93743424a6/LICENSE-binary#L505
   
   Thanks @ayushtkn and @jojochuang for the feedback.
   I pushed another commit fixing the LICENSE-binary. Apparently, I missed it 
because I was searching for 4.13.1 while it was 4.12 in the LICENSE-binary.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HADOOP-16963) HADOOP-16582 changed mkdirs() behavior

2021-06-23 Thread David Mollitor (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Mollitor updated HADOOP-16963:

Comment: was deleted

(was: Can you please provide (and link) the corresponding Hive Jira?)

> HADOOP-16582 changed mkdirs() behavior
> --
>
> Key: HADOOP-16963
> URL: https://issues.apache.org/jira/browse/HADOOP-16963
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 3.3.0, 2.8.6, 2.9.3, 3.1.3, 3.2.2
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>
> HADOOP-16582 changed the behavior of {{mkdirs()}}.
> Some Hive tests depend on the old behavior and they fail miserably.
> {quote}
> earlier:
> all plain mkdirs(somePath) calls were fast-tracked to FileSystem.mkdirs, 
> which rerouted them to the mkdirs(somePath, somePerm) method with some 
> defaults (which were static)
> an implementation of FileSystem only needed to implement "mkdirs(somePath, 
> somePerm)" - because the other was not necessarily called if it was always 
> wrapped in a FilterFileSystem or something like that
> now:
> FilterFileSystem in particular forwards the call of mkdirs(p) to the actual 
> fs implementation...which may skip overridden mkdirs(somePath, somePerm) 
> methods
> ...and could cause issues for existing FileSystem implementations
> {quote}
> Filing this jira to address this problem.
> [~kgyrtkirk] [~ste...@apache.org] [~kihwal]
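The dispatch change can be condensed into a sketch (hypothetical BaseFs / 
FilterFs / AuditedFs classes standing in for FileSystem and FilterFileSystem; 
not the real Hadoop code):

{code:java}
abstract class BaseFs {
  // the single-arg overload reroutes to the two-arg one with a default
  public boolean mkdirs(String path) { return mkdirs(path, "default-perm"); }
  public abstract boolean mkdirs(String path, String perm);
}

class FilterFs extends BaseFs {
  protected final BaseFs inner;
  FilterFs(BaseFs inner) { this.inner = inner; }

  // new behavior: forward mkdirs(path) straight to the wrapped fs
  @Override public boolean mkdirs(String path) { return inner.mkdirs(path); }
  @Override public boolean mkdirs(String path, String perm) {
    return inner.mkdirs(path, perm);
  }
}

// a filter subclass that, as before, only overrides the two-arg variant
class AuditedFs extends FilterFs {
  AuditedFs(BaseFs inner) { super(inner); }
  @Override public boolean mkdirs(String path, String perm) {
    System.out.println("audit: mkdirs " + path);
    return super.mkdirs(path, perm);
  }
}
// Earlier, auditedFs.mkdirs(p) resolved through BaseFs to the audited two-arg
// override; now FilterFs.mkdirs(p) forwards to inner.mkdirs(p) and the audit
// override is skipped.
{code}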



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16963) HADOOP-16582 changed mkdirs() behavior

2021-06-23 Thread David Mollitor (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17368118#comment-17368118
 ] 

David Mollitor commented on HADOOP-16963:
-

Can you please provide (and link) the corresponding Hive Jira?

> HADOOP-16582 changed mkdirs() behavior
> --
>
> Key: HADOOP-16963
> URL: https://issues.apache.org/jira/browse/HADOOP-16963
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 3.3.0, 2.8.6, 2.9.3, 3.1.3, 3.2.2
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>
> HADOOP-16582 changed the behavior of {{mkdirs()}}.
> Some Hive tests depend on the old behavior and they fail miserably.
> {quote}
> earlier:
> all plain mkdirs(somePath) calls were fast-tracked to FileSystem.mkdirs, 
> which rerouted them to the mkdirs(somePath, somePerm) method with some 
> defaults (which were static)
> an implementation of FileSystem only needed to implement "mkdirs(somePath, 
> somePerm)" - because the other was not necessarily called if it was always 
> wrapped in a FilterFileSystem or something like that
> now:
> FilterFileSystem in particular forwards the call of mkdirs(p) to the actual 
> fs implementation...which may skip overridden mkdirs(somePath, somePerm) 
> methods
> ...and could cause issues for existing FileSystem implementations
> {quote}
> Filing this jira to address this problem.
> [~kgyrtkirk] [~ste...@apache.org] [~kihwal]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17771) S3AFS creation fails

2021-06-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17771:

Summary: S3AFS creation fails  (was: S3AFS creation fails without region 
set in ~/.aws/config)

> S3AFS creation fails
> 
>
> Key: HADOOP-17771
> URL: https://issues.apache.org/jira/browse/HADOOP-17771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
> Environment: * fs.s3a.endpoint is unset
> * Host outside EC2
> * without the file ~/.aws/config or without a region set in it
> * without the system property aws.region declaring a region
> * without the environment variable AWS_REGION declaring a region.
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> If you don't have {{fs.s3a.endpoint}} set and lack a region set in
> env var {{AWS_REGION_ENV_VAR}}, system property {{aws.region}} or the file  
> ~/.aws/config
> then S3A FS creation fails with  the message
> "Unable to find a region via the region provider chain."
> This is caused by the move to the AWS S3 client builder API in HADOOP-13551
> This is pretty dramatic and no doubt everyone will be asking "why didn't you 
> notice this?". But in fact there are some reasons:
> # When running in EC2, all is well, meaning our big test runs were all happy.
> # If a developer has fs.s3a.endpoint set for the test bucket, all is well.
> Those of us who work with buckets in the "wrong" regions tend to do this, not 
> least because it can save a HEAD request every time an FS is created.
> # If you have a region set in ~/.aws/config, then all is well.
> Reason #3 is the real surprise and the one which has really caught us out. Even 
> my tests against buckets in usw-2 through central didn't fail because of 
> course I, like my colleagues, have the AWS S3 client installed locally. This 
> was sufficient to make the problem go away. It is also why this has been an 
> intermittent problem on test clusters outside AWS infra: it really depended 
> on the VM/docker image whether things worked or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17771) S3AFS creation fails "Unable to find a region via the region provider chain."

2021-06-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17771:

Summary: S3AFS creation fails "Unable to find a region via the region 
provider chain."  (was: S3AFS creation fails)

> S3AFS creation fails "Unable to find a region via the region provider chain."
> -
>
> Key: HADOOP-17771
> URL: https://issues.apache.org/jira/browse/HADOOP-17771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
> Environment: * fs.s3a.endpoint is unset
> * Host outside EC2
> * without the file ~/.aws/config or without a region set in it
> * without the system property aws.region declaring a region
> * without the environment variable AWS_REGION declaring a region.
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> If you don't have {{fs.s3a.endpoint}} set and lack a region set in
> env var {{AWS_REGION_ENV_VAR}}, system property {{aws.region}} or the file  
> ~/.aws/config
> then S3A FS creation fails with  the message
> "Unable to find a region via the region provider chain."
> This is caused by the move to the AWS S3 client builder API in HADOOP-13551
> This is pretty dramatic and no doubt everyone will be asking "why didn't you 
> notice this?". But in fact there are some reasons:
> # When running in EC2, all is well, meaning our big test runs were all happy.
> # If a developer has fs.s3a.endpoint set for the test bucket, all is well.
> Those of us who work with buckets in the "wrong" regions tend to do this, not 
> least because it can save a HEAD request every time an FS is created.
> # If you have a region set in ~/.aws/config, then all is well.
> Reason #3 is the real surprise and the one which has really caught us out. Even 
> my tests against buckets in usw-2 through central didn't fail because of 
> course I, like my colleagues, have the AWS S3 client installed locally. This 
> was sufficient to make the problem go away. It is also why this has been an 
> intermittent problem on test clusters outside AWS infra: it really depended 
> on the VM/docker image whether things worked or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17771) S3AFS creation fails without region set in ~/.aws/config

2021-06-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17771:

Environment: 
* fs.s3a.endpoint is unset
* Host outside EC2
* without the file ~/.aws/config or without a region set in it
* without the system property aws.region declaring a region
* without the environment variable AWS_REGION declaring a region.


  was:
* Host outside EC2
* without the file ~/.aws/config or without a region set in it
* without the system property aws.region declaring a region
* without the environment variable AWS_REGION declaring a region.



> S3AFS creation fails without region set in ~/.aws/config
> 
>
> Key: HADOOP-17771
> URL: https://issues.apache.org/jira/browse/HADOOP-17771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
> Environment: * fs.s3a.endpoint is unset
> * Host outside EC2
> * without the file ~/.aws/config or without a region set in it
> * without the system property aws.region declaring a region
> * without the environment variable AWS_REGION declaring a region.
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> If you don't have {{fs.s3a.endpoint}} set and lack a region set in
> env var {{AWS_REGION_ENV_VAR}}, system property {{aws.region}} or the file  
> ~/.aws/config
> then S3A FS creation fails with  the message
> "Unable to find a region via the region provider chain."
> This is caused by the move to the AWS S3 client builder API in HADOOP-13551
> This is pretty dramatic and no doubt everyone will be asking "why didn't you 
> notice this?". But in fact there are some reasons:
> # When running in EC2, all is well, meaning our big test runs were all happy.
> # If a developer has fs.s3a.endpoint set for the test bucket, all is well.
> Those of us who work with buckets in the "wrong" regions tend to do this, not 
> least because it can save a HEAD request every time an FS is created.
> # If you have a region set in ~/.aws/config, then all is well.
> Reason #3 is the real surprise and the one which has really caught us out. Even 
> my tests against buckets in usw-2 through central didn't fail because of 
> course I, like my colleagues, have the AWS S3 client installed locally. This 
> was sufficient to make the problem go away. It is also why this has been an 
> intermittent problem on test clusters outside AWS infra: it really depended 
> on the VM/docker image whether things worked or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17771) S3AFS creation fails without region set in ~/.aws/config

2021-06-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17771:

Environment: 
* Host outside EC2
* without the file ~/.aws/config or without a region set in it
* without the system property aws.region declaring a region
* without the environment variable AWS_REGION declaring a region.


  was:Host outside EC2 and without the file ~/.aws/config or without a region 
set in it


> S3AFS creation fails without region set in ~/.aws/config
> 
>
> Key: HADOOP-17771
> URL: https://issues.apache.org/jira/browse/HADOOP-17771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
> Environment: * Host outside EC2
> * without the file ~/.aws/config or without a region set in it
> * without the system property aws.region declaring a region
> * without the environment variable AWS_REGION declaring a region.
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> If you don't have {{fs.s3a.endpoint}} set and lack a region set in
> env var {{AWS_REGION_ENV_VAR}}, system property {{aws.region}} or the file  
> ~/.aws/config
> then S3A FS creation fails with  the message
> "Unable to find a region via the region provider chain."
> This is caused by the move to the AWS S3 client builder API in HADOOP-13551
> This is pretty dramatic and no doubt everyone will be asking "why didn't you 
> notice this?". But in fact there are some reasons:
> # When running in EC2, all is well, meaning our big test runs were all happy.
> # If a developer has fs.s3a.endpoint set for the test bucket, all is well.
> Those of us who work with buckets in the "wrong" regions tend to do this, not 
> least because it can save a HEAD request every time an FS is created.
> # If you have a region set in ~/.aws/config, then all is well.
> Reason #3 is the real surprise and the one which has really caught us out. Even 
> my tests against buckets in usw-2 through central didn't fail because of 
> course I, like my colleagues, have the AWS S3 client installed locally. This 
> was sufficient to make the problem go away. It is also why this has been an 
> intermittent problem on test clusters outside AWS infra: it really depended 
> on the VM/docker image whether things worked or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims edited a comment on pull request #3128: YARN-10826. [UI2] Upgrade Node.js to v12.22.1.

2021-06-23 Thread GitBox


iwasakims edited a comment on pull request #3128:
URL: https://github.com/apache/hadoop/pull/3128#issuecomment-866790669


   I will cherry-pick this to branch-3.3 and branch-3.2 after testing.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims commented on pull request #3128: YARN-10826. [UI2] Upgrade Node.js to v12.22.1.

2021-06-23 Thread GitBox


iwasakims commented on pull request #3128:
URL: https://github.com/apache/hadoop/pull/3128#issuecomment-866790669


   I will backport this to branch-3.3 and branch-3.2 after testing.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17772) ABFS: delete() should have timeout option

2021-06-23 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17368078#comment-17368078
 ] 

Steve Loughran commented on HADOOP-17772:
-

abfs is timing out for a deep tree delete in HADOOP-17691, which is its own 
problem.

1. Are you referring to abfs:// or wasb:// URLs?
2. Can you replicate this on hadoop-3.3.1?



> ABFS: delete() should have timeout option
> -
>
> Key: HADOOP-17772
> URL: https://issues.apache.org/jira/browse/HADOOP-17772
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Zhuangyu Han
>Priority: Major
>
> The delete() API in AzureBlobFileSystem could potentially get stuck when 
> trying to delete an infinitely leased blob file/directory. We hope that 
> there is a timeout option for this API, so that delete() can throw a 
> TimeoutException when the specified timeout limit is reached.
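Until such an option exists, a caller can bound the call from outside; a 
minimal sketch with java.util.concurrent (BoundedDelete / deleteWithTimeout 
are made-up names, not an AzureBlobFileSystem API):

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

final class BoundedDelete {
  static boolean deleteWithTimeout(Callable<Boolean> delete,
                                   long timeout, TimeUnit unit)
      throws Exception {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    Future<Boolean> result = pool.submit(delete);
    try {
      return result.get(timeout, unit);  // throws TimeoutException on expiry
    } finally {
      result.cancel(true);               // best effort; a held lease may block
      pool.shutdownNow();
    }
  }
}
// usage sketch: BoundedDelete.deleteWithTimeout(
//     () -> fs.delete(path, true), 30, TimeUnit.SECONDS);
{code}

Note this bounds the caller's wait rather than the underlying request: the 
worker thread keeps running if the delete ignores the interrupt.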



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17772) ABFS: delete() should have timeout option

2021-06-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17772:

Parent: HADOOP-17736
Issue Type: Sub-task  (was: Improvement)

> ABFS: delete() should have timeout option
> -
>
> Key: HADOOP-17772
> URL: https://issues.apache.org/jira/browse/HADOOP-17772
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Zhuangyu Han
>Priority: Major
>
> The delete() API in AzureBlobFileSystem could potentially get stuck when 
> trying to delete an infinitely leased blob file/directory. We hope that 
> there is a timeout option for this API, so that delete() can throw a 
> TimeoutException when the specified timeout limit is reached.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-15763) Über-JIRA: abfs phase II: Hadoop 3.3 features & fixes

2021-06-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-15763.
-
Fix Version/s: 3.3.1
   Resolution: Fixed

> Über-JIRA: abfs phase II: Hadoop 3.3 features & fixes
> -
>
> Key: HADOOP-15763
> URL: https://issues.apache.org/jira/browse/HADOOP-15763
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Da Zhou
>Priority: Major
> Fix For: 3.3.1
>
>
> ABFS phase II: address issues which surface in the field; tune things which 
> need tuning, add more tests where appropriate. Improve docs, especially 
> troubleshooting. Classpaths. The usual.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16458) LocatedFileStatusFetcher scans failing intermittently against S3 store

2021-06-23 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17368077#comment-17368077
 ] 

Steve Loughran commented on HADOOP-16458:
-

hmm. Given the purpose of that change was to make the underlying cause of a 
failure visible, it's hard to feel *too* bad that the visibility of the root 
cause is now a problem. 

The exception being raised is still the same, so the change David has added is 
backwards compatible. The old code contained the assumption that the inner 
cause on InvalidInputException was always null. With the Hive fix, the error 
log will now actually contain whatever the underlying cause of that 
InvalidInputException is, which should benefit all.

> LocatedFileStatusFetcher scans failing intermittently against S3 store
> --
>
> Key: HADOOP-16458
> URL: https://issues.apache.org/jira/browse/HADOOP-16458
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0
> Environment: S3 + S3Guard
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> Intermittent failure of LocatedFileStatusFetcher.getFileStatuses(), which is 
> using globStatus to find files.
> I'd say "turn s3guard on" except this appears to be the case, and the dataset 
> being read is
> over 1h old.
> Which means it is harder than I'd like to blame S3 for what would sound like 
> an inconsistency
> We're hampered by the number of debug level statements in the globber code 
> being approximately none; there's no debugging to turn on. All we know is 
> that globFiles returns null without any explanation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17774) bytesRead FS statistic showing twice the correct value in S3A

2021-06-23 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17774:

Parent: HADOOP-17566
Issue Type: Sub-task  (was: Bug)

> bytesRead FS statistic showing twice the correct value in S3A
> -
>
> Key: HADOOP-17774
> URL: https://issues.apache.org/jira/browse/HADOOP-17774
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>
> S3A "bytes read" statistic is being incremented twice. Firstly while reading 
> in S3AInputStream and then in merge() of S3AInstrumentation when 
> S3AInputStream is closed.
> This makes "bytes read" statistic equal to sum of stream_read_bytes and 
> stream_read_total_bytes.
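A toy model of the double count (made-up Stats / CountingStream classes, not 
the S3A code):

{code:java}
class Stats { long bytesRead; }

class CountingStream {
  private final Stats fsStats;
  private long streamBytes;
  CountingStream(Stats fsStats) { this.fsStats = fsStats; }

  int read(int len) {
    streamBytes += len;
    fsStats.bytesRead += len;          // increment #1: per read()
    return len;
  }
  void close() {
    fsStats.bytesRead += streamBytes;  // increment #2: merge on close
  }
}
// after reading N bytes and closing, fsStats.bytesRead == 2 * N
{code}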



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17290) ABFS: Add Identifiers to Client Request Header

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17290?focusedWorklogId=613963=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-613963
 ]

ASF GitHub Bot logged work on HADOOP-17290:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 10:54
Start Date: 23/Jun/21 10:54
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2520:
URL: https://github.com/apache/hadoop/pull/2520#issuecomment-866736669


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 37 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  19m 25s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 12s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  87m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2520/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2520 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux faab5f080aa6 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6250d04f25d1efed187a0835e70f53415f0e1378 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2520/19/testReport/ |
   | Max. process+thread count | 715 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2520: HADOOP-17290. ABFS: Add Identifiers to Client Request Header

2021-06-23 Thread GitBox


hadoop-yetus commented on pull request #2520:
URL: https://github.com/apache/hadoop/pull/2520#issuecomment-866736669


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 37 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 44s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  19m 25s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 12s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  87m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2520/19/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2520 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux faab5f080aa6 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6250d04f25d1efed187a0835e70f53415f0e1378 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2520/19/testReport/ |
   | Max. process+thread count | 715 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2520/19/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to 

[jira] [Work logged] (HADOOP-17749) Remove lock contention in SelectorPool of SocketIOWithTimeout

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17749?focusedWorklogId=613951=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-613951
 ]

ASF GitHub Bot logged work on HADOOP-17749:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 10:40
Start Date: 23/Jun/21 10:40
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3080:
URL: https://github.com/apache/hadoop/pull/3080#issuecomment-866728745


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  28m  0s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/6/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |  29m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  24m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 6 unchanged - 
7 fixed = 6 total (was 13)  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  4s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 196m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3080 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 3b8b7bf7739f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f0c98c29acbd65770681e5e2e4bb1e743967b588 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/6/testReport/ |
   | Max. process+thread count | 1211 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3080: HADOOP-17749. Remove lock contention in SelectorPool of SocketIOWithTimeout

2021-06-23 Thread GitBox


hadoop-yetus commented on pull request #3080:
URL: https://github.com/apache/hadoop/pull/3080#issuecomment-866728745


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  28m  0s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/6/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |  29m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  24m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 54s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 21s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 6 unchanged - 
7 fixed = 6 total (was 13)  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 33s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  4s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 196m  3s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3080 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 3b8b7bf7739f 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f0c98c29acbd65770681e5e2e4bb1e743967b588 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/6/testReport/ |
   | Max. process+thread count | 1211 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3080/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to 

[GitHub] [hadoop] iwasakims commented on pull request #3128: YARN-10826. [UI2] Upgrade Node.js to v12.22.1.

2021-06-23 Thread GitBox


iwasakims commented on pull request #3128:
URL: https://github.com/apache/hadoop/pull/3128#issuecomment-866725556


   Thanks, @aajisaka. I updated the commit message and issue title.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims merged pull request #3128: YARN-10826. [UI2] Upgrade Node.js to at least 12.x.

2021-06-23 Thread GitBox


iwasakims merged pull request #3128:
URL: https://github.com/apache/hadoop/pull/3128


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17774) bytesRead FS statistic showing twice the correct value in S3A

2021-06-23 Thread Mehakmeet Singh (Jira)
Mehakmeet Singh created HADOOP-17774:


 Summary: bytesRead FS statistic showing twice the correct value in 
S3A
 Key: HADOOP-17774
 URL: https://issues.apache.org/jira/browse/HADOOP-17774
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Mehakmeet Singh
Assignee: Mehakmeet Singh


S3A "bytes read" statistic is being incremented twice. Firstly while reading in 
S3AInputStream and then in merge() of S3AInstrumentation when S3AInputStream is 
closed.

This makes "bytes read" statistic equal to sum of stream_read_bytes and 
stream_read_total_bytes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17771) S3AFS creation fails without region set in ~/.aws/config

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17771?focusedWorklogId=613930=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-613930
 ]

ASF GitHub Bot logged work on HADOOP-17771:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 09:56
Start Date: 23/Jun/21 09:56
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#3133:
URL: https://github.com/apache/hadoop/pull/3133#discussion_r656942766



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
##
@@ -141,6 +142,10 @@ protected AmazonS3 buildAmazonS3Client(
   // no idea what the endpoint is, so tell the SDK
   // to work it out at the cost of an extra HEAD request
   b.withForceGlobalBucketAccessEnabled(true);
+  // HADOOP-17771 force set the region so the build process doesn't halt.
+  String region = getConf().getTrimmed(AWS_REGION, AWS_S3_CENTRAL_REGION);
+  LOG.debug("Using default endpoint; setting region to {}", region);
+  b.setRegion(region);

Review comment:
skip if region is empty? Say a config has set fs.s3a.endpoint=""; if we 
skip calling setRegion() there, then the connector will go to the SDK region 
resolution process, including picking up info from EC2 metadata.
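   A sketch of that suggested guard against the patch lines above (an 
illustration, not the merged change):

```java
// Only force a region when one was actually configured; an explicitly empty
// fs.s3a.endpoint.region then falls back to the SDK's own region resolution
// chain (env var, system property, ~/.aws/config, EC2 metadata).
String region = getConf().getTrimmed(AWS_REGION, AWS_S3_CENTRAL_REGION);
if (!region.isEmpty()) {
  LOG.debug("Using default endpoint; setting region to {}", region);
  b.setRegion(region);
} else {
  LOG.debug("No region configured; using SDK region resolution chain");
}
```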

##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
##
@@ -1087,4 +1087,11 @@ private Constants() {
*/
   public static final String AWS_REGION = "fs.s3a.endpoint.region";
 
+  /**
+   * The special S3 region which can be used to talk to any bucket if
+   * the global bucket resolution is enabled (which it is...)

Review comment:
   need a .




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 613930)
Time Spent: 1h 40m  (was: 1.5h)

> S3AFS creation fails without region set in ~/.aws/config
> 
>
> Key: HADOOP-17771
> URL: https://issues.apache.org/jira/browse/HADOOP-17771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
> Environment: Host outside EC2 and without the file ~/.aws/config or 
> without a region set in it
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> If you don't have {{fs.s3a.endpoint}} set and lack a region set in
> env var {{AWS_REGION_ENV_VAR}}, system property {{aws.region}} or the file  
> ~/.aws/config
> then S3A FS creation fails with  the message
> "Unable to find a region via the region provider chain."
> This is caused by the move to the AWS S3 client builder API in HADOOP-13551
> This is pretty dramatic and no doubt everyone will be asking "why didn't you 
> notice this?". But in fact there are some reasons:
> # When running in EC2, all is well, meaning our big test runs were all happy.
> # If a developer has fs.s3a.endpoint set for the test bucket, all is well.
> Those of us who work with buckets in the "wrong" regions tend to do this, not 
> least because it can save a HEAD request every time an FS is created.
> # If you have a region set in ~/.aws/config, then all is well.
> Reason #3 is the real surprise and the one which has really caught us out. Even 
> my tests against buckets in usw-2 through central didn't fail because of 
> course I, like my colleagues, have the AWS S3 client installed locally. This 
> was sufficient to make the problem go away. It is also why this has been an 
> intermittent problem on test clusters outside AWS infra: it really depended 
> on the VM/docker image whether things worked or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #3133: HADOOP-17771. S3AFS creation fails without region set in ~/.aws/config.

2021-06-23 Thread GitBox


steveloughran commented on a change in pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#discussion_r656942766



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java
##
@@ -141,6 +142,10 @@ protected AmazonS3 buildAmazonS3Client(
   // no idea what the endpoint is, so tell the SDK
   // to work it out at the cost of an extra HEAD request
   b.withForceGlobalBucketAccessEnabled(true);
+  // HADOOP-17771 force set the region so the build process doesn't halt.
+  String region = getConf().getTrimmed(AWS_REGION, AWS_S3_CENTRAL_REGION);
+  LOG.debug("Using default endpoint; setting region to {}", region);
+  b.setRegion(region);

Review comment:
skip if region is empty? Say a config has set fs.s3a.endpoint=""; if we 
skip calling setRegion() there, then the connector will go to the SDK region 
resolution process, including picking up info from EC2 metadata.

##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java
##
@@ -1087,4 +1087,11 @@ private Constants() {
*/
   public static final String AWS_REGION = "fs.s3a.endpoint.region";
 
+  /**
+   * The special S3 region which can be used to talk to any bucket if
+   * the global bucket resolution is enabled (which it is...)

Review comment:
   need a .




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17771) S3AFS creation fails without region set in ~/.aws/config

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17771?focusedWorklogId=613923=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-613923
 ]

ASF GitHub Bot logged work on HADOOP-17771:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 09:51
Start Date: 23/Jun/21 09:51
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-866696567


   + Maybe I could test this by setting the sysprop `aws.region` to something 
invalid. If the region resolution is going through the chain, then this would 
get picked up ahead of ~/.aws/config or EC2 and so, being invalid, fail somehow. 
And if we weren't using that chain, all would be good.
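   A sketch of that probe (bucket name and region value are placeholders):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

// If the SDK region provider chain is consulted, an invalid aws.region
// sysprop should make S3A client creation fail.
public class RegionChainProbe {
  public static void main(String[] args) throws Exception {
    System.setProperty("aws.region", "invalid-region-1");  // made-up value
    try {
      FileSystem fs = FileSystem.newInstance(
          new URI("s3a://example-bucket/"), new Configuration());
      System.out.println("created OK -> chain not consulted: " + fs.getUri());
    } finally {
      System.clearProperty("aws.region");
    }
  }
}
```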


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 613923)
Time Spent: 1.5h  (was: 1h 20m)

> S3AFS creation fails without region set in ~/.aws/config
> 
>
> Key: HADOOP-17771
> URL: https://issues.apache.org/jira/browse/HADOOP-17771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
> Environment: Host outside EC2 and without the file ~/.aws/config or 
> without a region set in it
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> If you don't have {{fs.s3a.endpoint}} set and lack a region set in
> env var {{AWS_REGION_ENV_VAR}}, system property {{aws.region}} or the file  
> ~/.aws/config
> then S3A FS creation fails with  the message
> "Unable to find a region via the region provider chain."
> This is caused by the move to the AWS S3 client builder API in HADOOP-13551
> This is pretty dramatic and no doubt everyone will be asking "why didn't you 
> notice this?". But in fact there are some reasons:
> # When running in EC2, all is well, meaning our big test runs were all happy.
> # If a developer has fs.s3a.endpoint set for the test bucket, all is well.
> Those of us who work with buckets in the "wrong" regions tend to do this, not 
> least because it can save a HEAD request every time an FS is created.
> # If you have a region set in ~/.aws/config, then all is well.
> Reason #3 is the real surprise and the one which has really caught us out. Even 
> my tests against buckets in usw-2 through central didn't fail because of 
> course I, like my colleagues, have the AWS S3 client installed locally. This 
> was sufficient to make the problem go away. It is also why this has been an 
> intermittent problem on test clusters outside AWS infra: it really depended 
> on the VM/docker image whether things worked or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3133: HADOOP-17771. S3AFS creation fails without region set in ~/.aws/config.

2021-06-23 Thread GitBox


steveloughran commented on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-866696567


   + Maybe I could test this by setting the sysprop `aws.region` to something 
invalid. If the region resolution is going through the chain, then this would 
get picked up ahead of ~/.aws/config or EC2 and so, being invalid, fail somehow. 
And if we weren't using that chain, all would be good.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17771) S3AFS creation fails without region set in ~/.aws/config

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17771?focusedWorklogId=613921&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-613921
 ]

ASF GitHub Bot logged work on HADOOP-17771:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 09:48
Start Date: 23/Jun/21 09:48
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-866694906


   One thought here: would you ever want the s3a connector to fall back to that 
bundled region lookup sequence?
   
   I'm wondering in particular if it makes a difference in routing/billing on 
EC2 deployments.
   
   As of Hadoop 3.3.1, if region=null and endpoint=null, the EC2 metadata is 
used to provide the region info (this is new).
   With this patch, if endpoint = null we switch to saying region = us-east-1.
   Will that do bad things for signing/routing HTTP connections in an EC2 
deployment in a different region?
   For example, if I am running in AWS Ireland, will this cause requests to go 
to us-east-1, even if they then end up redirected back to eu-west-1?
   
   This could mean connections are slower to set up, plus a risk of remote data 
transfer and billing (though the redirections should fix that, right?); and if 
the rules for a deployment prevent out-of-region network traffic, will this 
break? I think we have hit problems related to this in the past.
   
   Put differently: is anything special happening with the default "null" 
endpoint and EC2 metadata region name provision which we need to know about and 
support? If so, we could allow the region to be set to "" or maybe "ec2" and 
have that revert to the resolve chain.
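
   For reference, a rough sketch of the resolution order being debated, written 
against the v1 AWS SDK; the "" / "ec2" sentinel is only the suggestion above, 
not shipped behaviour:

   ```java
   import com.amazonaws.regions.DefaultAwsRegionProviderChain;
   import com.amazonaws.regions.Regions;

   final class RegionFallbackSketch {
     static String resolveRegion(String endpoint, String configuredRegion) {
       if (endpoint != null && !endpoint.isEmpty()) {
         // An explicit endpoint wins; the signing region is derived from it.
         return null;
       }
       if ("".equals(configuredRegion) || "ec2".equals(configuredRegion)) {
         // Suggested sentinel: revert to the SDK chain
         // (sysprop, env var, ~/.aws/config, EC2 instance metadata).
         return new DefaultAwsRegionProviderChain().getRegion();
       }
       if (configuredRegion != null) {
         return configuredRegion;
       }
       // Behaviour under discussion: default to us-east-1 and rely on
       // S3 redirects for buckets homed elsewhere.
       return Regions.US_EAST_1.getName();
     }
   }
   ```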


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 613921)
Time Spent: 1h 20m  (was: 1h 10m)

> S3AFS creation fails without region set in ~/.aws/config
> 
>
> Key: HADOOP-17771
> URL: https://issues.apache.org/jira/browse/HADOOP-17771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
> Environment: Host outside EC2 and without the file ~/.aws/config or 
> without a region set in it
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> If you don't have {{fs.s3a.endpoint}} set and lack a region set in the
> env var {{AWS_REGION}}, the system property {{aws.region}}, or the file
> ~/.aws/config,
> then S3A FS creation fails with the message
> "Unable to find a region via the region provider chain."
> This is caused by the move to the AWS S3 client builder API in HADOOP-13551.
> This is pretty dramatic, and no doubt everyone will be asking "why didn't you 
> notice this?". But in fact there are some reasons.
> # when running in EC2, all is well. Meaning our big test runs were all happy.
> # if a developer has fs.s3a.endpoint set for the test bucket, all is well.
> Those of us who work with buckets in other regions tend to do this, not 
> least because it can save a HEAD request every time an FS is created.
> # if you have a region set in ~/.aws/config then all is well.
> Reason #3 is the real surprise and the one which has really caught us out. Even 
> my tests against buckets in usw-2 through central didn't fail because of 
> course I, like my colleagues, have the AWS CLI configured locally. This 
> was sufficient to make the problem go away. It is also why this has been an 
> intermittent problem on test clusters outside AWS infra: it really depended 
> on the VM/docker image whether things worked or not.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3133: HADOOP-17771. S3AFS creation fails without region set in ~/.aws/config.

2021-06-23 Thread GitBox


steveloughran commented on pull request #3133:
URL: https://github.com/apache/hadoop/pull/3133#issuecomment-866694906


   One thought here: would you ever want the s3a connector to fall back to that 
bundled region lookup sequence?
   
   I'm wondering in particular if it makes a difference in routing/billing on 
EC2 deployments.
   
   As of Hadoop 3.3.1, if region=null and endpoint=null, the EC2 metadata is 
used to provide the region info (this is new).
   With this patch, if endpoint = null we switch to saying region = us-east-1.
   Will that do bad things for signing/routing HTTP connections in an EC2 
deployment in a different region?
   For example, if I am running in AWS Ireland, will this cause requests to go 
to us-east-1, even if they then end up redirected back to eu-west-1?
   
   This could mean connections are slower to set up, plus a risk of remote data 
transfer and billing (though the redirections should fix that, right?); and if 
the rules for a deployment prevent out-of-region network traffic, will this 
break? I think we have hit problems related to this in the past.
   
   Put differently: is anything special happening with the default "null" 
endpoint and EC2 metadata region name provision which we need to know about and 
support? If so, we could allow the region to be set to "" or maybe "ec2" and 
have that revert to the resolve chain.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #3117: HDFS-16076. Avoid using slow DataNodes for reading by sorting locations

2021-06-23 Thread GitBox


tomscut commented on pull request #3117:
URL: https://github.com/apache/hadoop/pull/3117#issuecomment-866681953


   Thanks @tasanuma for your second review. These failed UTs work fine locally.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] aajisaka commented on pull request #3128: YARN-10826. [UI2] Upgrade Node.js to at least 12.x.

2021-06-23 Thread GitBox


aajisaka commented on pull request #3128:
URL: https://github.com/apache/hadoop/pull/3128#issuecomment-866659346


   Thank you @iwasakims 
   When committing, it's better to include the exact version in the commit 
message instead of "at least 12.x".


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3117: HDFS-16076. Avoid using slow DataNodes for reading by sorting locations

2021-06-23 Thread GitBox


hadoop-yetus commented on pull request #3117:
URL: https://github.com/apache/hadoop/pull/3117#issuecomment-866649231


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 346m 40s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3117/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   1m  4s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 438m 51s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus 
|
   |   | hadoop.hdfs.TestDFSShell |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3117/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3117 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 28e07ef0078f 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 74bb4aa36cc2f934905878f70762e628de073e93 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3117/5/testReport/ |
   | Max. process+thread count | 2328 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3117/5/console |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   

[jira] [Work logged] (HADOOP-17749) Remove lock contention in SelectorPool of SocketIOWithTimeout

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17749?focusedWorklogId=613844&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-613844
 ]

ASF GitHub Bot logged work on HADOOP-17749:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 07:26
Start Date: 23/Jun/21 07:26
Worklog Time Spent: 10m 
  Work Description: liangxs commented on a change in pull request #3080:
URL: https://github.com/apache/hadoop/pull/3080#discussion_r656829433



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketIOWithTimeout.java
##
@@ -426,34 +407,44 @@ private synchronized SelectorInfo get(SelectableChannel 
channel)
  * 
  * @param info
  */
-private synchronized void release(SelectorInfo info) {
+private static void release(SelectorInfo info) {
   long now = Time.now();
   trimIdleSelectors(now);
   info.lastActivityTime = now;
-  info.queue.addLast(info);
+  // SelectorInfos in queue are sorted by lastActivityTime
+  providerMap.get(info.provider).addLast(info);
 }
 
+private static AtomicBoolean trimming = new AtomicBoolean(false);
+private static volatile long lastTrimTime = Time.now();
+
 /**
  * Closes selectors that are idle for IDLE_TIMEOUT (10 sec). It does not
  * traverse the whole list, just over the one that have crossed 
  * the timeout.
  */
-private void trimIdleSelectors(long now) {
+private static void trimIdleSelectors(long now) {
+  if (!trimming.compareAndSet(false, true)) {
+return;
+  }
+  if (now - lastTrimTime < IDLE_TIMEOUT / 2) {

Review comment:
   @ferhui I removed this check in the new commit.
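
   For readers skimming the diff: the guard is the usual compare-and-set 
"single worker" pattern. A self-contained sketch with simplified names, not the 
actual Hadoop class; note the try/finally that resets the flag, which the 
fragment above does not show:

   ```java
   import java.util.concurrent.atomic.AtomicBoolean;

   final class IdleTrimmer {
     private static final AtomicBoolean TRIMMING = new AtomicBoolean(false);

     static void trimIdleSelectors(long now) {
       // Only one thread trims at a time; others return immediately instead
       // of queueing up on a shared monitor, which removes the contention.
       if (!TRIMMING.compareAndSet(false, true)) {
         return;
       }
       try {
         // ... walk the idle queue, closing selectors past the timeout ...
       } finally {
         TRIMMING.set(false); // release the flag even if trimming throws
       }
     }
   }
   ```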




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 613844)
Time Spent: 2h 40m  (was: 2.5h)

> Remove lock contention in SelectorPool of SocketIOWithTimeout
> -
>
> Key: HADOOP-17749
> URL: https://issues.apache.org/jira/browse/HADOOP-17749
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Xuesen Liang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> *SelectorPool* in 
> hadoop-common/src/main/java/org/apache/hadoop/net/*SocketIOWithTimeout.java* 
> is a point of lock contention.
> For example: 
> {code:java}
> $ grep 'waiting to lock <0x7f7d94006d90>' 63692.jstack | uniq -c
>  1005 - waiting to lock <0x7f7d94006d90> (a 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool)
> {code}
> and the thread stack is as follows:
> {code:java}
> "IPC Client (324579982) connection to /100.10.6.10:60020 from user_00" #14139 
> daemon prio=5 os_prio=0 tid=0x7f7374039000 nid=0x85cc waiting for monitor 
> entry [0x7f6f45939000]
>  java.lang.Thread.State: BLOCKED (on object monitor)
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.get(SocketIOWithTimeout.java:390)
>  - waiting to lock <0x7f7d94006d90> (a 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool)
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:325)
>  at 
> org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
>  at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
>  at java.io.FilterInputStream.read(FilterInputStream.java:133)
>  at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>  at java.io.BufferedInputStream.read(BufferedInputStream.java:265)
>  - locked <0x7fa818caf258> (a java.io.BufferedInputStream)
>  at java.io.DataInputStream.readInt(DataInputStream.java:387)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.readResponse(RpcClientImpl.java:967)
>  at 
> org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:568)
> {code}
> We should remove the lock contention.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] liangxs commented on a change in pull request #3080: HADOOP-17749. Remove lock contention in SelectorPool of SocketIOWithTimeout

2021-06-23 Thread GitBox


liangxs commented on a change in pull request #3080:
URL: https://github.com/apache/hadoop/pull/3080#discussion_r656829433



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/SocketIOWithTimeout.java
##
@@ -426,34 +407,44 @@ private synchronized SelectorInfo get(SelectableChannel 
channel)
  * 
  * @param info
  */
-private synchronized void release(SelectorInfo info) {
+private static void release(SelectorInfo info) {
   long now = Time.now();
   trimIdleSelectors(now);
   info.lastActivityTime = now;
-  info.queue.addLast(info);
+  // SelectorInfos in queue are sorted by lastActivityTime
+  providerMap.get(info.provider).addLast(info);
 }
 
+private static AtomicBoolean trimming = new AtomicBoolean(false);
+private static volatile long lastTrimTime = Time.now();
+
 /**
  * Closes selectors that are idle for IDLE_TIMEOUT (10 sec). It does not
  * traverse the whole list, just over the one that have crossed 
  * the timeout.
  */
-private void trimIdleSelectors(long now) {
+private static void trimIdleSelectors(long now) {
+  if (!trimming.compareAndSet(false, true)) {
+return;
+  }
+  if (now - lastTrimTime < IDLE_TIMEOUT / 2) {

Review comment:
   @ferhui I removed this check in the new commit.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut commented on pull request #3134: HDFS-16085. Move the getPermissionChecker out of the read lock

2021-06-23 Thread GitBox


tomscut commented on pull request #3134:
URL: https://github.com/apache/hadoop/pull/3134#issuecomment-866596347


   Hi @tasanuma @jojochuang @Hexiaoqiao , could you please take a look. Thanks. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3134: HDFS-16085. Move the getPermissionChecker out of the read lock

2021-06-23 Thread GitBox


hadoop-yetus commented on pull request #3134:
URL: https://github.com/apache/hadoop/pull/3134#issuecomment-866592953


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 232m 53s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 317m 23s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3134/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3134 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ec760d8d6ec5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0ddcd2458c2801a82b6da1c53bdb88bef2de0659 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3134/1/testReport/ |
   | Max. process+thread count | 3180 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3134/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


[jira] [Work logged] (HADOOP-17714) ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail when triggered with default configs

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17714?focusedWorklogId=613834&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-613834
 ]

ASF GitHub Bot logged work on HADOOP-17714:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 07:13
Start Date: 23/Jun/21 07:13
Worklog Time Spent: 10m 
  Work Description: snehavarma edited a comment on pull request #3126:
URL: https://github.com/apache/hadoop/pull/3126#issuecomment-866589147


   Test Results:
   Regions: FNS account - Eastus & HNS account-Eastus2euap
   
   HNS-OAuth
   
Results:

   Tests run: 98, Failures: 0, Errors: 0, Skipped: 0
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 265, Failures: 0, Errors: 2, Skipped: 52
   
   HNS-SharedKey
   
Results:

   Tests run: 98, Failures: 0, Errors: 0, Skipped: 0
Results:

   Tests run: 556, Failures: 0, Errors: 0, Skipped: 67
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 265, Failures: 0, Errors: 3, Skipped: 40
   
   NonHNS-SharedKey
   
Results:

   Tests run: 98, Failures: 0, Errors: 0, Skipped: 0
Results:

   Tests run: 556, Failures: 0, Errors: 0, Skipped: 276
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:78
 » TestTimedOut
 
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 265, Failures: 0, Errors: 3, Skipped: 40


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 613834)
Time Spent: 2.5h  (was: 2h 20m)

> ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests 
> fail when triggered with default configs
> --
>
> Key: HADOOP-17714
> URL: https://issues.apache.org/jira/browse/HADOOP-17714
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail 
> when triggered with default configs as http is not enabled on gen2 accounts 
> by default.
>  
> Options to fix it:
> the tests' config should enforce https by default, 
> or the tests should be modified not to execute http requests
>  
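
A sketch of the first option, assuming the standard ABFS configuration key 
{{fs.azure.always.use.https}} (verify the key against the hadoop-azure version 
in use; the class name is illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;

final class AbfsHttpsTestSetup {
  static Configuration testConfiguration() {
    Configuration conf = new Configuration();
    // Force HTTPS so the tests never issue plain-HTTP requests, which
    // Gen2 accounts reject by default.
    conf.setBoolean("fs.azure.always.use.https", true);
    return conf;
  }
}
{code}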



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snehavarma edited a comment on pull request #3126: HADOOP-17714 ABFS: testBlobBackCompatibility, testRandomRead & WasbAb…

2021-06-23 Thread GitBox


snehavarma edited a comment on pull request #3126:
URL: https://github.com/apache/hadoop/pull/3126#issuecomment-866589147


   Test Results:
   Regions: FNS account - Eastus & HNS account-Eastus2euap
   
   HNS-OAuth
   
Results:

   Tests run: 98, Failures: 0, Errors: 0, Skipped: 0
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 265, Failures: 0, Errors: 2, Skipped: 52
   
   HNS-SharedKey
   
Results:

   Tests run: 98, Failures: 0, Errors: 0, Skipped: 0
Results:

   Tests run: 556, Failures: 0, Errors: 0, Skipped: 67
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 265, Failures: 0, Errors: 3, Skipped: 40
   
   NonHNS-SharedKey
   
Results:

   Tests run: 98, Failures: 0, Errors: 0, Skipped: 0
Results:

   Tests run: 556, Failures: 0, Errors: 0, Skipped: 276
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:78
 » TestTimedOut
 
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 265, Failures: 0, Errors: 3, Skipped: 40


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17714) ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail when triggered with default configs

2021-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17714?focusedWorklogId=613833&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-613833
 ]

ASF GitHub Bot logged work on HADOOP-17714:
---

Author: ASF GitHub Bot
Created on: 23/Jun/21 07:10
Start Date: 23/Jun/21 07:10
Worklog Time Spent: 10m 
  Work Description: snehavarma commented on pull request #3126:
URL: https://github.com/apache/hadoop/pull/3126#issuecomment-866589147


   
   HNS-OAuth
   
Results:

   Tests run: 98, Failures: 0, Errors: 0, Skipped: 0
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 265, Failures: 0, Errors: 2, Skipped: 52
   
   HNS-SharedKey
   
Results:

   Tests run: 98, Failures: 0, Errors: 0, Skipped: 0
Results:

   Tests run: 556, Failures: 0, Errors: 0, Skipped: 67
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 265, Failures: 0, Errors: 3, Skipped: 40
   
   NonHNS-SharedKey
   
Results:

   Tests run: 98, Failures: 0, Errors: 0, Skipped: 0
Results:

   Tests run: 556, Failures: 0, Errors: 0, Skipped: 276
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:78
 » TestTimedOut
 
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 265, Failures: 0, Errors: 3, Skipped: 40


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 613833)
Time Spent: 2h 20m  (was: 2h 10m)

> ABFS: testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests 
> fail when triggered with default configs
> --
>
> Key: HADOOP-17714
> URL: https://issues.apache.org/jira/browse/HADOOP-17714
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Reporter: Sneha Varma
>Assignee: Sneha Varma
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> testBlobBackCompatibility, testRandomRead & WasbAbfsCompatibility tests fail 
> when triggered with default configs as http is not enabled on gen2 accounts 
> by default.
>  
> Options to fix it:
> the tests' config should enforce https by default, 
> or the tests should be modified not to execute http requests
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snehavarma commented on pull request #3126: HADOOP-17714 ABFS: testBlobBackCompatibility, testRandomRead & WasbAb…

2021-06-23 Thread GitBox


snehavarma commented on pull request #3126:
URL: https://github.com/apache/hadoop/pull/3126#issuecomment-866589147


   
   HNS-OAuth
   
Results:

   Tests run: 98, Failures: 0, Errors: 0, Skipped: 0
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 265, Failures: 0, Errors: 2, Skipped: 52
   
   HNS-SharedKey
   
Results:

   Tests run: 98, Failures: 0, Errors: 0, Skipped: 0
Results:

   Tests run: 556, Failures: 0, Errors: 0, Skipped: 67
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:84
 » TestTimedOut
 
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 265, Failures: 0, Errors: 3, Skipped: 40
   
   NonHNS-SharedKey
   
Results:

   Tests run: 98, Failures: 0, Errors: 0, Skipped: 0
Results:

   Tests run: 556, Failures: 0, Errors: 0, Skipped: 276
Results:

   Errors: 
 
ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:62->testReadWriteAndSeek:78
 » TestTimedOut
 
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
 
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut

   Tests run: 265, Failures: 0, Errors: 3, Skipped: 40


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-06-23 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17367912#comment-17367912
 ] 

Szilard Nemeth edited comment on HADOOP-15327 at 6/23/21, 7:00 AM:
---

Hey [~weichiu],
Thanks for putting the excerpt here. This could be fixed in parallel; I would 
be glad if you could point me to the config that needs to be changed.
Currently, I'm working on the test issues produced by the build that ran 
against patch003: 
hadoop.mapred.TestReduceFetchFromPartialMem
hadoop.mapred.TestReduceFetch
There are jiras related to these tests, but I checked the logs, saw very 
suspicious things, and they pointed me to a code defect.
I will upload the next patch soon, along with an explanation of what has 
changed since patch004.
Hopefully, this can be the last one and I can finally start testing on a 
cluster. 
I will also make sure to create proper manual testing documentation and to 
collect the test evidence.
I wouldn't expect any production issues (fingers crossed), as test coverage is 
quite good, and while fixing the tests I gained a lot of code knowledge; I am 
now familiar with the ShuffleHandler almost inside and out.


was (Author: snemeth):
Hey [~weichiu],
Thanks for putting the excerpt here. This could be fixed in parallel; I would 
be glad if you could point me to the config that needs to be changed.
Currently, I'm working on the test issues produced by the build that ran 
against patch003: 
hadoop.mapred.TestReduceFetchFromPartialMem
hadoop.mapred.TestReduceFetch
There are jiras related to these tests, but I checked the logs, saw very 
suspicious things, and they pointed me to a code defect.
I will upload the next patch soon, along with an explanation of what has 
changed since patch004.
Hopefully, this can be the last one and I can finally start testing on a 
cluster. I wouldn't expect any production issues (fingers crossed), as test 
coverage is quite good, and while fixing the tests I gained a lot of code 
knowledge; I am now familiar with the ShuffleHandler almost inside and out.

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log
>
>
> This way, we can remove the dependencies on netty3 (jboss.netty).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15327) Upgrade MR ShuffleHandler to use Netty4

2021-06-23 Thread Szilard Nemeth (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17367912#comment-17367912
 ] 

Szilard Nemeth commented on HADOOP-15327:
-

Hey [~weichiu],
Thanks for putting the excerpt here. This could be fixed in parallel; I would 
be glad if you could point me to the config that needs to be changed.
Currently, I'm working on the test issues produced by the build that ran 
against patch003: 
hadoop.mapred.TestReduceFetchFromPartialMem
hadoop.mapred.TestReduceFetch
There are jiras related to these tests, but I checked the logs, saw very 
suspicious things, and they pointed me to a code defect.
I will upload the next patch soon, along with an explanation of what has 
changed since patch004.
Hopefully, this can be the last one and I can finally start testing on a 
cluster. I wouldn't expect any production issues (fingers crossed), as test 
coverage is quite good, and while fixing the tests I gained a lot of code 
knowledge; I am now familiar with the ShuffleHandler almost inside and out.

> Upgrade MR ShuffleHandler to use Netty4
> ---
>
> Key: HADOOP-15327
> URL: https://issues.apache.org/jira/browse/HADOOP-15327
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Xiaoyu Yao
>Assignee: Szilard Nemeth
>Priority: Major
> Attachments: HADOOP-15327.001.patch, HADOOP-15327.002.patch, 
> HADOOP-15327.003.patch, HADOOP-15327.004.patch, 
> getMapOutputInfo_BlockingOperationException_awaitUninterruptibly.log
>
>
> This way, we can remove the dependencies on netty3 (jboss.netty).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-06-23 Thread Bogdan Stolojan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bogdan Stolojan reassigned HADOOP-17139:


Assignee: Bogdan Stolojan

> Re-enable optimized copyFromLocal implementation in S3AFileSystem
> -
>
> Key: HADOOP-17139
> URL: https://issues.apache.org/jira/browse/HADOOP-17139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Sahil Takiar
>Assignee: Bogdan Stolojan
>Priority: Minor
>
> It looks like HADOOP-15932 disabled the optimized copyFromLocal 
> implementation in S3A for correctness reasons. innerCopyFromLocalFile should 
> be fixed and re-enabled. The current implementation uses 
> FileSystem.copyFromLocal, which will open an input stream from the local fs 
> and an output stream to the destination fs, and then call IOUtils.copyBytes. 
> With default configs, this will cause S3A to read the file into memory, write 
> it back to a file on the local fs, and then, when the file is closed, upload 
> it to S3.
> The optimized version of copyFromLocal in innerCopyFromLocalFile directly 
> creates a PutObjectRequest with the local file as the input.
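
A sketch of the optimized path described above, against the v1 AWS SDK; bucket, 
key and file path are placeholders:

{code:java}
import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.PutObjectRequest;

final class OptimizedCopyFromLocalSketch {
  static void put(AmazonS3 s3) {
    // Hand the SDK the local file directly: no read into memory and no
    // round-trip through a local buffer file before the upload starts.
    s3.putObject(new PutObjectRequest(
        "example-bucket", "dest/key", new File("/tmp/local-file")));
  }
}
{code}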



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


