Re: [PR] HADOOP-19189. ITestS3ACommitterFactory failing [hadoop]

2024-06-06 Thread via GitHub


virajjasani commented on code in PR #6857:
URL: https://github.com/apache/hadoop/pull/6857#discussion_r1630727716


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestS3ACommitterFactory.java:
##
@@ -72,121 +85,156 @@ public class ITestS3ACommitterFactory extends 
AbstractCommitITest {
* Parameterized list of bindings of committer name in config file to
* expected class instantiated.
*/
-  private static final Object[][] bindings = {
-  {COMMITTER_NAME_FILE, FileOutputCommitter.class},
-  {COMMITTER_NAME_DIRECTORY, DirectoryStagingCommitter.class},
-  {COMMITTER_NAME_PARTITIONED, PartitionedStagingCommitter.class},
-  {InternalCommitterConstants.COMMITTER_NAME_STAGING,
-  StagingCommitter.class},
-  {COMMITTER_NAME_MAGIC, MagicS3GuardCommitter.class}
+  private static final Object[][] BINDINGS = {
+  {"", "", FileOutputCommitter.class, "Default Binding"},
+  {COMMITTER_NAME_FILE, "", FileOutputCommitter.class, "File committer in 
FS"},
+  {COMMITTER_NAME_PARTITIONED, "", PartitionedStagingCommitter.class,
+  "partitoned committer in FS"},
+  {COMMITTER_NAME_STAGING, "", StagingCommitter.class, "staging committer 
in FS"},
+  {COMMITTER_NAME_MAGIC, "", MagicS3GuardCommitter.class, "magic committer 
in FS"},
+  {COMMITTER_NAME_DIRECTORY, "", DirectoryStagingCommitter.class, "Dir 
committer in FS"},
+  {INVALID_NAME, "", null, "invalid committer in FS"},
+
+  {"", COMMITTER_NAME_FILE, FileOutputCommitter.class, "File committer in 
task"},
+  {"", COMMITTER_NAME_PARTITIONED, PartitionedStagingCommitter.class,
+  "partioned committer in task"},
+  {"", COMMITTER_NAME_STAGING, StagingCommitter.class, "staging committer 
in task"},
+  {"", COMMITTER_NAME_MAGIC, MagicS3GuardCommitter.class, "magic committer 
in task"},
+  {"", COMMITTER_NAME_DIRECTORY, DirectoryStagingCommitter.class, "Dir 
committer in task"},
+  {"", INVALID_NAME, null, "invalid committer in task"},
   };
 
   /**
-   * This is a ref to the FS conf, so changes here are visible
-   * to callers querying the FS config.
+   * Test array for parameterized test runs.
+   *
+   * @return the committer binding for this run.
*/
-  private Configuration filesystemConfRef;
-
-  private Configuration taskConfRef;
+  @Parameterized.Parameters(name = "{3}-fs=[{0}]-task=[{1}]-[{2}]")
+  public static Collection params() {
+return Arrays.asList(BINDINGS);
+  }
 
-  @Override
-  public void setup() throws Exception {
-super.setup();
-jobId = randomJobId();
-attempt0 = "attempt_" + jobId + "_m_00_0";
-taskAttempt0 = TaskAttemptID.forName(attempt0);
+  /**
+   * Name of committer to set in filesystem config. If "" do not set one.
+   */
+  private final String fsCommitterName;
 
-outDir = path(getMethodName());
-factory = new S3ACommitterFactory();
-Configuration conf = new Configuration();
-conf.set(FileOutputFormat.OUTDIR, outDir.toUri().toString());
-conf.set(MRJobConfig.TASK_ATTEMPT_ID, attempt0);
-conf.setInt(MRJobConfig.APPLICATION_ATTEMPT_ID, 1);
-filesystemConfRef = getFileSystem().getConf();
-tContext = new TaskAttemptContextImpl(conf, taskAttempt0);
-taskConfRef = tContext.getConfiguration();
-  }
+  /**
+   * Name of committer to set in job config.
+   */
+  private final String jobCommitterName;
 
-  @Test
-  public void testEverything() throws Throwable {
-testImplicitFileBinding();
-testBindingsInTask();
-testBindingsInFSConfig();
-testInvalidFileBinding();
-testInvalidTaskBinding();
-  }
+  /**
+   * Expected committer class.
+   * If null: an exception is expected.
+   */
+  private final Class committerClass;
 
   /**
-   * Verify that if all config options are unset, the FileOutputCommitter
-   *
-   * is returned.
+   * Description from parameters, simply for thread names to be more 
informative.
*/
-  public void testImplicitFileBinding() throws Throwable {
-taskConfRef.unset(FS_S3A_COMMITTER_NAME);
-filesystemConfRef.unset(FS_S3A_COMMITTER_NAME);
-assertFactoryCreatesExpectedCommitter(FileOutputCommitter.class);
-  }
+  private final String description;
 
   /**
-   * Verify that task bindings are picked up.
+   * Create a parameterized instance.
+   * @param fsCommitterName committer to set in filesystem config
+   * @param jobCommitterName committer to set in job config
+   * @param committerClass expected committer class
+   * @param description debug text for thread names.
*/
-  public void testBindingsInTask() throws Throwable {
-// set this to an invalid value to be confident it is not
-// being checked.
-filesystemConfRef.set(FS_S3A_COMMITTER_NAME, "INVALID");
-taskConfRef.set(FS_S3A_COMMITTER_NAME, COMMITTER_NAME_FILE);
-assertFactoryCreatesExpectedCommitter(FileOutputCommitter.class);
-for (Object[] binding : bindings) {
-  taskConfRef.set(FS_S3A_COMMITTE

[jira] [Commented] (HADOOP-19197) S3A: Support AWS KMS Encryption Context

2024-06-06 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17853016#comment-17853016
 ] 

Viraj Jasani commented on HADOOP-19197:
---

We need to use it in three places: CopyObjectRequest, PutObjectRequest and 
CreateMultipartUploadRequest.

> S3A: Support AWS KMS Encryption Context
> ---
>
> Key: HADOOP-19197
> URL: https://issues.apache.org/jira/browse/HADOOP-19197
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Raphael Azzolini
>Priority: Major
>
> S3A properties allow users to choose the AWS KMS key 
> (fs.s3a.encryption.key) and the S3 encryption algorithm to be used 
> (fs.s3a.encryption.algorithm). In addition to the AWS KMS key, an 
> encryption context can be used as non-secret data that adds additional 
> integrity and authenticity checks to the encrypted data. However, there is no 
> option to specify the [AWS KMS Encryption 
> Context|https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context]
>  in S3A.
> In AWS SDK v2 the encryption context in S3 requests is set by the parameter 
> [ssekmsEncryptionContext|https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/CreateMultipartUploadRequest.Builder.html#ssekmsEncryptionContext(java.lang.String)].
>  It receives a base64-encoded UTF-8 string holding JSON with the encryption 
> context key-value pairs. The value of this parameter could be set by the user 
> in a new property *fs.s3a.encryption.context*, and be stored in the 
> [EncryptionSecrets|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/EncryptionSecrets.java]
>  to later be used when setting the encryption parameters in 
> [RequestFactoryImpl|https://github.com/apache/hadoop/blob/f92a8ab8ae54f11946412904973eb60404dee7ff/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RequestFactoryImpl.java].
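A minimal sketch of what setting this parameter could look like with the AWS
SDK v2 builder methods named above; the JSON assembly, the bucket/key values,
and the helper class are illustrative assumptions, not the actual patch:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

import software.amazon.awssdk.services.s3.model.CreateMultipartUploadRequest;
import software.amazon.awssdk.services.s3.model.ServerSideEncryption;

public final class EncryptionContextSketch {

  /** Encode key-value pairs as the base64(UTF-8 JSON) string the parameter expects. */
  static String encodeContext(String jsonContext) {
    return Base64.getEncoder()
        .encodeToString(jsonContext.getBytes(StandardCharsets.UTF_8));
  }

  public static void main(String[] args) {
    // Hypothetical value a user might put in the proposed fs.s3a.encryption.context.
    String encoded = encodeContext("{\"project\":\"hadoop\"}");

    // The same builder method would be called on CopyObjectRequest and
    // PutObjectRequest, the other two places mentioned in the comment above.
    CreateMultipartUploadRequest request = CreateMultipartUploadRequest.builder()
        .bucket("example-bucket")
        .key("example-key")
        .serverSideEncryption(ServerSideEncryption.AWS_KMS)
        .ssekmsKeyId("example-kms-key-id")
        .ssekmsEncryptionContext(encoded)
        .build();
    System.out.println(request);
  }
}
```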






Re: [PR] HADOOP-19116. Update to zookeeper client 3.8.4 due to CVE-2024-23944 [hadoop]

2024-06-06 Thread via GitHub


virajjasani commented on PR #6675:
URL: https://github.com/apache/hadoop/pull/6675#issuecomment-2153958403

   Also, important to note that the last build has not run a single YARN or 
MapReduce test: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6675/4/testReport/
   Any patch that updates the LICENSE-binary file is expected to run the whole 
test suite, with builds taking 24+ hours (e.g. #6830)





[jira] [Commented] (HADOOP-18610) ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17853012#comment-17853012
 ] 

ASF GitHub Bot commented on HADOOP-18610:
-

anujmodi2021 commented on code in PR #6787:
URL: https://github.com/apache/hadoop/pull/6787#discussion_r1630628608


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/WorkloadIdentityTokenProvider.java:
##
@@ -63,58 +64,35 @@ protected AzureADToken refreshToken() throws IOException {
 return token;
   }
 
-  /**
-   * Gets the Azure AD token from a client assertion in JWT format.
-   * This method exists to make unit testing possible.
-   *
-   * @param clientAssertion the client assertion.
-   * @return the Azure AD token.
-   * @throws IOException if there is a failure in connecting to Azure AD.
-   */
-  @VisibleForTesting
-  AzureADToken getTokenUsingJWTAssertion(String clientAssertion) throws 
IOException {
-return AzureADAuthenticator
-.getTokenUsingJWTAssertion(authEndpoint, clientId, clientAssertion);
-  }
-
   /**
* Checks if the token is about to expire as per base expiry logic.
-   * Otherwise, try to expire if enough time has elapsed since the last 
refresh.
+   * Otherwise, expire if there is a clock skew issue in the system.
*
* @return true if the token is expiring in next 1 hour or if a token has
* never been fetched
*/
   @Override
   protected boolean isTokenAboutToExpire() {
-return super.isTokenAboutToExpire() || 
hasEnoughTimeElapsedSinceLastRefresh();
-  }
-
-  /**
-   * Checks to see if enough time has elapsed since the last token refresh.
-   *
-   * @return true if the token was last refreshed more than an hour ago.
-   */
-  protected boolean hasEnoughTimeElapsedSinceLastRefresh() {
-if (getTokenFetchTime() == -1) {
+if (tokenFetchTime == -1 || super.isTokenAboutToExpire()) {
   return true;
 }
-boolean expiring = false;
+
+// In case of any clock skew issues, refresh the token.
 long elapsedTimeSinceLastTokenRefreshInMillis =
-System.currentTimeMillis() - getTokenFetchTime();
-// In case token is not refreshed for 1 hr or any clock skew issues,
-// refresh token.
-expiring = elapsedTimeSinceLastTokenRefreshInMillis >= ONE_HOUR
-|| elapsedTimeSinceLastTokenRefreshInMillis < 0;
+System.currentTimeMillis() - tokenFetchTime;
+boolean expiring = elapsedTimeSinceLastTokenRefreshInMillis < 0;

Review Comment:
   Not necessarily. It is just there for cleaner code.
   If we get rid of it, we will have it replaced with 
`elapsedTimeSinceLastTokenRefreshInMillis < 0` in two places.
   
   Seems okay to me that way as well.
   Will take this.
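For reference, a minimal sketch of the inlined variant discussed here, based
only on the diff above (not the committed code):

```java
@Override
protected boolean isTokenAboutToExpire() {
  // Refresh if a token was never fetched or the base expiry logic says so.
  if (tokenFetchTime == -1 || super.isTokenAboutToExpire()) {
    return true;
  }
  // A negative elapsed time indicates clock skew, so force a refresh.
  return System.currentTimeMillis() - tokenFetchTime < 0;
}
```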





> ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS
> -
>
> Key: HADOOP-18610
> URL: https://issues.apache.org/jira/browse/HADOOP-18610
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.3.4
>Reporter: Haifeng Chen
>Assignee: Anuj Modi
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HADOOP-18610-preview.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In Jan 2023, Microsoft Azure AKS replaced its original pod-managed identity 
> with [Azure Active Directory (Azure AD) workload 
> identities|https://learn.microsoft.com/en-us/azure/active-directory/develop/workload-identities-overview]
>  (preview), which integrate with the Kubernetes-native capabilities to 
> federate with any external identity provider. This approach is simpler to 
> use and deploy.
> Refer to 
> [https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview] 
> and [https://azure.github.io/azure-workload-identity/docs/introduction.html] 
> for more details.
> The basic use scenario is to access Azure cloud resources (such as cloud 
> storage) from a Kubernetes (such as AKS) workload using an Azure managed 
> identity federated with a Kubernetes service account. The credential 
> environment variables projected into the pod by Azure AD workload identity 
> are like the following:
> AZURE_AUTHORITY_HOST: (Injected by the webhook, 
> [https://login.microsoftonline.com/])
> AZURE_CLIENT_ID: (Injected by the webhook)
> AZURE_TENANT_ID: (Injected by the webhook)
> AZURE_FEDERATED_TOKEN_FILE: (Injected by the webhook, 
> /var/run/secrets/azure/tokens/azure-identity-token)
> The token in the file pointed to by AZURE_FEDERATED_TOKEN_FILE is a JWT (JSON 
> Web Token) client assertion token which we can use to request to 
> AZURE_AUTHORITY_HOST (url is AZURE_AUTHORITY_HOST + tenantId + 
> "/oauth2/v2.0/token") for an AD token which

[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17853010#comment-17853010
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

anujmodi2021 commented on PR #6855:
URL: https://github.com/apache/hadoop/pull/6855#issuecomment-2153933059

   @steveloughran @mukund-thakur 
   This should be good to merge.
   
   Thanks




> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> 
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> This PR introduces a new configuration for fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS Token in the 
> account settings file itself. Ideally, this should be used with SAS Tokens 
> that are scoped at a container or account level (Service or Account SAS), 
> which can be considered constant for one account or container over 
> multiple operations.
> The other method of using a SAS Token remains valid as well, where a user 
> provides a custom implementation of the SASTokenProvider interface, through 
> which a SAS Token is obtained.
> When an Account SAS Token is configured as the fixed SAS Token and is used, 
> it is ensured that operations stay within the scope of the SAS Token.
> The code checks whether the fixed token and the token provider class 
> implementation are configured. If both are set, preference is given to the 
> custom SASTokenProvider implementation. It must be noted that if such an 
> implementation provides a SAS Token with a lower scope than Account SAS, 
> some filesystem- and service-level operations might be out of scope and may 
> not succeed.
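A minimal sketch of the precedence described above; fs.azure.sas.fixed.token
comes from this description, while the provider-type key name and the return
values are assumptions for illustration:

```java
import org.apache.hadoop.conf.Configuration;

final class SasTokenSourceSketch {

  // Key introduced by this PR, per the description above.
  static final String FS_AZURE_SAS_FIXED_TOKEN = "fs.azure.sas.fixed.token";
  // Assumed key name for the custom SASTokenProvider implementation.
  static final String FS_AZURE_SAS_TOKEN_PROVIDER_TYPE =
      "fs.azure.sas.token.provider.type";

  /** A custom SASTokenProvider implementation is preferred over the fixed token. */
  static String chooseSasSource(Configuration conf) {
    if (conf.get(FS_AZURE_SAS_TOKEN_PROVIDER_TYPE) != null) {
      return "custom-provider";
    }
    if (conf.get(FS_AZURE_SAS_FIXED_TOKEN) != null) {
      return "fixed-token";
    }
    throw new IllegalStateException("no SAS token source configured");
  }
}
```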








[jira] [Commented] (HADOOP-19178) WASB Driver Deprecation and eventual removal

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17853009#comment-17853009
 ] 

ASF GitHub Bot commented on HADOOP-19178:
-

anujmodi2021 commented on PR #6862:
URL: https://github.com/apache/hadoop/pull/6862#issuecomment-2153929132

   Hi @steveloughran 
   Please review this WASB Deprecation Announcement PR and let me know if we 
need to do anything else.
   
   Do we need to mark AzureNativeFileSystem as @deprecated as well, in trunk or 
any release branches?
   Thanks




> WASB Driver Deprecation and eventual removal
> 
>
> Key: HADOOP-19178
> URL: https://issues.apache.org/jira/browse/HADOOP-19178
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> *WASB Driver*
> The WASB driver was developed to support FNS (Flat Namespace) Azure Storage 
> accounts. FNS accounts do not honor file-folder syntax. HDFS folder 
> operations are therefore mimicked client-side by the WASB driver, and certain 
> folder operations like rename and delete can lead to a lot of IOPS, with 
> client-side enumeration and orchestration of the rename/delete operation blob 
> by blob. It was not ideal for other APIs either, as the initial check for 
> whether a path is a file or a folder needs to be done over multiple metadata 
> calls. This led to degraded performance.
> To provide better service to analytics customers, Microsoft released ADLS 
> Gen2, which is HNS (Hierarchical Namespace), i.e. a file-folder-aware store. 
> The ABFS driver was designed to overcome the inherent deficiencies of WASB, 
> and customers were informed to migrate to the ABFS driver.
> *Customers who still use the legacy WASB driver and the challenges they face* 
> Some of our customers have not migrated to the ABFS driver yet and continue 
> to use the legacy WASB driver with FNS accounts.  
> These customers face the following challenges: 
>  * They cannot leverage the optimizations and benefits of the ABFS driver.
>  * They need to deal with compatibility issues should files and folders be 
> modified with the legacy WASB driver and the ABFS driver concurrently in a 
> phased transition situation.
>  * There are differences in supported features between FNS and HNS over the 
> ABFS driver.
>  * In certain cases, they must perform a significant amount of re-work on 
> their workloads to migrate to the ABFS driver, which is available only on 
> HNS-enabled accounts in a fully tested and supported scenario.
> *Deprecation plans for WASB*
> We are introducing a new feature that will enable the ABFS driver to support 
> FNS accounts (over the Blob endpoint) using the ABFS scheme. This feature will 
> enable customers to use the ABFS driver to interact with data stored in GPv2 
> (General Purpose v2) storage accounts. 
> With this feature, the customers who still use the legacy WASB driver will be 
> able to migrate to the ABFS driver without much re-work on their workloads. 
> They will, however, need to change the URIs from the WASB scheme to the ABFS 
> scheme. 
> Once the ABFS driver has built the FNS support capability to migrate WASB 
> customers, the WASB driver will be declared deprecated in OSS documentation 
> and marked for removal in the next major release. This will remove any 
> ambiguity for new customer onboarding, as there will be only one Microsoft 
> driver for Azure Storage, and migrating customers will get SLA-bound support 
> for driver and service, which was not guaranteed over WASB.
>  We anticipate that this feature will serve as a stepping stone for customers 
> to move to HNS-enabled accounts with the ABFS driver, which is our 
> recommended stack for big data analytics on ADLS Gen2. 
> *Any impact for existing customers who are using ADLS Gen2 (HNS-enabled 
> accounts) with the ABFS driver?*
> This feature does not impact the existing customers who are using ADLS Gen2 
> (HNS-enabled accounts) with the ABFS driver.
> They do not need to make any changes to their workloads or configurations. 
> They will still enjoy the benefits of HNS, such as atomic operations, 
> fine-grained access control, scalability, and performance. 
> *Official recommendation*
> Microsoft continues to recommend that all big data and analytics customers 
> use Azure Data Lake Gen2 (ADLS Gen2) with the ABFS driver, and will continue 
> to optimize this scenario in the future. We believe that this new option will 
> help all those customers transition to a supported scenario immediately, 
> while they plan to ultimately move to ADLS Gen2 (HNS-enabled accounts).
>  *New authentication options that a WASB-to-ABFS migrating customer will get*
> Below aut



[jira] [Commented] (HADOOP-18610) ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18610?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17853007#comment-17853007
 ] 

ASF GitHub Bot commented on HADOOP-18610:
-

anujmodi2021 commented on code in PR #6787:
URL: https://github.com/apache/hadoop/pull/6787#discussion_r1630628608


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/oauth2/WorkloadIdentityTokenProvider.java:
##
@@ -63,58 +64,35 @@ protected AzureADToken refreshToken() throws IOException {
 return token;
   }
 
-  /**
-   * Gets the Azure AD token from a client assertion in JWT format.
-   * This method exists to make unit testing possible.
-   *
-   * @param clientAssertion the client assertion.
-   * @return the Azure AD token.
-   * @throws IOException if there is a failure in connecting to Azure AD.
-   */
-  @VisibleForTesting
-  AzureADToken getTokenUsingJWTAssertion(String clientAssertion) throws 
IOException {
-return AzureADAuthenticator
-.getTokenUsingJWTAssertion(authEndpoint, clientId, clientAssertion);
-  }
-
   /**
* Checks if the token is about to expire as per base expiry logic.
-   * Otherwise, try to expire if enough time has elapsed since the last 
refresh.
+   * Otherwise, expire if there is a clock skew issue in the system.
*
* @return true if the token is expiring in next 1 hour or if a token has
* never been fetched
*/
   @Override
   protected boolean isTokenAboutToExpire() {
-return super.isTokenAboutToExpire() || 
hasEnoughTimeElapsedSinceLastRefresh();
-  }
-
-  /**
-   * Checks to see if enough time has elapsed since the last token refresh.
-   *
-   * @return true if the token was last refreshed more than an hour ago.
-   */
-  protected boolean hasEnoughTimeElapsedSinceLastRefresh() {
-if (getTokenFetchTime() == -1) {
+if (tokenFetchTime == -1 || super.isTokenAboutToExpire()) {
   return true;
 }
-boolean expiring = false;
+
+// In case of any clock skew issues, refresh the token.
 long elapsedTimeSinceLastTokenRefreshInMillis =
-System.currentTimeMillis() - getTokenFetchTime();
-// In case token is not refreshed for 1 hr or any clock skew issues,
-// refresh token.
-expiring = elapsedTimeSinceLastTokenRefreshInMillis >= ONE_HOUR
-|| elapsedTimeSinceLastTokenRefreshInMillis < 0;
+System.currentTimeMillis() - tokenFetchTime;
+boolean expiring = elapsedTimeSinceLastTokenRefreshInMillis < 0;

Review Comment:
   Not necessarily. It is just there for cleaner code.
   If we get rid of it, we will have it replaced with 
`elapsedTimeSinceLastTokenRefreshInMillis < 0` in two places.
   
   Do you want me to do that?





> ABFS OAuth2 Token Provider to support Azure Workload Identity for AKS
> -
>
> Key: HADOOP-18610
> URL: https://issues.apache.org/jira/browse/HADOOP-18610
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 3.3.4
>Reporter: Haifeng Chen
>Assignee: Anuj Modi
>Priority: Critical
>  Labels: pull-request-available
> Attachments: HADOOP-18610-preview.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> In Jan 2023, Microsoft Azure AKS replaced its original pod-managed identity 
> with [Azure Active Directory (Azure AD) workload 
> identities|https://learn.microsoft.com/en-us/azure/active-directory/develop/workload-identities-overview]
>  (preview), which integrate with the Kubernetes-native capabilities to 
> federate with any external identity provider. This approach is simpler to 
> use and deploy.
> Refer to 
> [https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview] 
> and [https://azure.github.io/azure-workload-identity/docs/introduction.html] 
> for more details.
> The basic use scenario is to access Azure cloud resources (such as cloud 
> storage) from a Kubernetes (such as AKS) workload using an Azure managed 
> identity federated with a Kubernetes service account. The credential 
> environment variables projected into the pod by Azure AD workload identity 
> are like the following:
> AZURE_AUTHORITY_HOST: (Injected by the webhook, 
> [https://login.microsoftonline.com/])
> AZURE_CLIENT_ID: (Injected by the webhook)
> AZURE_TENANT_ID: (Injected by the webhook)
> AZURE_FEDERATED_TOKEN_FILE: (Injected by the webhook, 
> /var/run/secrets/azure/tokens/azure-identity-token)
> The token in the file pointed to by AZURE_FEDERATED_TOKEN_FILE is a JWT (JSON 
> Web Token) client assertion token which we can use to request to 
> AZURE_AUTHORITY_HOST (url is AZURE_AUTHORITY_HOST + tenantId + 
> "/oauth2/v2.0/token") for an AD token which can be used to directly a



Re: [PR] HDFS-17543. [ARR]: AsyncUtil makes asynchronous code more concise and easier. [hadoop]

2024-06-06 Thread via GitHub


KeeProMise commented on PR #6868:
URL: https://github.com/apache/hadoop/pull/6868#issuecomment-2153787222

   @goiri @simbadzina @Hexiaoqiao @sjlee If you have time, please help 
review this PR. Thank you!





Re: [PR] HDFS-17531. [Discuss] RBF: Asynchronous router RPC. [hadoop]

2024-06-06 Thread via GitHub


KeeProMise commented on PR #6838:
URL: https://github.com/apache/hadoop/pull/6838#issuecomment-2153746299

   Thank you again for your attention. I will split this huge PR into smaller 
PRs, which you can review in the subtasks. I will close this PR.





Re: [PR] HDFS-17531. [Discuss] RBF: Asynchronous router RPC. [hadoop]

2024-06-06 Thread via GitHub


KeeProMise closed pull request #6838: HDFS-17531. [Discuss] RBF: Asynchronous 
router RPC.
URL: https://github.com/apache/hadoop/pull/6838





Re: [PR] HDFS-17545. [ARR] router async rpc client. [hadoop]

2024-06-06 Thread via GitHub


KeeProMise commented on PR #6871:
URL: https://github.com/apache/hadoop/pull/6871#issuecomment-2153743472

   This PR needs to use AsyncUtil from HDFS-17543 and the async ProtocolPB from 
HDFS-17544, so we should first review HDFS-17543 and HDFS-17544 and then review 
this PR. Before HDFS-17543 and HDFS-17544 are merged, this PR is only used as 
an example of the subsequent use of AsyncUtil and the async ProtocolPB.





Re: [PR] HDFS-17544. [ARR] The router client rpc protocol supports asynchrony. [hadoop]

2024-06-06 Thread via GitHub


KeeProMise commented on PR #6870:
URL: https://github.com/apache/hadoop/pull/6870#issuecomment-2153741838

   > @KeeProMise Hi, sir. The code in this PR contains 
[HDFS-17543](https://issues.apache.org/jira/browse/HDFS-17543)'s change. Can we 
separate them for better reviewing?
   
   @hfutatzhanghb Hi, it does include HDFS-17543. The main reason is that this 
PR needs to use AsyncUtil from HDFS-17543. We should first review and merge 
HDFS-17543 and then review this PR. Before HDFS-17543 is merged, this PR is 
only used as an example of the subsequent use of AsyncUtil.





Re: [PR] HDFS-17543. [ARR]: AsyncUtil makes asynchronous code more concise and easier. [hadoop]

2024-06-06 Thread via GitHub


KeeProMise commented on code in PR #6868:
URL: https://github.com/apache/hadoop/pull/6868#discussion_r1630547741


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/async/AsyncForEachRun.java:
##
@@ -0,0 +1,97 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router.async;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CompletionException;
+
+public class AsyncForEachRun implements AsyncRun {

Review Comment:
   @hfutatzhanghb Thanks for your suggestion; I will add javadoc for these 
classes.
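For context, a minimal sketch of the kind of helper such a class might provide;
the real AsyncForEachRun API is only partially shown above, so this shape is an
assumption:

```java
import java.util.Iterator;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

final class AsyncForEachSketch {

  /**
   * Apply an asynchronous task to each element in sequence, chaining the
   * CompletableFutures so no thread blocks between elements.
   */
  static <I> CompletableFuture<Void> forEach(
      Iterator<I> items, Function<I, CompletableFuture<Void>> task) {
    if (!items.hasNext()) {
      return CompletableFuture.completedFuture(null);
    }
    // Run the task for the current element, then continue with the next one.
    return task.apply(items.next())
        .thenCompose(ignored -> forEach(items, task));
  }
}
```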






Re: [PR] HADOOP-19154. Upgrade bouncycastle to 1.78.1 due to CVEs (#6755) [hadoop]

2024-06-06 Thread via GitHub


hadoop-yetus commented on PR #6866:
URL: https://github.com/apache/hadoop/pull/6866#issuecomment-2153737457

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   9m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.4 Compile Tests _ |
   | +0 :ok: |  mvndep  |   3m 58s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  32m 19s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  compile  |  10m  0s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   9m  0s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  mvnsite  |  14m 43s |  |  branch-3.4 passed  |
   | +1 :green_heart: |  javadoc  |   5m  5s |  |  branch-3.4 passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   4m 53s |  |  branch-3.4 passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  shadedclient  |  34m 10s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  21m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   9m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   8m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   9m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  No new issues.  |
   | +1 :green_heart: |  javadoc  |   4m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   5m  3s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  shadedclient  |  32m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 621m 26s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6866/1/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 819m 36s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6866/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6866 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient xmllint 
shellcheck shelldocs |
   | uname | Linux 8434123aad69 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 
09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.4 / 72e1edbdf628b6e582a8e0faf8b5aa2b3d192f9c |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6866/1/testReport/ |
   | Max. process+thre

Re: [PR] HDFS-17544. [ARR] The router client rpc protocol supports asynchrony. [hadoop]

2024-06-06 Thread via GitHub


hfutatzhanghb commented on PR #6870:
URL: https://github.com/apache/hadoop/pull/6870#issuecomment-2153737046

   @KeeProMise Hi, sir. The code in this PR contains HDFS-17543's change. Can 
we separate them for better reviewing?
   





Re: [PR] HADOOP-19193. Create orphan commit for website deployment [hadoop]

2024-06-06 Thread via GitHub


pan3793 commented on PR #6864:
URL: https://github.com/apache/hadoop/pull/6864#issuecomment-2153736606

   > This is the key cause of the really big downloads, isn't it?
   
   It should be, although I haven't analyzed the git repo blobs; I'm just 
asserting based on experience.
   
   > I still think we should cull old branches
   
   I saw your post on the mailing list; deleting old release branches (<2.6) 
might be too aggressive.
   
   The contribution of release branches to the total volume of the git repo 
should be negligible; I would suggest keeping them but deleting branches like 
`HADOOP-`, `YARN-`.
   
   Additionally, it seems that Hadoop creates branches for each release and 
does not keep things as clean as Spark, which only cuts branches for minor 
versions and creates release tags on those branches.
   
   ```
   - branch-3.5 --- v3.5.0-rc0 --- v3.5.0-rc1(v3.5.0) --- v3.5.1-rc0 --- ...
                        ^              ^         ^             ^
                       tag            tag       tag           tag
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19196) Bulk delete api doesn't take the path to delete as the base path

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852983#comment-17852983
 ] 

ASF GitHub Bot commented on HADOOP-19196:
-

hadoop-yetus commented on PR #6872:
URL: https://github.com/apache/hadoop/pull/6872#issuecomment-2153726855

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  29m 16s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  51m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  17m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  18m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  17m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m  4s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 273m 49s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6872/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6872 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c0d6276948d4 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 29ad2c7bd0126b33d6ad6a4ae84d6810e58b3362 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6872/1/testReport/ |
   | Max. process+thread count | 2137 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6872/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> Bulk delete api doesn't take the path to delete as the base path

Re: [PR] HADOOP-19196. Allow base path to be deleted as well using Bulk Delete. [hadoop]

2024-06-06 Thread via GitHub


hadoop-yetus commented on PR #6872:
URL: https://github.com/apache/hadoop/pull/6872#issuecomment-2153726855

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  29m 16s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  51m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |  17m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  41m 18s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |  18m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |  17m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 25s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m  4s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 273m 49s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6872/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6872 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c0d6276948d4 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 29ad2c7bd0126b33d6ad6a4ae84d6810e58b3362 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6872/1/testReport/ |
   | Max. process+thread count | 2137 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6872/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Re: [PR] HDFS-17543. [ARR]: AsyncUtil makes asynchronous code more concise and easier. [hadoop]

2024-06-06 Thread via GitHub


hfutatzhanghb commented on code in PR #6868:
URL: https://github.com/apache/hadoop/pull/6868#discussion_r1630528668


##
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/async/AsyncForEachRun.java:
##
@@ -0,0 +1,97 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.router.async;
+
+import java.io.IOException;
+import java.util.Iterator;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CompletionException;
+
+public class AsyncForEachRun<I, T, R> implements AsyncRun<R> {

Review Comment:
   @KeeProMise Hi, sir. It would be better to add some comments here explaining 
what this class does and what the `I,T,R` generic types mean.
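   
   One possible shape for that documentation (the `I,T,R` meanings below are 
guesses from the names, to be confirmed by the author):
   
   ```java
   /**
    * Asynchronously applies an action to every element of an iterator,
    * chaining the per-element futures into one overall result.
    *
    * @param <I> type of the elements being iterated over
    * @param <T> intermediate result type produced for each element
    * @param <R> final result type of the whole run
    */
   public class AsyncForEachRun<I, T, R> implements AsyncRun<R> { ... }
   ```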



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19197) S3A: Support AWS KMS Encryption Context

2024-06-06 Thread Raphael Azzolini (Jira)
Raphael Azzolini created HADOOP-19197:
-

 Summary: S3A: Support AWS KMS Encryption Context
 Key: HADOOP-19197
 URL: https://issues.apache.org/jira/browse/HADOOP-19197
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Raphael Azzolini


S3A properties allow users to choose the AWS KMS key 
({_}fs.s3a.encryption.key{_}) and the S3 encryption algorithm to be used 
({_}fs.s3a.encryption.algorithm{_}). In addition to the AWS KMS key, an 
encryption context can be supplied as non-secret data that adds an additional 
integrity and authenticity check to the encrypted data. However, there is no 
option to specify the [AWS KMS Encryption 
Context|https://docs.aws.amazon.com/kms/latest/developerguide/concepts.html#encrypt_context]
 in S3A.

In AWS SDK v2 the encryption context in S3 requests is set by the parameter 
[ssekmsEncryptionContext|https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/model/CreateMultipartUploadRequest.Builder.html#ssekmsEncryptionContext(java.lang.String)].
 It receives a base64-encoded UTF-8 string holding JSON with the encryption 
context key-value pairs. The value of this parameter could be set by the user 
in a new property {_}*fs.s3a.encryption.context*{_}, stored in 
[EncryptionSecrets|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/auth/delegation/EncryptionSecrets.java],
 and later used when setting the encryption parameters in 
[RequestFactoryImpl|https://github.com/apache/hadoop/blob/f92a8ab8ae54f11946412904973eb60404dee7ff/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RequestFactoryImpl.java].


--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HDFS-17546. Implementing HostsFileReader timeout [hadoop]

2024-06-06 Thread via GitHub


NyteKnight opened a new pull request, #6873:
URL: https://github.com/apache/hadoop/pull/6873

   ### Description of PR
   In Hadoop environments that rely on a NAS to house the dfs.hosts file, an FS 
hang (for any reason) causes the refreshNodes call to hang indefinitely 
(blocking a thread) until the FS returns.
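   
   A rough sketch of the idea (illustrative only, not the actual patch): bound 
the hosts-file read with a timeout so a hung mount cannot block the caller 
forever.
   
   ```java
   // Hedged sketch, not the actual change: read the hosts file on a worker
   // thread and give up after a timeout instead of blocking refreshNodes.
   import java.io.IOException;
   import java.nio.file.Files;
   import java.nio.file.Paths;
   import java.util.List;
   import java.util.concurrent.ExecutionException;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.Executors;
   import java.util.concurrent.Future;
   import java.util.concurrent.TimeUnit;
   import java.util.concurrent.TimeoutException;
   
   public final class TimedHostsFileRead {
     static List<String> readHostsFile(String hostsFile, long timeoutSeconds)
         throws IOException {
       ExecutorService executor = Executors.newSingleThreadExecutor();
       try {
         Future<List<String>> future =
             executor.submit(() -> Files.readAllLines(Paths.get(hostsFile)));
         return future.get(timeoutSeconds, TimeUnit.SECONDS);
       } catch (TimeoutException e) {
         throw new IOException("Timed out reading hosts file " + hostsFile, e);
       } catch (ExecutionException e) {
         throw new IOException("Failed to read hosts file " + hostsFile, e.getCause());
       } catch (InterruptedException e) {
         Thread.currentThread().interrupt();
         throw new IOException("Interrupted reading hosts file " + hostsFile, e);
       } finally {
         executor.shutdownNow(); // stop waiting; attempt to interrupt the reader
       }
     }
   }
   ```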
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19189) ITestS3ACommitterFactory failing

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852958#comment-17852958
 ] 

ASF GitHub Bot commented on HADOOP-19189:
-

hadoop-yetus commented on PR #6857:
URL: https://github.com/apache/hadoop/pull/6857#issuecomment-2153493109

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  hadoop-tools/hadoop-aws: 
The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 44s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 159m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6857/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6857 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux aff4408221f5 5.15.0-107-generic #117-Ubuntu SMP Fri Apr 26 
12:26:49 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3b7e5ca1a734ed2c0d444b40f93559da10d8c6b3 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6857/3/testReport/ |
   | Max. process+thread count | 527 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6857/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
  

Re: [PR] HADOOP-19189. ITestS3ACommitterFactory failing [hadoop]

2024-06-06 Thread via GitHub


hadoop-yetus commented on PR #6857:
URL: https://github.com/apache/hadoop/pull/6857#issuecomment-2153493109

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  50m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 17s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  hadoop-tools/hadoop-aws: 
The patch generated 0 new + 0 unchanged - 1 fixed = 0 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 44s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 159m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6857/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6857 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux aff4408221f5 5.15.0-107-generic #117-Ubuntu SMP Fri Apr 26 
12:26:49 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3b7e5ca1a734ed2c0d444b40f93559da10d8c6b3 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6857/3/testReport/ |
   | Max. process+thread count | 527 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6857/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Commented] (HADOOP-19196) Bulk delete api doesn't take the path to delete as the base path

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852956#comment-17852956
 ] 

ASF GitHub Bot commented on HADOOP-19196:
-

mukund-thakur opened a new pull request, #6872:
URL: https://github.com/apache/hadoop/pull/6872

   Follow up on HADOOP-18679.
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   Re-ran all the implementations of AbstractContractBulkDeleteTest.
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Bulk delete api doesn't take the path to delete as the base path
> 
>
> Key: HADOOP-19196
> URL: https://issues.apache.org/jira/browse/HADOOP-19196
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
>
> If you use the path of the file you intend to delete as the base path, you 
> get an error. This is because the validation requires the list to be of 
> children, but the base path itself should be valid.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19196) Bulk delete api doesn't take the path to delete as the base path

2024-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19196:

Labels: pull-request-available  (was: )

> Bulk delete api doesn't take the path to delete as the base path
> 
>
> Key: HADOOP-19196
> URL: https://issues.apache.org/jira/browse/HADOOP-19196
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
>  Labels: pull-request-available
>
> If you use the path of the file you intend to delete as the base path, you 
> get an error. This is because the validation requires the list to be of 
> children, but the base path itself should be valid.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-19196. Allow base path to be deleted as well using Bulk Delete. [hadoop]

2024-06-06 Thread via GitHub


mukund-thakur opened a new pull request, #6872:
URL: https://github.com/apache/hadoop/pull/6872

   Follow up on HADOOP-18679.
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   Re-ran all the implementations of AbstractContractBulkDeleteTest.
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18679) Add API for bulk/paged delete of files and objects

2024-06-06 Thread Mukund Thakur (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukund Thakur updated HADOOP-18679:
---
Description: 
Iceberg and HBase could benefit from being able to give a list of individual 
files to delete: files which may be scattered round the bucket, for better read 
performance.

Add a new optional interface for an object store which allows a caller to 
submit a list of paths to files to delete, where
the expectation is:
 * if a path is a file: delete
 * if a path is a dir, outcome undefined
For s3 that'd let us build these into DeleteRequest objects, and submit, 
without any probes first.
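
A hedged sketch of the intended call pattern (BulkDelete and createBulkDelete() 
follow the HADOOP-18679 API, but treat the exact signatures as illustrative):
{code}
import java.io.IOException;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.fs.BulkDelete;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class BulkDeleteSketch {
  // delete a batch of scattered files under base in one request,
  // with no existence probes first
  static void deleteScatteredFiles(FileSystem fs, Path base, List<Path> files)
      throws IOException {
    try (BulkDelete bulk = fs.createBulkDelete(base)) {
      // the batch must not exceed bulk.pageSize() entries;
      // failures come back as (path, error) pairs
      for (Map.Entry<Path, String> failure : bulk.bulkDelete(files)) {
        System.err.println("Failed to delete " + failure.getKey()
            + ": " + failure.getValue());
      }
    }
  }
}
{code}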

{quote}Cherrypicking
{quote}
when cherrypicking, you must include
 * followup commit #6854
 * https://issues.apache.org/jira/browse/HADOOP-19196
 * test fixes HADOOP-19814 and HADOOP-19188

  was:
Iceberg and HBase could benefit from being able to give a list of individual 
files to delete: files which may be scattered round the bucket, for better read 
performance.

Add a new optional interface for an object store which allows a caller to 
submit a list of paths to files to delete, where
the expectation is:
* if a path is a file: delete
* if a path is a dir, outcome undefined
For s3 that'd let us build these into DeleteRequest objects, and submit, 
without any probes first.

bq. Cherrypicking

when cherrypicking, you must include

* followup commit #6854
* test fixes HADOOP-19814 and HADOOP-19188



> Add API for bulk/paged delete of files and objects
> --
>
> Key: HADOOP-18679
> URL: https://issues.apache.org/jira/browse/HADOOP-18679
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.5
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> Iceberg and HBase could benefit from being able to give a list of individual 
> files to delete: files which may be scattered round the bucket, for better 
> read performance.
> Add a new optional interface for an object store which allows a caller to 
> submit a list of paths to files to delete, where
> the expectation is:
>  * if a path is a file: delete
>  * if a path is a dir, outcome undefined
> For s3 that'd let us build these into DeleteRequest objects, and submit, 
> without any probes first.
> {quote}Cherrypicking
> {quote}
> when cherrypicking, you must include
>  * followup commit #6854
>  * https://issues.apache.org/jira/browse/HADOOP-19196
>  * test fixes HADOOP-19814 and HADOOP-19188



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19196) Bulk delete api doesn't take the path to delete as the base path

2024-06-06 Thread Mukund Thakur (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852954#comment-17852954
 ] 

Mukund Thakur commented on HADOOP-19196:


Good catch.

> Bulk delete api doesn't take the path to delete as the base path
> 
>
> Key: HADOOP-19196
> URL: https://issues.apache.org/jira/browse/HADOOP-19196
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
>
> If you use the path of the file you intend to delete as the base path, you 
> get an error. This is because the validation requires the list to be of 
> children, but the base path itself should be valid.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19116) update to zookeeper client 3.8.4 due to CVE-2024-23944

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852934#comment-17852934
 ] 

ASF GitHub Bot commented on HADOOP-19116:
-

pjfanning commented on PR #6675:
URL: https://github.com/apache/hadoop/pull/6675#issuecomment-2153313881

   A lot of these:
   ```
   java.lang.OutOfMemoryError: unable to create new native thread
   ```




> update to zookeeper client 3.8.4 due to  CVE-2024-23944
> ---
>
> Key: HADOOP-19116
> URL: https://issues.apache.org/jira/browse/HADOOP-19116
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: CVE
>Affects Versions: 3.4.0, 3.3.6
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0, 3.4.1
>
>
> https://github.com/advisories/GHSA-r978-9m6m-6gm6



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19116. Update to zookeeper client 3.8.4 due to CVE-2024-23944 [hadoop]

2024-06-06 Thread via GitHub


pjfanning commented on PR #6675:
URL: https://github.com/apache/hadoop/pull/6675#issuecomment-2153313881

   A lot of these:
   ```
   java.lang.OutOfMemoryError: unable to create new native thread
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19116. Update to zookeeper client 3.8.4 due to CVE-2024-23944 [hadoop]

2024-06-06 Thread via GitHub


steveloughran commented on PR #6675:
URL: https://github.com/apache/hadoop/pull/6675#issuecomment-2153298967

   Lot more failures this time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19193) Create orphan commit for website deployment

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852928#comment-17852928
 ] 

ASF GitHub Bot commented on HADOOP-19193:
-

steveloughran commented on PR #6864:
URL: https://github.com/apache/hadoop/pull/6864#issuecomment-2153283381

   (I still think we should cull old branches, FWIW)




> Create orphan commit for website deployment
> ---
>
> Key: HADOOP-19193
> URL: https://issues.apache.org/jira/browse/HADOOP-19193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Cheng Pan
>Assignee: Cheng Pan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19193. Create orphan commit for website deployment [hadoop]

2024-06-06 Thread via GitHub


steveloughran commented on PR #6864:
URL: https://github.com/apache/hadoop/pull/6864#issuecomment-2153282965

   thanks. This is the key cause of the really big downloads, isn't it?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19193) Create orphan commit for website deployment

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852927#comment-17852927
 ] 

ASF GitHub Bot commented on HADOOP-19193:
-

steveloughran commented on PR #6864:
URL: https://github.com/apache/hadoop/pull/6864#issuecomment-2153282965

   thanks. This is the key cause of the really big downloads, isn't it?




> Create orphan commit for website deployment
> ---
>
> Key: HADOOP-19193
> URL: https://issues.apache.org/jira/browse/HADOOP-19193
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Cheng Pan
>Assignee: Cheng Pan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19193. Create orphan commit for website deployment [hadoop]

2024-06-06 Thread via GitHub


steveloughran commented on PR #6864:
URL: https://github.com/apache/hadoop/pull/6864#issuecomment-2153283381

   (I still think we should cull old branches, FWIW)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19093) Improve rate limiting through ABFS in Manifest Committer

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852924#comment-17852924
 ] 

ASF GitHub Bot commented on HADOOP-19093:
-

steveloughran closed pull request #6596: HADOOP-19093. [ABFS] Improve rate 
limiting through ABFS in Manifest Committer
URL: https://github.com/apache/hadoop/pull/6596




> Improve rate limiting through ABFS in Manifest Committer
> 
>
> Key: HADOOP-19093
> URL: https://issues.apache.org/jira/browse/HADOOP-19093
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>
> I need a load test to verify that the rename resilience of the manifest 
> committer actually works as intended
> * test suite with name ILoadTest* prefix (as with s3)
> * parallel test running with many threads doing many renames
> * verify that rename recovery is detected
> * and that all renames MUST NOT fail.
> Maybe also: metrics for this in the FS and a doc update; possibly a 
> LogExactlyOnce to warn of load issues.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19093. [ABFS] Improve rate limiting through ABFS in Manifest Committer [hadoop]

2024-06-06 Thread via GitHub


steveloughran closed pull request #6596: HADOOP-19093. [ABFS] Improve rate 
limiting through ABFS in Manifest Committer
URL: https://github.com/apache/hadoop/pull/6596


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19189) ITestS3ACommitterFactory failing

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852922#comment-17852922
 ] 

ASF GitHub Bot commented on HADOOP-19189:
-

steveloughran commented on PR #6857:
URL: https://github.com/apache/hadoop/pull/6857#issuecomment-2153272189

   updated version; tested s3 london




> ITestS3ACommitterFactory failing
> 
>
> Key: HADOOP-19189
> URL: https://issues.apache.org/jira/browse/HADOOP-19189
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> we've had ITestS3ACommitterFactory failing for a while, where it looks like 
> changed committer settings aren't being picked up.
> {code}
> ERROR] 
> ITestS3ACommitterFactory.testEverything:115->testInvalidFileBinding:165 
> Expected a org.apache.hadoop.fs.s3a.commit.PathCommitException to be thrown, 
> but got the result: : 
> FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl
> {code}
> I've spent some time looking at it, and it is happening because the test sets 
> the filesystem ref for the local test fs, and not that of the filesystem 
> created by the committer, which is where the option is picked up.
> I've tried to parameterize it, but things are still playing up and I'm not 
> sure how hard to try to fix it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19189. ITestS3ACommitterFactory failing [hadoop]

2024-06-06 Thread via GitHub


steveloughran commented on PR #6857:
URL: https://github.com/apache/hadoop/pull/6857#issuecomment-2153272189

   updated version; tested s3 london


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19189) ITestS3ACommitterFactory failing

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852921#comment-17852921
 ] 

ASF GitHub Bot commented on HADOOP-19189:
-

steveloughran commented on code in PR #6857:
URL: https://github.com/apache/hadoop/pull/6857#discussion_r1630121834


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestS3ACommitterFactory.java:
##
@@ -72,121 +85,141 @@ public class ITestS3ACommitterFactory extends 
AbstractCommitITest {
* Parameterized list of bindings of committer name in config file to
* expected class instantiated.
*/
-  private static final Object[][] bindings = {
-  {COMMITTER_NAME_FILE, FileOutputCommitter.class},
-  {COMMITTER_NAME_DIRECTORY, DirectoryStagingCommitter.class},
-  {COMMITTER_NAME_PARTITIONED, PartitionedStagingCommitter.class},
-  {InternalCommitterConstants.COMMITTER_NAME_STAGING,
-  StagingCommitter.class},
-  {COMMITTER_NAME_MAGIC, MagicS3GuardCommitter.class}
+  private static final Object[][] BINDINGS = {

Review Comment:
   clarified in doc comments



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestS3ACommitterFactory.java:
##
@@ -72,121 +85,141 @@ public class ITestS3ACommitterFactory extends 
AbstractCommitITest {
* Parameterized list of bindings of committer name in config file to
* expected class instantiated.
*/
-  private static final Object[][] bindings = {
-  {COMMITTER_NAME_FILE, FileOutputCommitter.class},
-  {COMMITTER_NAME_DIRECTORY, DirectoryStagingCommitter.class},
-  {COMMITTER_NAME_PARTITIONED, PartitionedStagingCommitter.class},
-  {InternalCommitterConstants.COMMITTER_NAME_STAGING,
-  StagingCommitter.class},
-  {COMMITTER_NAME_MAGIC, MagicS3GuardCommitter.class}
+  private static final Object[][] BINDINGS = {
+  {"", "", FileOutputCommitter.class, "Default Binding"},
+  {COMMITTER_NAME_FILE, "", FileOutputCommitter.class, "File committer in 
FS"},
+  {COMMITTER_NAME_PARTITIONED, "", PartitionedStagingCommitter.class,
+  "partitoned committer in FS"},
+  {COMMITTER_NAME_STAGING, "", StagingCommitter.class, "staging committer 
in FS"},
+  {COMMITTER_NAME_MAGIC, "", MagicS3GuardCommitter.class, "magic committer 
in FS"},
+  {COMMITTER_NAME_DIRECTORY, "", DirectoryStagingCommitter.class, "Dir 
committer in FS"},
+  {INVALID_NAME, "", null, "invalid committer in FS"},
+
+  {"", COMMITTER_NAME_FILE, FileOutputCommitter.class, "File committer in 
task"},
+  {"", COMMITTER_NAME_PARTITIONED, PartitionedStagingCommitter.class,
+  "partioned committer in task"},
+  {"", COMMITTER_NAME_STAGING, StagingCommitter.class, "staging committer 
in task"},
+  {"", COMMITTER_NAME_MAGIC, MagicS3GuardCommitter.class, "magic committer 
in task"},
+  {"", COMMITTER_NAME_DIRECTORY, DirectoryStagingCommitter.class, "Dir 
committer in task"},
+  {"", INVALID_NAME, null, "invalid committer in task"},
   };
 
   /**
-   * This is a ref to the FS conf, so changes here are visible
-   * to callers querying the FS config.
+   * Test array for parameterized test runs.
+   *
+   * @return the committer binding for this run.
*/
-  private Configuration filesystemConfRef;
-
-  private Configuration taskConfRef;
-
-  @Override
-  public void setup() throws Exception {
-super.setup();
-jobId = randomJobId();
-attempt0 = "attempt_" + jobId + "_m_00_0";
-taskAttempt0 = TaskAttemptID.forName(attempt0);
-
-outDir = path(getMethodName());
-factory = new S3ACommitterFactory();
-Configuration conf = new Configuration();
-conf.set(FileOutputFormat.OUTDIR, outDir.toUri().toString());
-conf.set(MRJobConfig.TASK_ATTEMPT_ID, attempt0);
-conf.setInt(MRJobConfig.APPLICATION_ATTEMPT_ID, 1);
-filesystemConfRef = getFileSystem().getConf();
-tContext = new TaskAttemptContextImpl(conf, taskAttempt0);
-taskConfRef = tContext.getConfiguration();
-  }
-
-  @Test
-  public void testEverything() throws Throwable {
-testImplicitFileBinding();
-testBindingsInTask();
-testBindingsInFSConfig();
-testInvalidFileBinding();
-testInvalidTaskBinding();
+  @Parameterized.Parameters(name = "{3}-fs=[{0}]-task=[{1}]-[{2}]")
+  public static Collection<Object[]> params() {
+return Arrays.asList(BINDINGS);
   }
 
   /**
-   * Verify that if all config options are unset, the FileOutputCommitter
-   *
-   * is returned.
+   * Name of committer to set in fs config. If "" do not set one.
*/
-  public void testImplicitFileBinding() throws Throwable {
-taskConfRef.unset(FS_S3A_COMMITTER_NAME);
-filesystemConfRef.unset(FS_S3A_COMMITTER_NAME);
-assertFactoryCreatesExpectedCommitter(FileOutputCommitter.class);
-  }
+  private final String fsCommitterName;

Review Comment:
   done





> ITestS3ACommitterFactory failing

Re: [PR] HADOOP-19189. ITestS3ACommitterFactory failing [hadoop]

2024-06-06 Thread via GitHub


steveloughran commented on code in PR #6857:
URL: https://github.com/apache/hadoop/pull/6857#discussion_r1630121834


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestS3ACommitterFactory.java:
##
@@ -72,121 +85,141 @@ public class ITestS3ACommitterFactory extends 
AbstractCommitITest {
* Parameterized list of bindings of committer name in config file to
* expected class instantiated.
*/
-  private static final Object[][] bindings = {
-  {COMMITTER_NAME_FILE, FileOutputCommitter.class},
-  {COMMITTER_NAME_DIRECTORY, DirectoryStagingCommitter.class},
-  {COMMITTER_NAME_PARTITIONED, PartitionedStagingCommitter.class},
-  {InternalCommitterConstants.COMMITTER_NAME_STAGING,
-  StagingCommitter.class},
-  {COMMITTER_NAME_MAGIC, MagicS3GuardCommitter.class}
+  private static final Object[][] BINDINGS = {

Review Comment:
   clarified in doc comments



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/commit/ITestS3ACommitterFactory.java:
##
@@ -72,121 +85,141 @@ public class ITestS3ACommitterFactory extends 
AbstractCommitITest {
* Parameterized list of bindings of committer name in config file to
* expected class instantiated.
*/
-  private static final Object[][] bindings = {
-  {COMMITTER_NAME_FILE, FileOutputCommitter.class},
-  {COMMITTER_NAME_DIRECTORY, DirectoryStagingCommitter.class},
-  {COMMITTER_NAME_PARTITIONED, PartitionedStagingCommitter.class},
-  {InternalCommitterConstants.COMMITTER_NAME_STAGING,
-  StagingCommitter.class},
-  {COMMITTER_NAME_MAGIC, MagicS3GuardCommitter.class}
+  private static final Object[][] BINDINGS = {
+  {"", "", FileOutputCommitter.class, "Default Binding"},
+  {COMMITTER_NAME_FILE, "", FileOutputCommitter.class, "File committer in 
FS"},
+  {COMMITTER_NAME_PARTITIONED, "", PartitionedStagingCommitter.class,
+  "partitoned committer in FS"},
+  {COMMITTER_NAME_STAGING, "", StagingCommitter.class, "staging committer 
in FS"},
+  {COMMITTER_NAME_MAGIC, "", MagicS3GuardCommitter.class, "magic committer 
in FS"},
+  {COMMITTER_NAME_DIRECTORY, "", DirectoryStagingCommitter.class, "Dir 
committer in FS"},
+  {INVALID_NAME, "", null, "invalid committer in FS"},
+
+  {"", COMMITTER_NAME_FILE, FileOutputCommitter.class, "File committer in 
task"},
+  {"", COMMITTER_NAME_PARTITIONED, PartitionedStagingCommitter.class,
+  "partioned committer in task"},
+  {"", COMMITTER_NAME_STAGING, StagingCommitter.class, "staging committer 
in task"},
+  {"", COMMITTER_NAME_MAGIC, MagicS3GuardCommitter.class, "magic committer 
in task"},
+  {"", COMMITTER_NAME_DIRECTORY, DirectoryStagingCommitter.class, "Dir 
committer in task"},
+  {"", INVALID_NAME, null, "invalid committer in task"},
   };
 
   /**
-   * This is a ref to the FS conf, so changes here are visible
-   * to callers querying the FS config.
+   * Test array for parameterized test runs.
+   *
+   * @return the committer binding for this run.
*/
-  private Configuration filesystemConfRef;
-
-  private Configuration taskConfRef;
-
-  @Override
-  public void setup() throws Exception {
-super.setup();
-jobId = randomJobId();
-attempt0 = "attempt_" + jobId + "_m_00_0";
-taskAttempt0 = TaskAttemptID.forName(attempt0);
-
-outDir = path(getMethodName());
-factory = new S3ACommitterFactory();
-Configuration conf = new Configuration();
-conf.set(FileOutputFormat.OUTDIR, outDir.toUri().toString());
-conf.set(MRJobConfig.TASK_ATTEMPT_ID, attempt0);
-conf.setInt(MRJobConfig.APPLICATION_ATTEMPT_ID, 1);
-filesystemConfRef = getFileSystem().getConf();
-tContext = new TaskAttemptContextImpl(conf, taskAttempt0);
-taskConfRef = tContext.getConfiguration();
-  }
-
-  @Test
-  public void testEverything() throws Throwable {
-testImplicitFileBinding();
-testBindingsInTask();
-testBindingsInFSConfig();
-testInvalidFileBinding();
-testInvalidTaskBinding();
+  @Parameterized.Parameters(name = "{3}-fs=[{0}]-task=[{1}]-[{2}]")
+  public static Collection<Object[]> params() {
+return Arrays.asList(BINDINGS);
   }
 
   /**
-   * Verify that if all config options are unset, the FileOutputCommitter
-   *
-   * is returned.
+   * Name of committer to set in fs config. If "" do not set one.
*/
-  public void testImplicitFileBinding() throws Throwable {
-taskConfRef.unset(FS_S3A_COMMITTER_NAME);
-filesystemConfRef.unset(FS_S3A_COMMITTER_NAME);
-assertFactoryCreatesExpectedCommitter(FileOutputCommitter.class);
-  }
+  private final String fsCommitterName;

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Re: [PR] HADOOP-19194:Add test to find unshaded dependencies in the aws sdk [hadoop]

2024-06-06 Thread via GitHub


mukund-thakur commented on PR #6865:
URL: https://github.com/apache/hadoop/pull/6865#issuecomment-2153104078

   > There are currently 4362 unshaded classes as per the test; the number 
reduces to 13 when we migrate to 2.25.53
   
   Have you added a comment in an older SDK issue or created a new issue?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19194) Add test to find unshaded dependencies in the aws sdk

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852894#comment-17852894
 ] 

ASF GitHub Bot commented on HADOOP-19194:
-

mukund-thakur commented on PR #6865:
URL: https://github.com/apache/hadoop/pull/6865#issuecomment-2153104078

   > There are currently 4362 unshaded classes as per the test, the number 
reduces to 13 when we migrate to 2.25.53
   
   Have you added a comment in an older SDK issue or created a new issue?
   




> Add test to find unshaded dependencies in the aws sdk
> -
>
> Key: HADOOP-19194
> URL: https://issues.apache.org/jira/browse/HADOOP-19194
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Harshit Gupta
>Assignee: Harshit Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> Write a test to assess the aws sdk for unshaded artefacts on the class path 
> which might cause deployment failures. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-19196) Bulk delete api doesn't take the path to delete as the base path

2024-06-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852870#comment-17852870
 ] 

Steve Loughran edited comment on HADOOP-19196 at 6/6/24 5:01 PM:
-

we need a contract test for this.


code to trigger this
{code}
io.bulkDelete_delete(fs, path, Lists.newArrayList(path))
{code}

{code}
java.lang.IllegalArgumentException: Path file:/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/target/test/data/zfms3Bqvmq/testOpenLocalFile is not under the base path file:/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/target/test/data/zfms3Bqvmq/testOpenLocalFile

at org.apache.hadoop.util.Preconditions.checkArgument(Preconditions.java:213)
at org.apache.hadoop.fs.BulkDeleteUtils.lambda$validateBulkDeletePaths$0(BulkDeleteUtils.java:45)
at java.util.ArrayList.forEach(ArrayList.java:1259)
at org.apache.hadoop.fs.BulkDeleteUtils.validateBulkDeletePaths(BulkDeleteUtils.java:43)
at org.apache.hadoop.fs.impl.DefaultBulkDeleteOperation.bulkDelete(DefaultBulkDeleteOperation

{code}
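
A contract-test sketch of the expected behaviour (illustrative only: the bulkDelete_delete call follows the snippet above; helpers such as methodPath() and the AssertJ wiring are assumptions):

{code}
// Create a file, then bulk-delete it with its own path as the base path.
// This must succeed instead of failing the "is not under the base path" check.
Path file = methodPath();            // hypothetical per-test unique path helper
FileSystem fs = getFileSystem();
ContractTestUtils.touch(fs, file);
List<Map.Entry<Path, String>> failures =
    WrappedIO.bulkDelete_delete(fs, file, Lists.newArrayList(file));
Assertions.assertThat(failures)
    .describedAs("failures when the base path is also the path to delete")
    .isEmpty();
{code}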



was (Author: ste...@apache.org):
we need a contract test for this.
{code}
java.lang.IllegalArgumentException: Path file:/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/target/test/data/zfms3Bqvmq/testOpenLocalFile is not under the base path file:/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/target/test/data/zfms3Bqvmq/testOpenLocalFile

at org.apache.hadoop.util.Preconditions.checkArgument(Preconditions.java:213)
at org.apache.hadoop.fs.BulkDeleteUtils.lambda$validateBulkDeletePaths$0(BulkDeleteUtils.java:45)
at java.util.ArrayList.forEach(ArrayList.java:1259)
at org.apache.hadoop.fs.BulkDeleteUtils.validateBulkDeletePaths(BulkDeleteUtils.java:43)
at org.apache.hadoop.fs.impl.DefaultBulkDeleteOperation.bulkDelete(DefaultBulkDeleteOperation

{code}


> Bulk delete api doesn't take the path to delete as the base path
> 
>
> Key: HADOOP-19196
> URL: https://issues.apache.org/jira/browse/HADOOP-19196
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
>
> If you use the path of the file you intend to delete as the base path, you 
> get an error. This is because the validation requires the list to be of 
> children, but the base path itself should be valid.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19196) Bulk delete api doesn't take the path to delete as the base path

2024-06-06 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852870#comment-17852870
 ] 

Steve Loughran commented on HADOOP-19196:
-

we need a contract test for this.
{code}
java.lang.IllegalArgumentException: Path file:/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/target/test/data/zfms3Bqvmq/testOpenLocalFile is not under the base path file:/Users/stevel/Projects/hadoop-trunk/hadoop-common-project/hadoop-common/target/test/data/zfms3Bqvmq/testOpenLocalFile

at org.apache.hadoop.util.Preconditions.checkArgument(Preconditions.java:213)
at org.apache.hadoop.fs.BulkDeleteUtils.lambda$validateBulkDeletePaths$0(BulkDeleteUtils.java:45)
at java.util.ArrayList.forEach(ArrayList.java:1259)
at org.apache.hadoop.fs.BulkDeleteUtils.validateBulkDeletePaths(BulkDeleteUtils.java:43)
at org.apache.hadoop.fs.impl.DefaultBulkDeleteOperation.bulkDelete(DefaultBulkDeleteOperation

{code}
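
A sketch of the relaxed precondition this implies (hedged: isUnderPath is an illustrative helper; the real names inside BulkDeleteUtils may differ):

{code}
// Accept the base path itself as well as paths strictly under it.
checkArgument(path.equals(basePath) || isUnderPath(path, basePath),
    "Path %s is not under the base path %s", path, basePath);
{code}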


> Bulk delete api doesn't take the path to delete as the base path
> 
>
> Key: HADOOP-19196
> URL: https://issues.apache.org/jira/browse/HADOOP-19196
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Steve Loughran
>Priority: Minor
>
> If you use the path of the file you intend to delete as the base path, you 
> get an error. This is because the validation requires the list to be of 
> children, but the base path itself should be valid.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-19196) Bulk delete api doesn't take the path to delete as the base path

2024-06-06 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-19196:
---

Assignee: Mukund Thakur

> Bulk delete api doesn't take the path to delete as the base path
> 
>
> Key: HADOOP-19196
> URL: https://issues.apache.org/jira/browse/HADOOP-19196
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Steve Loughran
>Assignee: Mukund Thakur
>Priority: Minor
>
> If you use the path of the file you intend to delete as the base path, you 
> get an error. This is because the validation requires the list to be of 
> children, but the base path itself should be valid.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19196) Bulk delete api doesn't take the path to delete as the base path

2024-06-06 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-19196:
---

 Summary: Bulk delete api doesn't take the path to delete as the 
base path
 Key: HADOOP-19196
 URL: https://issues.apache.org/jira/browse/HADOOP-19196
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 3.5.0, 3.4.1
Reporter: Steve Loughran


If you use the path of the file you intend to delete as the base path, you get 
an error. This is because the validation requires the list to be of children, 
but the base path itself should be valid.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19194) Add test to find unshaded dependencies in the aws sdk

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852857#comment-17852857
 ] 

ASF GitHub Bot commented on HADOOP-19194:
-

steveloughran commented on code in PR #6865:
URL: https://github.com/apache/hadoop/pull/6865#discussion_r1629843102


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/sdk/TestAWSV2SDK.java:
##
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.sdk;
+import org.junit.Test;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Enumeration;
+import java.util.List;
+import java.util.jar.JarEntry;
+import java.util.jar.JarFile;
+
+import static org.junit.Assert.assertNotEquals;
+
+/**
+ * Tests to verify AWS SDK based issues like duplicated shaded classes and others
+ */
+public class TestAWSV2SDK extends AbstractHadoopTestBase {
+
+Logger LOG = LoggerFactory.getLogger(this.getClass().getName());
+@Test
+public void testShadedClasses() throws IOException {
+String allClassPath = System.getProperty("java.class.path");
+LOG.debug("Current classpath:{}", allClassPath);
+String[] classPaths = allClassPath.split(File.pathSeparator);
+String v2ClassPath = null;
+for(String classPath : classPaths){
+//Checking for only version 2.x sdk here
+if (classPath.contains("awssdk/bundle/2")) {
+v2ClassPath = classPath;
+break;
+}
+}
+assertNotEquals("AWS V2 SDK should be present on the classpath",

Review Comment:
   use AssertJ assertThat().isNotNull()
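
   The suggested AssertJ form would be roughly (a sketch; it reuses the v2ClassPath variable and message from the quoted code and assumes org.assertj.core.api.Assertions is imported):

   ```java
   // AssertJ variant: reports the description when the SDK bundle jar is absent.
   Assertions.assertThat(v2ClassPath)
       .describedAs("AWS V2 SDK should be present on the classpath")
       .isNotNull();
   ```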



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/sdk/TestAWSV2SDK.java:
##
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.sdk;
+import org.junit.Test;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Enumeration;
+import java.util.List;
+import java.util.jar.JarEntry;
+import java.util.jar.JarFile;
+
+import static org.junit.Assert.assertNotEquals;
+
+/**
+ * Tests to verify AWS SDK based issues like duplicated shaded classes and others
+ */
+public class TestAWSV2SDK extends AbstractHadoopTestBase {
+
+Logger LOG = LoggerFactory.getLogger(this.getClass().getName());
+@Test
+public void testShadedClasses() throws IOException {
+String allClassPath = System.getProperty("java.class.path");
+LOG.debug("Current classpath:{}", allClassPath);
+String[] classPaths = allClassPath.split(File.pathSeparator);
+String v2ClassPath = null;
+for(String classPath : classPaths){
+//Checking for only version 2.x sdk here
+if (classPath.contains("awssdk/bundle/2")) {
+v2ClassPath = classPath;
+break;
+}
+}
+assertNotEquals("AWS V2 SDK should be present on the classpath",
+v2ClassPath, null);
+List listOfV2SdkClasses = 
getCl

Re: [PR] HDFS-17544. [ARR] The router client rpc protocol supports asynchrony. [hadoop]

2024-06-06 Thread via GitHub


KeeProMise closed pull request #6870: HDFS-17544. [ARR] The router client rpc 
protocol supports asynchrony.
URL: https://github.com/apache/hadoop/pull/6870


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19194:Add test to find unshaded dependencies in the aws sdk [hadoop]

2024-06-06 Thread via GitHub


steveloughran commented on code in PR #6865:
URL: https://github.com/apache/hadoop/pull/6865#discussion_r1629843102


##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/sdk/TestAWSV2SDK.java:
##
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.sdk;
+import org.junit.Test;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Enumeration;
+import java.util.List;
+import java.util.jar.JarEntry;
+import java.util.jar.JarFile;
+
+import static org.junit.Assert.assertNotEquals;
+
+/**
+ * Tests to verify AWS SDK based issues like duplicated shaded classes and others
+ */
+public class TestAWSV2SDK extends AbstractHadoopTestBase {
+
+Logger LOG = LoggerFactory.getLogger(this.getClass().getName());
+@Test
+public void testShadedClasses() throws IOException {
+String allClassPath = System.getProperty("java.class.path");
+LOG.debug("Current classpath:{}", allClassPath);
+String[] classPaths = allClassPath.split(File.pathSeparator);
+String v2ClassPath = null;
+for(String classPath : classPaths){
+//Checking for only version 2.x sdk here
+if (classPath.contains("awssdk/bundle/2")) {
+v2ClassPath = classPath;
+break;
+}
+}
+assertNotEquals("AWS V2 SDK should be present on the classpath",

Review Comment:
   use AssertJ assertThat().isNotNull()



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/sdk/TestAWSV2SDK.java:
##
@@ -0,0 +1,85 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *  or more contributor license agreements.  See the NOTICE file
+ *  distributed with this work for additional information
+ *  regarding copyright ownership.  The ASF licenses this file
+ *  to you under the Apache License, Version 2.0 (the
+ *  "License"); you may not use this file except in compliance
+ *  with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *  Unless required by applicable law or agreed to in writing, software
+ *  distributed under the License is distributed on an "AS IS" BASIS,
+ *  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ *  See the License for the specific language governing permissions and
+ *  limitations under the License.
+ */
+
+package org.apache.hadoop.fs.sdk;
+import org.junit.Test;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.File;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Enumeration;
+import java.util.List;
+import java.util.jar.JarEntry;
+import java.util.jar.JarFile;
+
+import static org.junit.Assert.assertNotEquals;
+
+/**
+ * Tests to verify AWS SDK based issues like duplicated shaded classes and others
+ */
+public class TestAWSV2SDK extends AbstractHadoopTestBase {
+
+Logger LOG = LoggerFactory.getLogger(this.getClass().getName());
+@Test
+public void testShadedClasses() throws IOException {
+String allClassPath = System.getProperty("java.class.path");
+LOG.debug("Current classpath:{}", allClassPath);
+String[] classPaths = allClassPath.split(File.pathSeparator);
+String v2ClassPath = null;
+for(String classPath : classPaths){
+//Checking for only version 2.x sdk here
+if (classPath.contains("awssdk/bundle/2")) {
+v2ClassPath = classPath;
+break;
+}
+}
+assertNotEquals("AWS V2 SDK should be present on the classpath",
+v2ClassPath, null);
+List<String> listOfV2SdkClasses = getClassNamesFromJarFile(v2ClassPath);
+String awsSdkPrefix = "software/amazon/awssdk";
+List<String> unshadedClasses = new ArrayList<>();
+for(String awsSdkClass : listOfV2SdkClasses){
+if (!awsSdkClass.startsWith(awsSdkPrefix)) {
+ 
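
For orientation, a minimal sketch of a jar-scanning helper matching the getClassNamesFromJarFile call above (the body is an assumption; only the method name and the file's imports are taken from the quoted code):

```java
// Sketch: collect the .class entry names from a jar on disk.
private static List<String> getClassNamesFromJarFile(String jarPath) throws IOException {
  List<String> classNames = new ArrayList<>();
  try (JarFile jarFile = new JarFile(new File(jarPath))) {
    Enumeration<JarEntry> entries = jarFile.entries();
    while (entries.hasMoreElements()) {
      JarEntry entry = entries.nextElement();
      if (entry.getName().endsWith(".class")) {
        classNames.add(entry.getName());
      }
    }
  }
  return classNames;
}
```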

Re: [PR] HDFS-17543. [ARR]: AsyncUtil makes asynchronous code more concise and easier. [hadoop]

2024-06-06 Thread via GitHub


hadoop-yetus commented on PR #6868:
URL: https://github.com/apache/hadoop/pull/6868#issuecomment-2152937859

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 27s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ HDFS-17531 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 46s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  HDFS-17531 passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  HDFS-17531 passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 53s |  |  HDFS-17531 passed  |
   | +1 :green_heart: |  shadedclient  |  20m 38s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 14s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6868/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  26m 39s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 109m 25s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6868/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6868 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux dc276c982c91 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 
09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | HDFS-17531 / 518b1099e0077b58fdb0b519396ce72fceba8be3 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6868/1/testReport/ |
   | Max. process+thread count | 4584 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6868/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
This message was automatically generated.

[PR] HDFS-17545. [ARR] router async rpc client. [hadoop]

2024-06-06 Thread via GitHub


KeeProMise opened a new pull request, #6871:
URL: https://github.com/apache/hadoop/pull/6871

   
   
   ### Description of PR
   please see: https://issues.apache.org/jira/browse/HDFS-17545
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] Hdfs-17544. [ARR] The router client rpc protocol supports asynchrony. [hadoop]

2024-06-06 Thread via GitHub


KeeProMise opened a new pull request, #6870:
URL: https://github.com/apache/hadoop/pull/6870

   
   
   ### Description of PR
   Please see :  https://issues.apache.org/jira/browse/HDFS-17544
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hdfs-17544. [ARR] The router client rpc protocol supports asynchrony. [hadoop]

2024-06-06 Thread via GitHub


KeeProMise closed pull request #6869: Hdfs-17544. [ARR] The router client rpc 
protocol supports asynchrony.
URL: https://github.com/apache/hadoop/pull/6869


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] Hdfs-17544. [ARR] The router client rpc protocol supports asynchrony. [hadoop]

2024-06-06 Thread via GitHub


KeeProMise opened a new pull request, #6869:
URL: https://github.com/apache/hadoop/pull/6869

   
   
   ### Description of PR
   please see: https://issues.apache.org/jira/browse/HDFS-17544
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HDFS-17543. [ARR]: AsyncUtil makes asynchronous code more concise and easier. [hadoop]

2024-06-06 Thread via GitHub


KeeProMise opened a new pull request, #6868:
URL: https://github.com/apache/hadoop/pull/6868

   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19194) Add test to find unshaded dependencies in the aws sdk

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852807#comment-17852807
 ] 

ASF GitHub Bot commented on HADOOP-19194:
-

HarshitGupta11 commented on PR #6865:
URL: https://github.com/apache/hadoop/pull/6865#issuecomment-2152658341

   @steveloughran @mukund-thakur can I get a review on this? There are 
currently 4362 unshaded classes as per the test, the number reduces to 13 when 
we migrate to 2.25.53




> Add test to find unshaded dependencies in the aws sdk
> -
>
> Key: HADOOP-19194
> URL: https://issues.apache.org/jira/browse/HADOOP-19194
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Harshit Gupta
>Assignee: Harshit Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> Write a test to assess the aws sdk for unshaded artefacts on the class path 
> which might cause deployment failures. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19194:Add test to find unshaded dependencies in the aws sdk [hadoop]

2024-06-06 Thread via GitHub


HarshitGupta11 commented on PR #6865:
URL: https://github.com/apache/hadoop/pull/6865#issuecomment-2152658341

   @steveloughran @mukund-thakur can I get a review on this? There are 
currently 4362 unshaded classes as per the test, the number reduces to 13 when 
we migrate to 2.25.53


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19195) Upgrade aws sdk v2 to 2.25.53

2024-06-06 Thread Harshit Gupta (Jira)
Harshit Gupta created HADOOP-19195:
--

 Summary: Upgrade aws sdk v2 to 2.25.53
 Key: HADOOP-19195
 URL: https://issues.apache.org/jira/browse/HADOOP-19195
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.5.0, 3.4.1
Reporter: Harshit Gupta
Assignee: Harshit Gupta
 Fix For: 3.5.0


Upgrade aws sdk v2 to 2.25.53



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19194) Add test to find unshaded dependencies in the aws sdk

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852777#comment-17852777
 ] 

ASF GitHub Bot commented on HADOOP-19194:
-

hadoop-yetus commented on PR #6865:
URL: https://github.com/apache/hadoop/pull/6865#issuecomment-2152382730

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 11s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6865/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 44 new + 0 unchanged - 0 fixed 
= 44 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  5s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 23s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  86m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6865/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6865 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux e959c18d9692 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / aef7e84849b52092854f53d3de19e3b40257b24a |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6865/1/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   |

Re: [PR] HADOOP-19194:Add test to find unshaded dependencies in the aws sdk [hadoop]

2024-06-06 Thread via GitHub


hadoop-yetus commented on PR #6865:
URL: https://github.com/apache/hadoop/pull/6865#issuecomment-2152382730

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 11s | 
[/results-checkstyle-hadoop-tools_hadoop-aws.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6865/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-aws.txt)
 |  hadoop-tools/hadoop-aws: The patch generated 44 new + 0 unchanged - 0 fixed 
= 44 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m  5s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 23s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  86m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6865/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6865 |
   | Optional Tests | dupname asflicense mvnsite codespell detsecrets 
markdownlint compile javac javadoc mvninstall unit shadedclient spotbugs 
checkstyle |
   | uname | Linux e959c18d9692 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / aef7e84849b52092854f53d3de19e3b40257b24a |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6865/1/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6865/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
This message was automatically generated.

[jira] [Commented] (HADOOP-19114) upgrade to commons-compress 1.26.1 due to cves

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852766#comment-17852766
 ] 

ASF GitHub Bot commented on HADOOP-19114:
-

pjfanning opened a new pull request, #6867:
URL: https://github.com/apache/hadoop/pull/6867

   This addresses two CVEs triggered by malformed archives
   
   Important: Denial of Service CVE-2024-25710
   Moderate: Denial of Service CVE-2024-26308
   
   Contributed by PJ Fanning
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> upgrade to commons-compress 1.26.1 due to cves
> --
>
> Key: HADOOP-19114
> URL: https://issues.apache.org/jira/browse/HADOOP-19114
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, CVE
>Affects Versions: 3.4.0
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> 2 recent CVEs fixed - 
> https://mvnrepository.com/artifact/org.apache.commons/commons-compress
> Important: Denial of Service CVE-2024-25710
> Moderate: Denial of Service CVE-2024-26308



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-19114. Upgrade to commons-compress 1.26.1 due to CVEs. (#6636) [hadoop]

2024-06-06 Thread via GitHub


pjfanning opened a new pull request, #6867:
URL: https://github.com/apache/hadoop/pull/6867

   This addresses two CVEs triggered by malformed archives
   
   Important: Denial of Service CVE-2024-25710
   Moderate: Denial of Service CVE-2024-26308
   
   Contributed by PJ Fanning
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19154) upgrade bouncy castle to 1.78.1 due to CVEs

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852763#comment-17852763
 ] 

ASF GitHub Bot commented on HADOOP-19154:
-

pjfanning opened a new pull request, #6866:
URL: https://github.com/apache/hadoop/pull/6866

   Addresses
   
   * CVE-2024-29857 - Importing an EC certificate with specially crafted F2m 
parameters can cause high CPU usage during parameter evaluation.
   * CVE-2024-30171 - Possible timing based leakage in RSA based handshakes due 
to exception processing eliminated.
   * CVE-2024-30172 - Crafted signature and public key can be used to trigger 
an infinite loop in the Ed25519 verification code.
   * CVE-2024-301XX - When endpoint identification is enabled and an SSL socket 
is not created with an explicit hostname (as happens with HttpsURLConnection), 
hostname verification could be performed against a DNS-resolved IP address.
   
   Contributed by PJ Fanning
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> upgrade bouncy castle to 1.78.1 due to CVEs
> ---
>
> Key: HADOOP-19154
> URL: https://issues.apache.org/jira/browse/HADOOP-19154
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.4.0, 3.3.6
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> [https://www.bouncycastle.org/releasenotes.html#r1rv78]
> There is a v1.78.1 release but no notes for it yet.
> For v1.78
> h3. 2.1.5 Security Advisories.
> Release 1.78 deals with the following CVEs:
>  * CVE-2024-29857 - Importing an EC certificate with specially crafted F2m 
> parameters can cause high CPU usage during parameter evaluation.
>  * CVE-2024-30171 - Possible timing based leakage in RSA based handshakes due 
> to exception processing eliminated.
>  * CVE-2024-30172 - Crafted signature and public key can be used to trigger 
> an infinite loop in the Ed25519 verification code.
>  * CVE-2024-301XX - When endpoint identification is enabled and an SSL socket 
> is not created with an explicit hostname (as happens with 
> HttpsURLConnection), hostname verification could be performed against a 
> DNS-resolved IP address. This has been fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-19154. Upgrade bouncycastle to 1.78.1 due to CVEs (#6755) [hadoop]

2024-06-06 Thread via GitHub


pjfanning opened a new pull request, #6866:
URL: https://github.com/apache/hadoop/pull/6866

   Addresses
   
   * CVE-2024-29857 - Importing an EC certificate with specially crafted F2m 
parameters can cause high CPU usage during parameter evaluation.
   * CVE-2024-30171 - Possible timing based leakage in RSA based handshakes due 
to exception processing eliminated.
   * CVE-2024-30172 - Crafted signature and public key can be used to trigger 
an infinite loop in the Ed25519 verification code.
   * CVE-2024-301XX - When endpoint identification is enabled and an SSL socket 
is not created with an explicit hostname (as happens with HttpsURLConnection), 
hostname verification could be performed against a DNS-resolved IP address.
   
   Contributed by PJ Fanning
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [x] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19194) Add test to find unshaded dependencies in the aws sdk

2024-06-06 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852743#comment-17852743
 ] 

ASF GitHub Bot commented on HADOOP-19194:
-

HarshitGupta11 opened a new pull request, #6865:
URL: https://github.com/apache/hadoop/pull/6865

   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   Local testing via test.
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [x] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> Add test to find unshaded dependencies in the aws sdk
> -
>
> Key: HADOOP-19194
> URL: https://issues.apache.org/jira/browse/HADOOP-19194
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Harshit Gupta
>Assignee: Harshit Gupta
>Priority: Major
> Fix For: 3.4.1
>
>
> Write a test to assess the aws sdk for unshaded artefacts on the class path 
> which might cause deployment failures. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19194) Add test to find unshaded dependencies in the aws sdk

2024-06-06 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19194:

Labels: pull-request-available  (was: )

> Add test to find unshaded dependencies in the aws sdk
> -
>
> Key: HADOOP-19194
> URL: https://issues.apache.org/jira/browse/HADOOP-19194
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Harshit Gupta
>Assignee: Harshit Gupta
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>
> Write a test to assess the aws sdk for unshaded artefacts on the class path 
> which might cause deployment failures. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-19194:Add test to find unshaded dependencies in the aws sdk [hadoop]

2024-06-06 Thread via GitHub


HarshitGupta11 opened a new pull request, #6865:
URL: https://github.com/apache/hadoop/pull/6865

   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   Local testing via test.
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [x] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19194) Add test to find unshaded dependencies in the aws sdk

2024-06-06 Thread Harshit Gupta (Jira)
Harshit Gupta created HADOOP-19194:
--

 Summary: Add test to find unshaded dependencies in the aws sdk
 Key: HADOOP-19194
 URL: https://issues.apache.org/jira/browse/HADOOP-19194
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Harshit Gupta
Assignee: Harshit Gupta
 Fix For: 3.4.1


Write a test to assess the aws sdk for unshaded artefacts on the class path 
which might cause deployment failures. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17528. FsImageValidation: set txid when saving a new image [hadoop]

2024-06-06 Thread via GitHub


szetszwo commented on PR #6828:
URL: https://github.com/apache/hadoop/pull/6828#issuecomment-2151810390

   The last few lines of the Jenkins build before failure:
   ```
   [2024-06-03T20:14:42.551Z] cd 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-6828/ubuntu-focal/src/hadoop-hdfs-project/hadoop-hdfs
   [2024-06-03T20:14:42.551Z] /usr/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-6828/yetus-m2/hadoop-trunk-patch-0
 -Dsurefire.rerunFailingTestsCount=2 -Pparallel-tests -P!shelltest -Pnative 
-Drequire.fuse -Drequire.openssl -Drequire.snappy -Drequire.valgrind 
-Drequire.zstd -Drequire.test.libhadoop -Pyarn-ui clean test -fae > 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-6828/ubuntu-focal/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 2>&1
   [2024-06-05T05:43:40.181Z] wrapper script does not seem to be touching the 
log file in 
/home/jenkins/jenkins-home/workspace/hadoop-multibranch_PR-6828@tmp/durable-db261340
   [2024-06-05T05:43:40.181Z] (JENKINS-48300: if on an extremely laggy 
filesystem, consider 
-Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=86400)
   script returned exit code -1
   ```





Re: [PR] HDFS-17534 RBF: Support leader follower mode for multiple subclusters [hadoop]

2024-06-06 Thread via GitHub


hadoop-yetus commented on PR #6861:
URL: https://github.com/apache/hadoop/pull/6861#issuecomment-2151728419

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   6m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  cc  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  cc  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 13s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6861/3/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 3 new + 1 
unchanged - 0 fixed = 4 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  26m 58s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 114m 45s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.45 ServerAPI=1.45 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6861/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6861 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint 
bufcompat |
   | uname | Linux a84655c37962 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 
09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c741dc2d3af22b8e68a48be45001d7373206ca32 |
   | Default Java | Private Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.23+9-post-Ubuntu-1ubuntu120.04.2 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_412-8u412-ga-1~20.04.1-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6861/3/testReport/ |
   | Max. process+thread count | 4298 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: 
hadoop-hdfs-project/hadoop-hdfs-rbf |
   | Console 

[jira] [Commented] (HADOOP-18929) Build failure while trying to create apache 3.3.7 release locally.

2024-06-06 Thread Kanaka Kumar Avvaru (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17852667#comment-17852667
 ] 

Kanaka Kumar Avvaru commented on HADOOP-18929:
--

Quite a few third-party jar fixes have been merged into branch-3.3 in the year
since 3.3.6 was released in June 2023.

Is there a next 3.3.x release planned soon?

> Build failure while trying to create apache 3.3.7 release locally.
> --
>
> Key: HADOOP-18929
> URL: https://issues.apache.org/jira/browse/HADOOP-18929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: Mukund Thakur
>Assignee: PJ Fanning
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>
> {noformat}
> [INFO] ---< org.apache.hadoop:hadoop-client-check-test-invariants >---
> [INFO] Building Apache Hadoop Client Packaging Invariants for Test 3.3.9-SNAPSHOT [105/111]
> [INFO] --------------------------------[ pom ]---------------------------------
> [INFO]
> [INFO] --- maven-enforcer-plugin:3.0.0-M1:enforce (enforce-banned-dependencies) @ hadoop-client-check-test-invariants ---
> [INFO] Adding ignorable dependency: org.apache.hadoop:hadoop-annotations:null
> [INFO]   Adding ignore: *
> [WARNING] Rule 1: org.apache.maven.plugins.enforcer.BanDuplicateClasses failed with message:
> Duplicate classes found:
>   Found in:
>     org.apache.hadoop:hadoop-client-minicluster:jar:3.3.9-SNAPSHOT:compile
>     org.apache.hadoop:hadoop-client-runtime:jar:3.3.9-SNAPSHOT:compile
>   Duplicate classes:
>     META-INF/versions/9/module-info.class
> {noformat}
> CC [~ste...@apache.org]  [~weichu] 
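The duplicate flagged above is the multi-release `module-info.class` shipped in both shaded jars. One common remedy, sketched here on the assumption that maven-shade-plugin builds these artifacts (this is not necessarily the fix that went into 3.4.0), is to drop module descriptors while shading:

```xml
<!-- Hypothetical maven-shade-plugin filter: exclude module descriptors
     from every shaded artifact so BanDuplicateClasses cannot flag them. -->
<filters>
  <filter>
    <artifact>*:*</artifact>
    <excludes>
      <exclude>module-info.class</exclude>
      <exclude>META-INF/versions/*/module-info.class</exclude>
    </excludes>
  </filter>
</filters>
```

Alternatively, since the log shows the enforcer already adding per-dependency ignores, the descriptor could instead be exempted on the BanDuplicateClasses side.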


