Re: [PR] HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint [hadoop]

2024-03-12 Thread via GitHub


virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1993626962

   Addendum PR: #6624





[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825910#comment-17825910
 ] 

ASF GitHub Bot commented on HADOOP-19066:
-

virajjasani opened a new pull request, #6624:
URL: https://github.com/apache/hadoop/pull/6624

   Jira: HADOOP-19066
   
   Addendum to PR: #6539




> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> --
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
> considers overriding endpoint and enabling fips as mutually exclusive, we 
> fail fast if fs.s3a.endpoint is set with fips support (details on 
> HADOOP-18975).
> Now, we no longer override SDK endpoint for central endpoint since we enable 
> cross region access (details on HADOOP-19044) but we would still fail fast if 
> endpoint is central and fips is enabled.
> Changes proposed:
>  * S3A to fail fast only if FIPS is enabled and non-central endpoint is 
> configured.
>  * Tests to ensure S3 bucket is accessible with default region us-east-2 with 
> cross region access (expected with central endpoint).
>  * Document FIPS support with central endpoint on connecting.html.
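For illustration, a minimal sketch of the combination this change permits, written against the Java `Configuration` API. The key names come from the description above; treat the central-endpoint value `s3.amazonaws.com` as an assumption:

```java
import org.apache.hadoop.conf.Configuration;

// Sketch only: after this change, central endpoint + FIPS is accepted,
// while a regional endpoint + FIPS still fails fast at client creation.
Configuration conf = new Configuration();
conf.set("fs.s3a.endpoint", "s3.amazonaws.com");  // central endpoint (assumed value)
conf.setBoolean("fs.s3a.endpoint.fips", true);    // FIPS now allowed alongside it
```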






[PR] HADOOP-19066. Run FIPS test for valid bucket locations (ADDENDUM) [hadoop]

2024-03-12 Thread via GitHub


virajjasani opened a new pull request, #6624:
URL: https://github.com/apache/hadoop/pull/6624

   Jira: HADOOP-19066
   
   Addendum to PR: #6539





Re: [PR] [ABFS] [Backport 3.4] Back Merging PRs from trunk to Branch 3.4 [hadoop]

2024-03-12 Thread via GitHub


anujmodi2021 commented on PR #6611:
URL: https://github.com/apache/hadoop/pull/6611#issuecomment-1993585491

   > I think it will depend on whether the first commit works as it is and is not 
dependent on the second one. We need to merge these separately so that if we 
have to revert in the future we can revert only one. @anujmodi2021 there 
shouldn't be any problem with that, right?
   
   Right.
   This shouldn't be a problem. Both changes are independent of each other; 
it's just that some common files are modified.
   If there is a need to revert one of them, we might have to resolve some 
merge conflicts, but nothing should break in production.





Re: [PR] YARN-11660. Fix huge performance regression for SingleConstraintAppPlacementAllocator [hadoop]

2024-03-12 Thread via GitHub


hadoop-yetus commented on PR #6623:
URL: https://github.com/apache/hadoop/pull/6623#issuecomment-1993566244

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 48s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 58s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  86m 39s |  |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 170m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6623/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6623 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5dcf70804e69 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3cd69ab870947d5b597703b2658957e9fef4cf52 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6623/2/testReport/ |
   | Max. process+thread count | 939 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6623/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825904#comment-17825904
 ] 

ASF GitHub Bot commented on HADOOP-19066:
-

virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1993556163

   Oh wait, FIPS is only available for US and Canada endpoints, so the above 
error is legit.
   
   Let me provide an addendum to skip the test when non-US/Canada endpoints 
are used.




> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> --
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
> considers overriding endpoint and enabling fips as mutually exclusive, we 
> fail fast if fs.s3a.endpoint is set with fips support (details on 
> HADOOP-18975).
> Now, we no longer override SDK endpoint for central endpoint since we enable 
> cross region access (details on HADOOP-19044) but we would still fail fast if 
> endpoint is central and fips is enabled.
> Changes proposed:
>  * S3A to fail fast only if FIPS is enabled and non-central endpoint is 
> configured.
>  * Tests to ensure S3 bucket is accessible with default region us-east-2 with 
> cross region access (expected with central endpoint).
>  * Document FIPS support with central endpoint on connecting.html.






Re: [PR] HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint [hadoop]

2024-03-12 Thread via GitHub


virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1993556163

   Oh wait, FIPS is only available for US and Canada endpoints, so the above 
error is legit.
   
   Let me provide an addendum to skip the test when non-US/Canada endpoints 
are used.
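
A minimal sketch of what such an addendum guard could look like; the helper name and the region-prefix check are illustrative, not the actual patch:

```java
import org.junit.Assume;

// Hypothetical test guard: FIPS endpoints only exist for US and Canada
// regions, so skip the FIPS test cases when the bucket region is elsewhere.
private static void skipIfFipsNotSupported(String bucketRegion) {
  Assume.assumeTrue(
      "FIPS is only supported for US/Canada endpoints: " + bucketRegion,
      bucketRegion.startsWith("us-") || bucketRegion.startsWith("ca-"));
}
```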





Re: [PR] [ABFS] [Backport 3.4] Back Merging PRs from trunk to Branch 3.4 [hadoop]

2024-03-12 Thread via GitHub


mukund-thakur commented on PR #6611:
URL: https://github.com/apache/hadoop/pull/6611#issuecomment-1993499010

   Cherry-picked the first one, compiled, and ran the Azure tests; it went fine. 
Will do the same for the second one and fold the checkstyle fixes into it. 





Re: [PR] HDFS-17380. FsImageValidation: remove inaccessible nodes. [hadoop]

2024-03-12 Thread via GitHub


szetszwo commented on PR #6549:
URL: https://github.com/apache/hadoop/pull/6549#issuecomment-1993371108

   > When you talk about inaccessible inodes, do you mean that unexpected 
NameNode logic caused some inodes to become unreachable?
   
   Yes.  The inaccessible inodes may be caused by a bug like HDFS-17045.
   
   > Recovering from an earlier checkpoint will not lose data; it will keep both 
the fsimage and all edit logs up to the latest transaction.
   
   If it was caused by a bug, replaying the edit log will reproduce the same 
corrupted fsimage.
   
   > ... Will get involved in the review once I understand what it will improve. ...
   
   @Hexiaoqiao, please take a look at the current change, which updates the 
`FsImageValidation` tool.





Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]

2024-03-12 Thread via GitHub


saxenapranav commented on code in PR #6314:
URL: https://github.com/apache/hadoop/pull/6314#discussion_r1522432071


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -208,6 +240,8 @@ public AbfsClient(final URL baseUrl, final SharedKeyCredentials sharedKeyCredent
 
   @Override
   public void close() throws IOException {
+    runningTimerTask.cancel();

Review Comment:
   Let's add a null check in case runningTimerTask has not been spawned before 
the client is closed.
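
A sketch of the suggested guard, using the field and method names from the diff above:

```java
@Override
public void close() throws IOException {
  // The timer task is only spawned once metric collection starts, so it can
  // still be null if the client is closed before that happens.
  if (runningTimerTask != null) {
    runningTimerTask.cancel();
  }
  // ... rest of the existing close logic ...
}
```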



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -1693,6 +1734,106 @@ protected AccessTokenProvider getTokenProvider() {
 return tokenProvider;
   }
 
+  /**
+   * Retrieves a TracingContext object configured for metric tracking.
+   * This method creates a TracingContext object with the validated client correlation ID,
+   * the host name of the local machine (or "UnknownHost" if unable to determine),
+   * the file system operation type set to GET_ATTR, and additional configuration parameters
+   * for metric tracking.
+   * The TracingContext is intended for use in tracking metrics related to Azure Blob FileSystem (ABFS) operations.
+   *
+   * @return A TracingContext object configured for metric tracking.
+   */
+  private TracingContext getMetricTracingContext() {
+    String hostName;
+    try {
+      hostName = InetAddress.getLocalHost().getHostName();
+    } catch (UnknownHostException e) {
+      hostName = "UnknownHost";
+    }
+    return new TracingContext(TracingContext.validateClientCorrelationID(
+        abfsConfiguration.getClientCorrelationId()),
+        hostName, FSOperationType.GET_ATTR, true,
+        abfsConfiguration.getTracingHeaderFormat(),
+        null, abfsCounters.toString());
+  }
+
+  /**
+   * Synchronized method to suspend or resume timer.
+   * @param timerFunctionality resume or suspend.
+   * @param timerTask The timertask object.
+   * @return true or false.
+   */
+  synchronized boolean timerOrchestrator(TimerFunctionality timerFunctionality,
+  TimerTask timerTask) {
+    this.runningTimerTask = timerTask;

Review Comment:
   Let's set `runningTimerTask` in the constructor of the `TimerTask`. Reason 
being, this method is only called after the initial delay given to 
`timer.schedule`, and the client may get closed before that.
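
A sketch of that suggestion, assuming a dedicated TimerTask subclass; the class name is hypothetical:

```java
// Hypothetical inner class of AbfsClient: registering the task in its own
// constructor makes runningTimerTask non-null immediately, instead of only
// after the initial delay passed to timer.schedule() has elapsed.
private class MetricTimerTask extends TimerTask {
  MetricTimerTask() {
    AbfsClient.this.runningTimerTask = this;
  }

  @Override
  public void run() {
    // existing periodic metric-collection logic goes here
  }
}
```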



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -1411,6 +1448,97 @@ protected AccessTokenProvider getTokenProvider() {
 return tokenProvider;
   }
 
+  public AzureBlobFileSystem getMetricFilesystem() throws IOException {
+    if (metricFs == null) {
+      try {
+        Configuration metricConfig = abfsConfiguration.getRawConfiguration();
+        String metricAccountKey = metricConfig.get(FS_AZURE_METRIC_ACCOUNT_KEY);
+        final String abfsMetricUrl = metricConfig.get(FS_AZURE_METRIC_URI);
+        if (abfsMetricUrl == null) {
+          return null;
+        }
+        metricConfig.set(FS_AZURE_ACCOUNT_KEY_PROPERTY_NAME, metricAccountKey);
+        metricConfig.set(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, "false");
+        URI metricUri;
+        metricUri = new URI(FileSystemUriSchemes.ABFS_SCHEME, abfsMetricUrl, null, null, null);
+        metricFs = (AzureBlobFileSystem) FileSystem.newInstance(metricUri, metricConfig);
+      } catch (AzureBlobFileSystemException | URISyntaxException ex) {
+        //do nothing
+      }
+    }
+    return metricFs;
+  }
+
+  private TracingContext getMetricTracingContext() {
+    String hostName;
+    try {
+      hostName = InetAddress.getLocalHost().getHostName();
+    } catch (UnknownHostException e) {
+      hostName = "UnknownHost";
+    }
+    return new TracingContext(TracingContext.validateClientCorrelationID(
+        abfsConfiguration.getClientCorrelationId()),
+        hostName, FSOperationType.GET_ATTR, true,
+        abfsConfiguration.getTracingHeaderFormat(),
+        null, abfsCounters.toString());
+  }
+
+  /**
+   * Synchronized method to suspend or resume timer.
+   * @param timerFunctionality resume or suspend.
+   * @param timerTask The timertask object.
+   * @return true or false.
+   */
+  synchronized boolean timerOrchestrator(TimerFunctionality timerFunctionality,

Review Comment:
   Looks like you have applied the comment in `AbfsClientThrottlingAnalyser` :). 
This comment was more for this PR change; great that you have applied it in the 
other place as well, but we can think about whether we want that file change in 
this scope.






Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]

2024-03-12 Thread via GitHub


saxenapranav commented on code in PR #6314:
URL: https://github.com/apache/hadoop/pull/6314#discussion_r1510679721


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -1411,6 +1448,97 @@ protected AccessTokenProvider getTokenProvider() {
 return tokenProvider;
   }
 
+  public AzureBlobFileSystem getMetricFilesystem() throws IOException {
+    if (metricFs == null) {
+      try {
+        Configuration metricConfig = abfsConfiguration.getRawConfiguration();
+        String metricAccountKey = metricConfig.get(FS_AZURE_METRIC_ACCOUNT_KEY);
+        final String abfsMetricUrl = metricConfig.get(FS_AZURE_METRIC_URI);
+        if (abfsMetricUrl == null) {
+          return null;
+        }
+        metricConfig.set(FS_AZURE_ACCOUNT_KEY_PROPERTY_NAME, metricAccountKey);
+        metricConfig.set(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, "false");
+        URI metricUri;
+        metricUri = new URI(FileSystemUriSchemes.ABFS_SCHEME, abfsMetricUrl, null, null, null);
+        metricFs = (AzureBlobFileSystem) FileSystem.newInstance(metricUri, metricConfig);
+      } catch (AzureBlobFileSystemException | URISyntaxException ex) {
+        //do nothing
+      }
+    }
+    return metricFs;
+  }
+
+  private TracingContext getMetricTracingContext() {
+    String hostName;
+    try {
+      hostName = InetAddress.getLocalHost().getHostName();
+    } catch (UnknownHostException e) {
+      hostName = "UnknownHost";
+    }
+    return new TracingContext(TracingContext.validateClientCorrelationID(
+        abfsConfiguration.getClientCorrelationId()),
+        hostName, FSOperationType.GET_ATTR, true,
+        abfsConfiguration.getTracingHeaderFormat(),
+        null, abfsCounters.toString());
+  }
+
+  /**
+   * Synchronized method to suspend or resume timer.
+   * @param timerFunctionality resume or suspend.
+   * @param timerTask The timertask object.
+   * @return true or false.
+   */
+  synchronized boolean timerOrchestrator(TimerFunctionality timerFunctionality,

Review Comment:
   Design is good and doesn't need change. What I am suggesting is: we do not 
make this method synchronized; instead, only the actions taken when the 
conditions hold are synchronized. This helps because the conditions are going 
to be true only sometimes, but we would otherwise always synchronize even when 
there is no action to be taken (a cheap unsynchronized check first, repeated 
inside the lock before acting). What I am proposing is:
   ```
   boolean timerOrchestrator(TimerFunctionality timerFunctionality,
       TimerTask timerTask) {
     switch (timerFunctionality) {
     case RESUME:
       if (metricCollectionStopped.get()) {
         synchronized (this) {
           if (metricCollectionStopped.get()) {
             resumeTimer();
           }
         }
       }
       break;
     case SUSPEND:
       long now = System.currentTimeMillis();
       long lastExecutionTime = abfsCounters.getLastExecutionTime().get();
       if (metricCollectionEnabled && (now - lastExecutionTime >= metricAnalysisPeriod)) {
         synchronized (this) {
           if (!(metricCollectionEnabled && (now - lastExecutionTime >= metricAnalysisPeriod))) {
             return false;
           }
           timerTask.cancel();
           timer.purge();
           metricCollectionStopped.set(true);
           return true;
         }
       }
       break;
     default:
       break;
     }
     return false;
   }
   ```
   






Re: [PR] HDFS-17352. Add configuration to control whether DN delete this replica from disk when client requests a missing block [hadoop]

2024-03-12 Thread via GitHub


haiyang1987 commented on PR #6559:
URL: https://github.com/apache/hadoop/pull/6559#issuecomment-1993179784

   > Thanks for involving me here. I think @zhangshuyan0 is more expert on this 
improvement. Let's wait for her/his feedback.
   
   OK, thanks for your comment.





Re: [PR] HDFS-17352. Add configuration to control whether DN delete this replica from disk when client requests a missing block [hadoop]

2024-03-12 Thread via GitHub


Hexiaoqiao commented on PR #6559:
URL: https://github.com/apache/hadoop/pull/6559#issuecomment-1993154964

   Thanks for involving me here. I think @zhangshuyan0 is more expert on this 
improvement. Let's wait for her/his feedback.





Re: [PR] YARN-11660. Fix huge performance regression for SingleConstraintAppPlacementAllocator [hadoop]

2024-03-12 Thread via GitHub


zuston commented on code in PR #6623:
URL: https://github.com/apache/hadoop/pull/6623#discussion_r1522391003


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/SingleConstraintAppPlacementAllocator.java:
##
@@ -309,6 +309,10 @@ private void decreasePendingNumAllocation() {
 // Deduct pending #allocations by 1
 ResourceSizing sizing = schedulingRequest.getResourceSizing();
 sizing.setNumAllocations(sizing.getNumAllocations() - 1);
+
+appSchedulingInfo.decPendingResource(

Review Comment:
   Done






Re: [PR] [ABFS] [Backport 3.4] Back Merging PRs from trunk to Branch 3.4 [hadoop]

2024-03-12 Thread via GitHub


mukund-thakur commented on PR #6611:
URL: https://github.com/apache/hadoop/pull/6611#issuecomment-1992698068

   I think it will depend on whether the first commit works as it is and is not 
dependent on the second one. We need to merge these separately so that if we 
have to revert in the future we can revert only one. 
   @anujmodi2021 there shouldn't be any problem with that, right? 
   





[jira] [Commented] (HADOOP-19106) [ABFS] All tests of. ITestAzureBlobFileSystemAuthorization fails with NPE

2024-03-12 Thread Mukund Thakur (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825849#comment-17825849
 ] 

Mukund Thakur commented on HADOOP-19106:


It fails because 
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemAuthorization.java#L360]
 returns null, and this only gets initialized when the authType is SAS: 
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java#L1733]
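
Given that analysis, a hedged sketch of a test-setup guard; the accessor names follow the hadoop-azure test base class, but treat them as assumptions:

```java
import org.junit.Assume;

// Skip instead of NPE when the account is not configured for SAS auth,
// since the SAS token provider is only initialized for AuthType.SAS.
Assume.assumeTrue("Authorizer tests require AuthType.SAS",
    getAuthType() == AuthType.SAS);
```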
 

 

 

> [ABFS] All tests of. ITestAzureBlobFileSystemAuthorization fails with NPE
> -
>
> Key: HADOOP-19106
> URL: https://issues.apache.org/jira/browse/HADOOP-19106
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Anuj Modi
>Priority: Major
>
> When the config below is set to true, all of the tests fail; otherwise they 
> are skipped.
> 
> <property>
>   <name>fs.azure.test.namespace.enabled</name>
>   <value>true</value>
> </property>
> 
>  
> [*ERROR*] 
> testOpenFileAuthorized(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemAuthorization)
>   Time elapsed: 0.064 s  <<< ERROR!
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemAuthorization.runTest(ITestAzureBlobFileSystemAuthorization.java:273)
>  at 
> org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemAuthorization.testOpenFileAuthorized(ITestAzureBlobFileSystemAuthorization.java:132)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)






[jira] [Commented] (HADOOP-19106) [ABFS] All tests of. ITestAzureBlobFileSystemAuthorization fails with NPE

2024-03-12 Thread Mukund Thakur (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825838#comment-17825838
 ] 

Mukund Thakur commented on HADOOP-19106:


It does fail for me with the same config mentioned in  
[https://github.com/apache/hadoop/pull/6069#issuecomment-1965105331] + 
fs.azure.test.namespace.enabled=true. 

 

> [ABFS] All tests of. ITestAzureBlobFileSystemAuthorization fails with NPE
> -
>
> Key: HADOOP-19106
> URL: https://issues.apache.org/jira/browse/HADOOP-19106
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Mukund Thakur
>Assignee: Anuj Modi
>Priority: Major
>
> When the config below is set to true, all of the tests fail; otherwise they 
> are skipped.
> 
> <property>
>   <name>fs.azure.test.namespace.enabled</name>
>   <value>true</value>
> </property>
> 
>  
> [*ERROR*] 
> testOpenFileAuthorized(org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemAuthorization)
>   Time elapsed: 0.064 s  <<< ERROR!
> java.lang.NullPointerException
>  at 
> org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemAuthorization.runTest(ITestAzureBlobFileSystemAuthorization.java:273)
>  at 
> org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemAuthorization.testOpenFileAuthorized(ITestAzureBlobFileSystemAuthorization.java:132)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498)
>  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)






[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825832#comment-17825832
 ] 

ASF GitHub Bot commented on HADOOP-19066:
-

virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992594998

   The issue seems to be with the FIPS cases.
   
   With FIPS enabled:
   
   1. bucket created in Oregon, S3 client configured with `us-east-2` region 
with cross-region access enabled and no endpoint override: things look good
   2. bucket created in London, S3 client configured with `us-east-2` region 
with cross-region access enabled and no endpoint override: fails with
   ```
   Caused by: software.amazon.awssdk.core.exception.SdkClientException: 
Received an UnknownHostException when attempting to interact with a service. 
See cause for the exact endpoint that is failing to resolve. If this is 
happening on an endpoint that previously worked, there may be a network 
connectivity issue or your DNS cache could be storing endpoints for too long.
   ```
   3. bucket created in Paris, S3 client configured with `us-east-2` region 
with cross-region access enabled and no endpoint override: fails with
   ```
   Caused by: software.amazon.awssdk.core.exception.SdkClientException: 
Received an UnknownHostException when attempting to interact with a service. 
See cause for the exact endpoint that is failing to resolve. If this is 
happening on an endpoint that previously worked, there may be a network 
connectivity issue or your DNS cache could be storing endpoints for too long.
   ```
   
   Will create an SDK issue soon.




> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> --
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
> considers overriding endpoint and enabling fips as mutually exclusive, we 
> fail fast if fs.s3a.endpoint is set with fips support (details on 
> HADOOP-18975).
> Now, we no longer override SDK endpoint for central endpoint since we enable 
> cross region access (details on HADOOP-19044) but we would still fail fast if 
> endpoint is central and fips is enabled.
> Changes proposed:
>  * S3A to fail fast only if FIPS is enabled and non-central endpoint is 
> configured.
>  * Tests to ensure S3 bucket is accessible with default region us-east-2 with 
> cross region access (expected with central endpoint).
>  * Document FIPS support with central endpoint on connecting.html.






Re: [PR] HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint [hadoop]

2024-03-12 Thread via GitHub


virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992594998

   The issue seems to be with the FIPS cases.
   
   With FIPS enabled:
   
   1. bucket created in Oregon, S3 client configured with `us-east-2` region 
with cross-region access enabled and no endpoint override: things look good
   2. bucket created in London, S3 client configured with `us-east-2` region 
with cross-region access enabled and no endpoint override: fails with
   ```
   Caused by: software.amazon.awssdk.core.exception.SdkClientException: 
Received an UnknownHostException when attempting to interact with a service. 
See cause for the exact endpoint that is failing to resolve. If this is 
happening on an endpoint that previously worked, there may be a network 
connectivity issue or your DNS cache could be storing endpoints for too long.
   ```
   3. bucket created in Paris, S3 client configured with `us-east-2` region 
with cross-region access enabled and no endpoint override: fails with
   ```
   Caused by: software.amazon.awssdk.core.exception.SdkClientException: 
Received an UnknownHostException when attempting to interact with a service. 
See cause for the exact endpoint that is failing to resolve. If this is 
happening on an endpoint that previously worked, there may be a network 
connectivity issue or your DNS cache could be storing endpoints for too long.
   ```
   
   Will create an SDK issue soon.





[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825824#comment-17825824
 ] 

ASF GitHub Bot commented on HADOOP-19066:
-

virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992532765

   Just created a bucket in London and now I can reproduce the failure; 
checking.




> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> --
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
> considers overriding endpoint and enabling fips as mutually exclusive, we 
> fail fast if fs.s3a.endpoint is set with fips support (details on 
> HADOOP-18975).
> Now, we no longer override SDK endpoint for central endpoint since we enable 
> cross region access (details on HADOOP-19044) but we would still fail fast if 
> endpoint is central and fips is enabled.
> Changes proposed:
>  * S3A to fail fast only if FIPS is enabled and non-central endpoint is 
> configured.
>  * Tests to ensure S3 bucket is accessible with default region us-east-2 with 
> cross region access (expected with central endpoint).
>  * Document FIPS support with central endpoint on connecting.html.






Re: [PR] HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint [hadoop]

2024-03-12 Thread via GitHub


virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992532765

   Just created a bucket in London and now I can reproduce the failure; 
checking.





[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825822#comment-17825822
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

steveloughran commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1522069895


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -401,4 +406,22 @@ private static Region getS3RegionFromEndpoint(final String endpoint,
     return Region.of(AWS_S3_DEFAULT_REGION);
   }
 
+  private static , ClientT> void
+  applyS3AccessGrantsConfigurations(BuilderT builder, Configuration conf) {
+    boolean isS3AccessGrantsEnabled = conf.getBoolean(AWS_S3_ACCESS_GRANTS_ENABLED, false);
+    if (!isS3AccessGrantsEnabled) {
+      LOG.debug("S3 Access Grants plugin is not enabled.");
+      return;
+    }
+
+    boolean isFallbackEnabled =
+        conf.getBoolean(AWS_S3_ACCESS_GRANTS_FALLBACK_TO_IAM_ENABLED, false);
+    S3AccessGrantsPlugin accessGrantsPlugin =
+        S3AccessGrantsPlugin.builder()
+            .enableFallback(isFallbackEnabled)
+            .build();
+    builder.addPlugin(accessGrantsPlugin);
+    LOG.info("S3 Access Grants plugin is enabled with IAM fallback set to {}", isFallbackEnabled);

Review Comment:
   Might be good to log this only once, so that on a large system the logs 
don't get full of noise. Tricky choice.
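
One way to act on that, as a sketch: Hadoop's `LogExactlyOnce` wrapper (assuming it is acceptable in this module) caps the message at one occurrence per process:

```java
import org.apache.hadoop.fs.store.LogExactlyOnce;

// Sketch: emit the enablement message at most once per JVM rather than
// once per client created.
private static final LogExactlyOnce LOG_S3AG_ENABLED = new LogExactlyOnce(LOG);

// ...then, inside applyS3AccessGrantsConfigurations():
// LOG_S3AG_ENABLED.info(
//     "S3 Access Grants plugin is enabled with IAM fallback set to {}",
//     isFallbackEnabled);
```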



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.assertj.core.api.AbstractStringAssert;
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import software.amazon.awssdk.awscore.AwsClient;
import software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsIdentityProvider;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+
+import static org.apache.hadoop.fs.s3a.Constants.AWS_S3_ACCESS_GRANTS_ENABLED;
+
+
+/**
+ * Test S3 Access Grants configurations.
+ */
+public class TestS3AccessGrantConfiguration extends AbstractHadoopTestBase {
+  /**
+   * This credential provider will be attached to any client
+   * that has been configured with the S3 Access Grants plugin.
+   * {@link software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsPlugin}.

Review Comment:
   I don't think javadoc will resolve that; just use {@code}
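
i.e. something like this sketch:

```java
/**
 * This credential provider will be attached to any client
 * that has been configured with the S3 Access Grants plugin,
 * {@code software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsPlugin}.
 */
```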



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -178,6 +181,8 @@ private , ClientT> Build
 
 configureEndpointAndRegion(builder, parameters, conf);
 
+applyS3AccessGrantsConfigurations(builder, conf);

Review Comment:
   rename maybeApply... to make clear it isn't guaranteed to happen



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##
@@ -5516,6 +5522,10 @@ public boolean hasPathCapability(final Path path, final 
String capability)
 case FIPS_ENDPOINT:
   return fipsEnabled;
 
+// is S3 Access Grants enabled
+case AWS_S3_ACCESS_GRANTS_ENABLED:

Review Comment:
   our bucket-info command has a list of capabilities to probe for; add this 
to the list
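
For reference, a sketch of how the capability would then surface to callers; the probing pattern mirrors the FIPS_ENDPOINT case in the diff above, and `fs` is assumed to be an initialized S3AFileSystem:

```java
// Client-side probe for the new capability (sketch; hasPathCapability
// throws IOException, so call it inside the usual try/catch).
boolean accessGrantsEnabled = fs.hasPathCapability(
    new Path("/"), AWS_S3_ACCESS_GRANTS_ENABLED);
```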



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##
@@ -494,6 +494,11 @@ public class S3AFileSystem extends FileSystem implements 
StreamCapabilities,
*/
   private String configuredRegion;
 
+  /**
+   * Is a S3 Access Grants Enabled?

Review Comment:
   nit: "are S3 Access Grants Enabled?"



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses 

Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]

2024-03-12 Thread via GitHub


steveloughran commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1522069895


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -401,4 +406,22 @@ private static Region getS3RegionFromEndpoint(final String endpoint,
     return Region.of(AWS_S3_DEFAULT_REGION);
   }
 
+  private static , ClientT> void
+  applyS3AccessGrantsConfigurations(BuilderT builder, Configuration conf) {
+    boolean isS3AccessGrantsEnabled = conf.getBoolean(AWS_S3_ACCESS_GRANTS_ENABLED, false);
+    if (!isS3AccessGrantsEnabled) {
+      LOG.debug("S3 Access Grants plugin is not enabled.");
+      return;
+    }
+
+    boolean isFallbackEnabled =
+        conf.getBoolean(AWS_S3_ACCESS_GRANTS_FALLBACK_TO_IAM_ENABLED, false);
+    S3AccessGrantsPlugin accessGrantsPlugin =
+        S3AccessGrantsPlugin.builder()
+            .enableFallback(isFallbackEnabled)
+            .build();
+    builder.addPlugin(accessGrantsPlugin);
+    LOG.info("S3 Access Grants plugin is enabled with IAM fallback set to {}", isFallbackEnabled);

Review Comment:
   Might be good to log this only once, so that on a large system the logs 
don't get full of noise. Tricky choice.



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import org.assertj.core.api.AbstractStringAssert;
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import software.amazon.awssdk.awscore.AwsClient;
import software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsIdentityProvider;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+
+import static org.apache.hadoop.fs.s3a.Constants.AWS_S3_ACCESS_GRANTS_ENABLED;
+
+
+/**
+ * Test S3 Access Grants configurations.
+ */
+public class TestS3AccessGrantConfiguration extends AbstractHadoopTestBase {
+  /**
+   * This credential provider will be attached to any client
+   * that has been configured with the S3 Access Grants plugin.
+   * {@link software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsPlugin}.

Review Comment:
   I don't think javadoc will resolve that; just use {@code}



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -178,6 +181,8 @@ private , ClientT> Build
 
 configureEndpointAndRegion(builder, parameters, conf);
 
+applyS3AccessGrantsConfigurations(builder, conf);

Review Comment:
   rename maybeApply... to make clear it isn't guaranteed to happen



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##
@@ -5516,6 +5522,10 @@ public boolean hasPathCapability(final Path path, final 
String capability)
 case FIPS_ENDPOINT:
   return fipsEnabled;
 
+// is S3 Access Grants enabled
+case AWS_S3_ACCESS_GRANTS_ENABLED:

Review Comment:
   our bucket-info command has a list of capabilities to probe for; add this 
to the list



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:
##
@@ -494,6 +494,11 @@ public class S3AFileSystem extends FileSystem implements 
StreamCapabilities,
*/
   private String configuredRegion;
 
+  /**
+   * Is a S3 Access Grants Enabled?

Review Comment:
   nit: "are S3 Access Grants Enabled?"



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0

[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825820#comment-17825820
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

steveloughran commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1992515204

   You're right, there is no rename; copy is all there is. So that is not 
available yet? Hmmm. This isn't ready for production yet, is it? Let us keep it 
in trunk for now. The other strategy would be to do a feature branch for it, 
which has mixed benefits. Good: isolated from other work. Bad: isolated from 
other work. So far the changes are minimal enough that it is not a problem.
   
   Separately, I am working on a bulk delete API targeting Iceberg and similar, 
where the caller can generate a bulk delete call across the bucket; currently 
in S3AFS we only do bulk deletes down a "directory tree", either in delete or 
incrementally during rename(). In both of these cases there are already no 
guarantees that your system will be left in a nice state if you don't have the 
permissions to do things. 
   
   Regarding testing: when you think it is ready for others to play with, a 
section in testing.md on how to get set up for this would be good. I don't 
personally have plans for that; maybe I could persuade colleagues. I tried to 
test a lot of the other corner cases.




> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.






Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]

2024-03-12 Thread via GitHub


steveloughran commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1992515204

   You're right, there is no rename; copy is all there is. So that is not 
available yet? Hmmm. This isn't ready for production yet, is it? Let us keep it 
in trunk for now. The other strategy would be to do a feature branch for it, 
which has mixed benefits. Good: isolated from other work. Bad: isolated from 
other work. So far the changes are minimal enough that it is not a problem.
   
   Separately, I am working on a bulk delete API targeting Iceberg and similar, 
where the caller can generate a bulk delete call across the bucket; currently 
in S3AFS we only do bulk deletes down a "directory tree", either in delete or 
incrementally during rename(). In both of these cases there are already no 
guarantees that your system will be left in a nice state if you don't have the 
permissions to do things. 
   
   Regarding testing: when you think it is ready for others to play with, a 
section in testing.md on how to get set up for this would be good. I don't 
personally have plans for that; maybe I could persuade colleagues. I tried to 
test a lot of the other corner cases.





[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17825819#comment-17825819
 ] 

ASF GitHub Bot commented on HADOOP-19066:
-

virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992509237

   Something seems odd. This test overrides endpoint/region configs so setting 
any endpoint/region should have made no difference:
   
   ```
     @Test
     public void testCentralEndpointAndNullRegionFipsWithCRUD() throws Throwable {
       describe("Access the test bucket using central endpoint and"
           + " null region and fips enabled, perform file system CRUD operations");
       final Configuration conf = getConfiguration();

       final Configuration newConf = new Configuration(conf);

       removeBaseAndBucketOverrides(
           newConf,
           ENDPOINT,
           AWS_REGION,
           FIPS_ENDPOINT);

       newConf.set(ENDPOINT, CENTRAL_ENDPOINT);
       newConf.setBoolean(FIPS_ENDPOINT, true);

       newFS = new S3AFileSystem();
       newFS.initialize(getFileSystem().getUri(), newConf);

       assertOpsUsingNewFs();
     }
   ```
   
   I tested using these settings and there is no difference in behaviour 
because the test overrides base and bucket configs for endpoint/region.
   
   I tried:
   1. endpoint: us-west-2, region: unset
   2. endpoint: central, region: unset
   3. endpoint: unset, region: unset
   
   From the stacktrace from Jira:
   ```
   [ERROR] Tests run: 18, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 56.26 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AEndpointRegion
   [ERROR] testCentralEndpointAndNullRegionFipsWithCRUD(org.apache.hadoop.fs.s3a.ITestS3AEndpointRegion)  Time elapsed: 4.821 s  <<< ERROR!
   java.net.UnknownHostException: getFileStatus on s3a://stevel-london/test/testCentralEndpointAndNullRegionFipsWithCRUD/srcdir: software.amazon.awssdk.core.exception.SdkClientException: Received an UnknownHostException when attempting to interact with a service. See cause for the exact endpoint that is failing to resolve. If this is happening on an endpoint that previously worked, there may be a network connectivity issue or your DNS cache could be storing endpoints for too long.: software.amazon.awssdk.core.exception.SdkClientException: Received an UnknownHostException when attempting to interact with a service. See cause for the exact endpoint that is failing to resolve. If this is happening on an endpoint that previously worked, there may be a network connectivity issue or your DNS cache could be storing endpoints for too long.: stevel-london.s3-fips.eu-west-2.amazonaws.com
       at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
       at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
       at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
       at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
       at org.apache.hadoop.fs.s3a.impl.ErrorTranslation.wrapWithInnerIOE(ErrorTranslation.java:182)
       at org.apache.hadoop.fs.s3a.impl.ErrorTranslation.maybeExtractIOException(ErrorTranslation.java:152)
       at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:207)
       at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:155)
       at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4066)
       at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3922)
       at org.apache.hadoop.fs.s3a.S3AFileSystem$MkdirOperationCallbacksImpl.probePathStatus(S3AFileSystem.java:3794)
       at org.apache.hadoop.fs.s3a.impl.MkdirOperation.probePathStatusOrNull(MkdirOperation.java:173)
       at org.apache.hadoop.fs.s3a.impl.MkdirOperation.getPathStatusExpectingDir(MkdirOperation.java:194)
       at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:108)
       at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:57)
       at org.apache.hadoop.fs.s3a.impl.ExecutingStoreOperation.apply(ExecutingStoreOperation.java:76)
       at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
       at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
       at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
       at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2707)
       at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2726)
       at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:3766)
       at 

Re: [PR] HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint [hadoop]

2024-03-12 Thread via GitHub


virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992509237

   Something seems odd. This test overrides endpoint/region configs so setting 
any endpoint/region should have made no difference:
   
   ```
 @Test
 public void testCentralEndpointAndNullRegionFipsWithCRUD() throws Throwable {
   describe("Access the test bucket using central endpoint and"
       + " null region and fips enabled, perform file system CRUD operations");
   final Configuration conf = getConfiguration();

   final Configuration newConf = new Configuration(conf);

   removeBaseAndBucketOverrides(
       newConf,
       ENDPOINT,
       AWS_REGION,
       FIPS_ENDPOINT);

   newConf.set(ENDPOINT, CENTRAL_ENDPOINT);
   newConf.setBoolean(FIPS_ENDPOINT, true);

   newFS = new S3AFileSystem();
   newFS.initialize(getFileSystem().getUri(), newConf);

   assertOpsUsingNewFs();
 }
   ```
   
   I tested using these settings and there is no difference in behaviour 
because the test overrides base and bucket configs for endpoint/region.
   
   I tried:
   1. endpoint: us-west-2, region: unset
   2. endpoint: central, region: unset
   3. endpoint: unset, region: unset
   
   From the stacktrace from Jira:
   ```
   [ERROR] Tests run: 18, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 
56.26 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AEndpointRegion
   [ERROR] 
testCentralEndpointAndNullRegionFipsWithCRUD(org.apache.hadoop.fs.s3a.ITestS3AEndpointRegion)
  Time elapsed: 4.821 s  <<< ERROR!
   java.net.UnknownHostException: getFileStatus on 
s3a://stevel-london/test/testCentralEndpointAndNullRegionFipsWithCRUD/srcdir: 
software.amazon.awssdk.core.exception.SdkClientException: Received an 
UnknownHostException when attempting to interact with a service. See cause for 
the exact endpoint that is failing to resolve. If this is happening on an 
endpoint that previously worked, there may be a network connectivity issue or 
your DNS cache could be storing endpoints for too long.:
software.amazon.awssdk.core.exception.SdkClientException: Received an 
UnknownHostException when attempting to interact with a service. See cause for 
the exact endpoint that is failing to resolve. If this is happening on an 
endpoint that previously worked, there may be a network connectivity issue or 
your DNS cache could be storing endpoints for too long.: 
stevel-london.s3-fips.eu-west-2.amazonaws.com
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.fs.s3a.impl.ErrorTranslation.wrapWithInnerIOE(ErrorTranslation.java:182)
at 
org.apache.hadoop.fs.s3a.impl.ErrorTranslation.maybeExtractIOException(ErrorTranslation.java:152)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:207)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:155)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4066)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3922)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem$MkdirOperationCallbacksImpl.probePathStatus(S3AFileSystem.java:3794)
at 
org.apache.hadoop.fs.s3a.impl.MkdirOperation.probePathStatusOrNull(MkdirOperation.java:173)
at 
org.apache.hadoop.fs.s3a.impl.MkdirOperation.getPathStatusExpectingDir(MkdirOperation.java:194)
at 
org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:108)
at 
org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:57)
at 
org.apache.hadoop.fs.s3a.impl.ExecutingStoreOperation.apply(ExecutingStoreOperation.java:76)
at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2707)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2726)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:3766)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2494)
at 
org.apache.hadoop.fs.s3a.ITestS3AEndpointRegion.assertOpsUsingNewFs(ITestS3AEndpointRegion.java:461)
at 

[jira] [Assigned] (HADOOP-19088) upgrade to jersey-json 1.22.0

2024-03-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-19088:
---

Assignee: PJ Fanning

> upgrade to jersey-json 1.22.0
> -
>
> Key: HADOOP-19088
> URL: https://issues.apache.org/jira/browse/HADOOP-19088
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Tidies up support for Jettison and Jackson versions used by Hadoop



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19088) upgrade to jersey-json 1.22.0

2024-03-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19088:

Fix Version/s: 3.5.0

> upgrade to jersey-json 1.22.0
> -
>
> Key: HADOOP-19088
> URL: https://issues.apache.org/jira/browse/HADOOP-19088
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> Tidies up support for Jettison and Jackson versions used by Hadoop



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19088) upgrade to jersey-json 1.22.0

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825814#comment-17825814
 ] 

ASF GitHub Bot commented on HADOOP-19088:
-

steveloughran commented on PR #6585:
URL: https://github.com/apache/hadoop/pull/6585#issuecomment-1992505236

   in trunk, backport to 3.4 needed too




> upgrade to jersey-json 1.22.0
> -
>
> Key: HADOOP-19088
> URL: https://issues.apache.org/jira/browse/HADOOP-19088
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> Tidies up support for Jettison and Jackson versions used by Hadoop



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19088. Use jersey-json 1.22.0 [hadoop]

2024-03-12 Thread via GitHub


steveloughran merged PR #6585:
URL: https://github.com/apache/hadoop/pull/6585


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19088. Use jersey-json 1.22.0 [hadoop]

2024-03-12 Thread via GitHub


steveloughran commented on PR #6585:
URL: https://github.com/apache/hadoop/pull/6585#issuecomment-1992505236

   in trunk, backport to 3.4 needed too


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19088) upgrade to jersey-json 1.22.0

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825812#comment-17825812
 ] 

ASF GitHub Bot commented on HADOOP-19088:
-

steveloughran merged PR #6585:
URL: https://github.com/apache/hadoop/pull/6585




> upgrade to jersey-json 1.22.0
> -
>
> Key: HADOOP-19088
> URL: https://issues.apache.org/jira/browse/HADOOP-19088
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.3.6
>Reporter: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>
> Tidies up support for Jettison and Jackson versions used by Hadoop



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18950) upgrade avro to 1.11.3 due to CVE

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825807#comment-17825807
 ] 

ASF GitHub Bot commented on HADOOP-18950:
-

pjfanning commented on PR #4854:
URL: https://github.com/apache/hadoop/pull/4854#issuecomment-1992503197

   I don't know enough about Avro to hack it to work with shaded and non-shaded 
annotations.
   
   I thought all we cared about was how to support the internal Hadoop code and 
its internal use of Avro.
   
   If we need to support users who want to do their own Avro serialization of 
Hadoop classes, then I think we should abandon this PR. I think it would be far 
easier to just upgrade the actual Avro jars that Hadoop uses and give up on 
shading it.




> upgrade avro to 1.11.3 due to CVE
> -
>
> Key: HADOOP-18950
> URL: https://issues.apache.org/jira/browse/HADOOP-18950
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Xuze Yang
>Priority: Major
>  Labels: pull-request-available
>
> [https://nvd.nist.gov/vuln/detail/CVE-2023-39410]
> When deserializing untrusted or corrupted data, it is possible for a reader 
> to consume memory beyond the allowed constraints and thus lead to out of 
> memory on the system. This issue affects Java applications using Apache Avro 
> Java SDK up to and including 1.11.2. Users should update to apache-avro 
> version 1.11.3 which addresses this issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18950: shaded avro jar [hadoop]

2024-03-12 Thread via GitHub


pjfanning commented on PR #4854:
URL: https://github.com/apache/hadoop/pull/4854#issuecomment-1992503197

   I don't know enough about Avro to hack it to work with shaded and non-shaded 
annotations.
   
   I thought all we cared about was how to support the internal Hadoop code and 
its internal use of Avro.
   
   If we need to support users who want to do their own Avro serialization of 
Hadoop classes, then I think we should abandon this PR. I think it would be far 
easier to just upgrade the actual Avro jars that Hadoop uses and give up on 
shading it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] [ABFS] [Backport 3.4] Back Merging PRs from trunk to Branch 3.4 [hadoop]

2024-03-12 Thread via GitHub


steveloughran commented on PR #6611:
URL: https://github.com/apache/hadoop/pull/6611#issuecomment-1992502048

   I think what I will do here is check this branch out then cherrypick each PR 
in order, so that they are still two separate PRs. Do you see any problem with 
this?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19102) [ABFS]: FooterReadBufferSize should not be greater than readBufferSize

2024-03-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825790#comment-17825790
 ] 

Steve Loughran commented on HADOOP-19102:
-

What is the exception when it's wrong, and is there a way to disable it? This 
can go into the JIRA text for people who encounter it on 3.4.0.

> [ABFS]: FooterReadBufferSize should not be greater than readBufferSize
> --
>
> Key: HADOOP-19102
> URL: https://issues.apache.org/jira/browse/HADOOP-19102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> The method `optimisedRead` creates a buffer array of size `readBufferSize`. 
> If footerReadBufferSize is greater than readBufferSize, abfs will attempt to 
> read more data than the buffer array can hold, which causes an exception.
> Change: To avoid this, we will keep footerBufferSize = 
> min(readBufferSizeConfig, footerBufferSizeConfig)
>  
>  
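
A minimal sketch of the clamp described above (variable names are illustrative, 
not the actual patch):

{code}
// Sketch only: never let the footer read buffer exceed the stream's read buffer.
int effectiveFooterReadBufferSize =
    Math.min(readBufferSize, footerReadBufferSize);
{code}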



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19102) [ABFS]: FooterReadBufferSize should not be greater than readBufferSize

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825789#comment-17825789
 ] 

ASF GitHub Bot commented on HADOOP-19102:
-

steveloughran commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1522047711


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -54,9 +63,44 @@ public class ITestAbfsInputStreamReadFooter extends 
ITestAbfsInputStream {
   private static final int TEN = 10;
   private static final int TWENTY = 20;
 
+  private static ExecutorService executorService;
+
+  private static final int SIZE_256_KB = 256 * ONE_KB;
+
+  private static final Integer[] FILE_SIZES = {

Review Comment:
   This is going to make a slower test on remote runs. Does it really have to 
be this big, or is it possible to tune things so that they work with smaller 
files? Because if this is the restriction, then it is going to have to become a 
scale test, which will not be run as often.



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -322,28 +434,52 @@ private void testPartialReadWithNoData(final FileSystem 
fs,
 
   @Test
   public void testPartialReadWithSomeData() throws Exception {
-for (int i = 0; i <= 4; i++) {
-  for (int j = 0; j <= 2; j++) {
-int fileSize = (int) Math.pow(2, i) * 256 * ONE_KB;
-int footerReadBufferSize = (int) Math.pow(2, j) * 256 * ONE_KB;
-final AzureBlobFileSystem fs = getFileSystem(true,
-fileSize, footerReadBufferSize);
-String fileName = methodName.getMethodName() + i;
-byte[] fileContent = getRandomBytesArray(fileSize);
-Path testFilePath = createFileWithContent(fs, fileName, fileContent);
-testPartialReadWithSomeData(fs, testFilePath,
-fileSize - AbfsInputStream.FOOTER_SIZE, 
AbfsInputStream.FOOTER_SIZE,
-fileContent, footerReadBufferSize);
+int fileIdx = 0;
+List<Future> futureList = new ArrayList<>();
+for (int fileSize : FILE_SIZES) {
+  final int fileId = fileIdx++;
+  futureList.add(executorService.submit(() -> {
+try (AzureBlobFileSystem spiedFs = createSpiedFs(
+getRawConfiguration())) {
+  String fileName = methodName.getMethodName() + fileId;
+  byte[] fileContent = getRandomBytesArray(fileSize);
+  Path testFilePath = createFileWithContent(spiedFs, fileName,
+  fileContent);
+  testParialReadWithSomeData(spiedFs, fileSize, testFilePath,
+  fileContent);
+} catch (Exception ex) {
+  throw new RuntimeException(ex);
+}
+  }));
+}
+for (Future future : futureList) {
+  future.get();
+}
+  }
+
+  private void testParialReadWithSomeData(final AzureBlobFileSystem spiedFs,

Review Comment:
   nit: typo



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -71,22 +115,40 @@ public void 
testMultipleServerCallsAreMadeWhenTheConfIsFalse()
   private void testNumBackendCalls(boolean optimizeFooterRead)
   throws Exception {
 int fileIdx = 0;
-for (int i = 0; i <= 4; i++) {
-  for (int j = 0; j <= 2; j++) {
-int fileSize = (int) Math.pow(2, i) * 256 * ONE_KB;
-int footerReadBufferSize = (int) Math.pow(2, j) * 256 * ONE_KB;
-final AzureBlobFileSystem fs = getFileSystem(
-optimizeFooterRead, fileSize);
-Path testFilePath = createPathAndFileWithContent(
-fs, fileIdx++, fileSize);
+final List<Future> futureList = new ArrayList<>();
+for (int fileSize : FILE_SIZES) {
+  final int fileId = fileIdx++;
+  Future future = executorService.submit(() -> {
+try (AzureBlobFileSystem spiedFs = createSpiedFs(
+getRawConfiguration())) {
+  Path testPath = createPathAndFileWithContent(
+  spiedFs, fileId, fileSize);
+  testNumBackendCalls(spiedFs, optimizeFooterRead, fileSize,
+  testPath);
+} catch (Exception ex) {
+  throw new RuntimeException(ex);
+}
+  });
+  futureList.add(future);
+}
+for (Future future : futureList) {

Review Comment:
   I'm going to suggest that in org.apache.hadoop.util.functional.FutureIO you 
add a new awaitFutures(Collection) method, which iterates through the 
collection and calls awaitFuture on each. And yes, you should be passing down a 
timeout, as when JUnit times out it is less informative.
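   
   A rough sketch of such a helper, assuming the existing
   FutureIO.awaitFuture(future, timeout, unit) overload (the name
   awaitAllFutures and its placement in FutureIO are my assumption, not a
   committed API):
   
   ```java
   // Sketch only: await every future in the collection, propagating the
   // first failure and applying the per-future timeout.
   public static <T> void awaitAllFutures(Collection<Future<T>> futures,
       long timeout, TimeUnit unit)
       throws InterruptedIOException, IOException, TimeoutException {
     for (Future<T> future : futures) {
       awaitFuture(future, timeout, unit);  // existing FutureIO helper
     }
   }
   ```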



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -322,28 +434,52 @@ private void testPartialReadWithNoData(final FileSystem 
fs,
 
   @Test
   public void 

Re: [PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-12 Thread via GitHub


steveloughran commented on code in PR #6617:
URL: https://github.com/apache/hadoop/pull/6617#discussion_r1522047711


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -54,9 +63,44 @@ public class ITestAbfsInputStreamReadFooter extends 
ITestAbfsInputStream {
   private static final int TEN = 10;
   private static final int TWENTY = 20;
 
+  private static ExecutorService executorService;
+
+  private static final int SIZE_256_KB = 256 * ONE_KB;
+
+  private static final Integer[] FILE_SIZES = {

Review Comment:
   This is going to make a slower test on remote runs. Does it really have to 
be this big, or is it possible to tune things so that they work with smaller 
files? Because if this is the restriction, then it is going to have to become a 
scale test, which will not be run as often.



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -322,28 +434,52 @@ private void testPartialReadWithNoData(final FileSystem 
fs,
 
   @Test
   public void testPartialReadWithSomeData() throws Exception {
-for (int i = 0; i <= 4; i++) {
-  for (int j = 0; j <= 2; j++) {
-int fileSize = (int) Math.pow(2, i) * 256 * ONE_KB;
-int footerReadBufferSize = (int) Math.pow(2, j) * 256 * ONE_KB;
-final AzureBlobFileSystem fs = getFileSystem(true,
-fileSize, footerReadBufferSize);
-String fileName = methodName.getMethodName() + i;
-byte[] fileContent = getRandomBytesArray(fileSize);
-Path testFilePath = createFileWithContent(fs, fileName, fileContent);
-testPartialReadWithSomeData(fs, testFilePath,
-fileSize - AbfsInputStream.FOOTER_SIZE, 
AbfsInputStream.FOOTER_SIZE,
-fileContent, footerReadBufferSize);
+int fileIdx = 0;
+List<Future> futureList = new ArrayList<>();
+for (int fileSize : FILE_SIZES) {
+  final int fileId = fileIdx++;
+  futureList.add(executorService.submit(() -> {
+try (AzureBlobFileSystem spiedFs = createSpiedFs(
+getRawConfiguration())) {
+  String fileName = methodName.getMethodName() + fileId;
+  byte[] fileContent = getRandomBytesArray(fileSize);
+  Path testFilePath = createFileWithContent(spiedFs, fileName,
+  fileContent);
+  testParialReadWithSomeData(spiedFs, fileSize, testFilePath,
+  fileContent);
+} catch (Exception ex) {
+  throw new RuntimeException(ex);
+}
+  }));
+}
+for (Future future : futureList) {
+  future.get();
+}
+  }
+
+  private void testParialReadWithSomeData(final AzureBlobFileSystem spiedFs,

Review Comment:
   nit: typo



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -71,22 +115,40 @@ public void 
testMultipleServerCallsAreMadeWhenTheConfIsFalse()
   private void testNumBackendCalls(boolean optimizeFooterRead)
   throws Exception {
 int fileIdx = 0;
-for (int i = 0; i <= 4; i++) {
-  for (int j = 0; j <= 2; j++) {
-int fileSize = (int) Math.pow(2, i) * 256 * ONE_KB;
-int footerReadBufferSize = (int) Math.pow(2, j) * 256 * ONE_KB;
-final AzureBlobFileSystem fs = getFileSystem(
-optimizeFooterRead, fileSize);
-Path testFilePath = createPathAndFileWithContent(
-fs, fileIdx++, fileSize);
+final List<Future> futureList = new ArrayList<>();
+for (int fileSize : FILE_SIZES) {
+  final int fileId = fileIdx++;
+  Future future = executorService.submit(() -> {
+try (AzureBlobFileSystem spiedFs = createSpiedFs(
+getRawConfiguration())) {
+  Path testPath = createPathAndFileWithContent(
+  spiedFs, fileId, fileSize);
+  testNumBackendCalls(spiedFs, optimizeFooterRead, fileSize,
+  testPath);
+} catch (Exception ex) {
+  throw new RuntimeException(ex);
+}
+  });
+  futureList.add(future);
+}
+for (Future future : futureList) {

Review Comment:
   I'm going to suggest that in org.apache.hadoop.util.functional.FutureIO you 
add a new awaitFutures(Collection) method, which iterates through the 
collection and calls awaitFuture on each. And yes, you should be passing down a 
timeout, as when JUnit times out it is less informative.



##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/ITestAbfsInputStreamReadFooter.java:
##
@@ -322,28 +434,52 @@ private void testPartialReadWithNoData(final FileSystem 
fs,
 
   @Test
   public void testPartialReadWithSomeData() throws Exception {
-for (int i = 0; i <= 4; i++) {
-  for (int j = 0; j <= 2; j++) {
-int fileSize = (int) Math.pow(2, i) * 256 * ONE_KB;
-int footerReadBufferSize = (int) Math.pow(2, j) * 256 * 

[jira] [Commented] (HADOOP-19108) S3 Express: document use

2024-03-12 Thread Dongjoon Hyun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825788#comment-17825788
 ] 

Dongjoon Hyun commented on HADOOP-19108:


Is this targeting Apache Hadoop 3.4.1?

> S3 Express: document use
> 
>
> Key: HADOOP-19108
> URL: https://issues.apache.org/jira/browse/HADOOP-19108
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>
> The 3.4.0 release doesn't explicitly cover S3 Express.
> Its support is automatic
> * library handles it
> * hadoop shell commands know that there may be "missing" dirs in treewalks 
> due to in-flight uploads
> * s3afs automatically switches to deleting pending uploads in delete(dir) 
> call.
> we just need to provide a summary of features, how to probe etc.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19102) [ABFS]: FooterReadBufferSize should not be greater than readBufferSize

2024-03-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19102:

Affects Version/s: 3.4.0

> [ABFS]: FooterReadBufferSize should not be greater than readBufferSize
> --
>
> Key: HADOOP-19102
> URL: https://issues.apache.org/jira/browse/HADOOP-19102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> The method `optimisedRead` creates a buffer array of size `readBufferSize`. 
> If footerReadBufferSize is greater than readBufferSize, abfs will attempt to 
> read more data than the buffer array can hold, which causes an exception.
> Change: To avoid this, we will keep footerBufferSize = 
> min(readBufferSizeConfig, footerBufferSizeConfig)
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19102) [ABFS]: FooterReadBufferSize should not be greater than readBufferSize

2024-03-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19102:

Fix Version/s: (was: 3.4.0)
   (was: 3.5.0)

> [ABFS]: FooterReadBufferSize should not be greater than readBufferSize
> --
>
> Key: HADOOP-19102
> URL: https://issues.apache.org/jira/browse/HADOOP-19102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> The method `optimisedRead` creates a buffer array of size `readBufferSize`. 
> If footerReadBufferSize is greater than readBufferSize, abfs will attempt to 
> read more data than the buffer array can hold, which causes an exception.
> Change: To avoid this, we will keep footerBufferSize = 
> min(readBufferSizeConfig, footerBufferSizeConfig)
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19102) [ABFS]: FooterReadBufferSize should not be greater than readBufferSize

2024-03-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825785#comment-17825785
 ] 

Steve Loughran commented on HADOOP-19102:
-

[~pranavsaxena] can I remind you to set the release you want this to get into 
in Target Version, not Fix Version? Fix Version is only used for branches where 
the fix has been merged in; it is used for release note generation.

> [ABFS]: FooterReadBufferSize should not be greater than readBufferSize
> --
>
> Key: HADOOP-19102
> URL: https://issues.apache.org/jira/browse/HADOOP-19102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
>
> The method `optimisedRead` creates a buffer array of size `readBufferSize`. 
> If footerReadBufferSize is greater than readBufferSize, abfs will attempt to 
> read more data than the buffer array can hold, which causes an exception.
> Change: To avoid this, we will keep footerBufferSize = 
> min(readBufferSizeConfig, footerBufferSizeConfig)
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18950: shaded avro jar [hadoop]

2024-03-12 Thread via GitHub


steveloughran commented on code in PR #4854:
URL: https://github.com/apache/hadoop/pull/4854#discussion_r1522032834


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java:
##
@@ -27,7 +27,7 @@
 import java.util.Optional;
 import java.util.regex.Pattern;
 
-import org.apache.avro.reflect.Stringable;
+import org.apache.hadoop.thirdparty.avro.reflect.Stringable;

Review Comment:
   This could be dangerous, as we are saying that a public class can no longer 
be serialised through Avro.
   
   Do you think it will be possible for us to retain the unshaded annotation as 
well as adding the new one? And still have everything to work without Avro on 
the CP?
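   
   A minimal sketch of the dual-annotation idea (whether the unshaded
   annotation can stay optional on the runtime classpath is exactly the open
   question here):
   
   ```java
   // Sketch only: carry both annotations so either Avro copy can
   // reflect-serialize Path.
   @org.apache.avro.reflect.Stringable                    // unshaded, for user code
   @org.apache.hadoop.thirdparty.avro.reflect.Stringable  // shaded, used internally
   public class Path implements Comparable<Path>, Serializable {
     // ...
   }
   ```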



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AvroFSInput.java:
##
@@ -21,7 +21,7 @@
 import java.io.Closeable;
 import java.io.IOException;
 
-import org.apache.avro.file.SeekableInput;
+import org.apache.hadoop.thirdparty.avro.file.SeekableInput;

Review Comment:
   again, this is a public class we don't use internally.
   
   Should we actually deprecate it? I don't know what uses it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18950) upgrade avro to 1.11.3 due to CVE

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825784#comment-17825784
 ] 

ASF GitHub Bot commented on HADOOP-18950:
-

steveloughran commented on code in PR #4854:
URL: https://github.com/apache/hadoop/pull/4854#discussion_r1522032834


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java:
##
@@ -27,7 +27,7 @@
 import java.util.Optional;
 import java.util.regex.Pattern;
 
-import org.apache.avro.reflect.Stringable;
+import org.apache.hadoop.thirdparty.avro.reflect.Stringable;

Review Comment:
   This could be dangerous, as we are saying that a public class can no longer 
be serialised through Avro.
   
   Do you think it will be possible for us to retain the unshaded annotation as 
well as adding the new one? And still have everything to work without Avro on 
the CP?



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AvroFSInput.java:
##
@@ -21,7 +21,7 @@
 import java.io.Closeable;
 import java.io.IOException;
 
-import org.apache.avro.file.SeekableInput;
+import org.apache.hadoop.thirdparty.avro.file.SeekableInput;

Review Comment:
   again, this is a public class we don't use internally.
   
   Should we actually deprecate it? I don't know what uses it.





> upgrade avro to 1.11.3 due to CVE
> -
>
> Key: HADOOP-18950
> URL: https://issues.apache.org/jira/browse/HADOOP-18950
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Xuze Yang
>Priority: Major
>  Labels: pull-request-available
>
> [https://nvd.nist.gov/vuln/detail/CVE-2023-39410]
> When deserializing untrusted or corrupted data, it is possible for a reader 
> to consume memory beyond the allowed constraints and thus lead to out of 
> memory on the system. This issue affects Java applications using Apache Avro 
> Java SDK up to and including 1.11.2. Users should update to apache-avro 
> version 1.11.3 which addresses this issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825781#comment-17825781
 ] 

ASF GitHub Bot commented on HADOOP-19066:
-

virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992431076

   rebasing both trunk and branch-3.4 before re-running the tests.




> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> --
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
> considers overriding endpoint and enabling fips as mutually exclusive, we 
> fail fast if fs.s3a.endpoint is set with fips support (details on 
> HADOOP-18975).
> Now, we no longer override SDK endpoint for central endpoint since we enable 
> cross region access (details on HADOOP-19044) but we would still fail fast if 
> endpoint is central and fips is enabled.
> Changes proposed:
>  * S3A to fail fast only if FIPS is enabled and non-central endpoint is 
> configured.
>  * Tests to ensure S3 bucket is accessible with default region us-east-2 with 
> cross region access (expected with central endpoint).
>  * Document FIPS support with central endpoint on connecting.html.
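
A minimal sketch of the fail-fast rule proposed above (the method name is 
illustrative; CENTRAL_ENDPOINT and FIPS_ENDPOINT are existing S3A constants):

{code}
// Sketch only: reject FIPS only when a non-central endpoint override is set.
private static void checkFipsCompatibleEndpoint(String endpoint, boolean fipsEnabled) {
  boolean nonCentralEndpoint = endpoint != null && !endpoint.isEmpty()
      && !CENTRAL_ENDPOINT.equals(endpoint);
  if (fipsEnabled && nonCentralEndpoint) {
    throw new IllegalArgumentException(
        "An endpoint cannot be set when " + FIPS_ENDPOINT + " is true");
  }
}
{code}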



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint [hadoop]

2024-03-12 Thread via GitHub


virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992431076

   rebasing both trunk and branch-3.4 before re-running the tests.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18950) upgrade avro to 1.11.3 due to CVE

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825778#comment-17825778
 ] 

ASF GitHub Bot commented on HADOOP-18950:
-

steveloughran commented on PR #4854:
URL: https://github.com/apache/hadoop/pull/4854#issuecomment-1992402992

   Let's do it in 3.4.1 after a 1.3.0 release, and make "we've tuned the 
packaging" a key change along with "we've fixed the bits steve broke".




> upgrade avro to 1.11.3 due to CVE
> -
>
> Key: HADOOP-18950
> URL: https://issues.apache.org/jira/browse/HADOOP-18950
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Xuze Yang
>Priority: Major
>  Labels: pull-request-available
>
> [https://nvd.nist.gov/vuln/detail/CVE-2023-39410]
> When deserializing untrusted or corrupted data, it is possible for a reader 
> to consume memory beyond the allowed constraints and thus lead to out of 
> memory on the system. This issue affects Java applications using Apache Avro 
> Java SDK up to and including 1.11.2. Users should update to apache-avro 
> version 1.11.3 which addresses this issue.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18950: shaded avro jar [hadoop]

2024-03-12 Thread via GitHub


steveloughran commented on PR #4854:
URL: https://github.com/apache/hadoop/pull/4854#issuecomment-1992402992

   Let's do it in 3.4.1 after a 1.3.0 release, and make "we've tuned the 
packaging" a key change along with "we've fixed the bits steve broke".


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825777#comment-17825777
 ] 

ASF GitHub Bot commented on HADOOP-19066:
-

steveloughran commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992389645

   Looking at my current settings, I've set the endpoint to london but left the 
region unset, making sure that the classic binding mechanism still works.
   
   {code}
   <property>
     <name>fs.s3a.bucket.stevel-london.endpoint</name>
     <value>${london.endpoint}</value>
   </property>

   <property>
     <name>X.fs.s3a.bucket.stevel-london.endpoint.region</name>
     <value>${london.region}</value>
   </property>
   {code}
   




> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> --
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
> considers overriding endpoint and enabling fips as mutually exclusive, we 
> fail fast if fs.s3a.endpoint is set with fips support (details on 
> HADOOP-18975).
> Now, we no longer override SDK endpoint for central endpoint since we enable 
> cross region access (details on HADOOP-19044) but we would still fail fast if 
> endpoint is central and fips is enabled.
> Changes proposed:
>  * S3A to fail fast only if FIPS is enabled and non-central endpoint is 
> configured.
>  * Tests to ensure S3 bucket is accessible with default region us-east-2 with 
> cross region access (expected with central endpoint).
>  * Document FIPS support with central endpoint on connecting.html.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint [hadoop]

2024-03-12 Thread via GitHub


steveloughran commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992389645

   Looking at my current settings, I've set the endpoint to london but left the 
region unset, making sure that the classic binding mechanism still works.
   
   {code}
   <property>
     <name>fs.s3a.bucket.stevel-london.endpoint</name>
     <value>${london.endpoint}</value>
   </property>

   <property>
     <name>X.fs.s3a.bucket.stevel-london.endpoint.region</name>
     <value>${london.region}</value>
   </property>
   {code}
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825774#comment-17825774
 ] 

ASF GitHub Bot commented on HADOOP-19066:
-

virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992374906

   I will re-run the test suite and follow up.




> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> --
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
> considers overriding endpoint and enabling fips as mutually exclusive, we 
> fail fast if fs.s3a.endpoint is set with fips support (details on 
> HADOOP-18975).
> Now, we no longer override SDK endpoint for central endpoint since we enable 
> cross region access (details on HADOOP-19044) but we would still fail fast if 
> endpoint is central and fips is enabled.
> Changes proposed:
>  * S3A to fail fast only if FIPS is enabled and non-central endpoint is 
> configured.
>  * Tests to ensure S3 bucket is accessible with default region us-east-2 with 
> cross region access (expected with central endpoint).
>  * Document FIPS support with central endpoint on connecting.html.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint [hadoop]

2024-03-12 Thread via GitHub


virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992374906

   I will re-run the test suite and follow up.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825773#comment-17825773
 ] 

ASF GitHub Bot commented on HADOOP-19066:
-

steveloughran commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992373059

   Not good on branch-3.4; we need a followup, I'm afraid. Leaving it in trunk 
rather than reverting for now, as the other tests all seem happy.




> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> --
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
> considers overriding endpoint and enabling fips as mutually exclusive, we 
> fail fast if fs.s3a.endpoint is set with fips support (details on 
> HADOOP-18975).
> Now, we no longer override SDK endpoint for central endpoint since we enable 
> cross region access (details on HADOOP-19044) but we would still fail fast if 
> endpoint is central and fips is enabled.
> Changes proposed:
>  * S3A to fail fast only if FIPS is enabled and non-central endpoint is 
> configured.
>  * Tests to ensure S3 bucket is accessible with default region us-east-2 with 
> cross region access (expected with central endpoint).
>  * Document FIPS support with central endpoint on connecting.html.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint [hadoop]

2024-03-12 Thread via GitHub


steveloughran commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992373059

   Not good on branch-3.4; we need a followup, I'm afraid. Leaving it in trunk 
rather than reverting for now, as the other tests all seem happy.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825771#comment-17825771
 ] 

Steve Loughran commented on HADOOP-19066:
-

Afraid things break for me with a test bucket set up for S3 london. Full stack 
trace below. I'm not going to revert, but we will need a followup... I won't 
cherrypick to branch-3.4 until then.

{code}
[INFO] Running org.apache.hadoop.fs.s3a.ITestS3AEndpointRegion
[ERROR] Tests run: 18, Failures: 0, Errors: 1, Skipped: 1, Time elapsed: 56.26 
s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AEndpointRegion
[ERROR] 
testCentralEndpointAndNullRegionFipsWithCRUD(org.apache.hadoop.fs.s3a.ITestS3AEndpointRegion)
  Time elapsed: 4.821 s  <<< ERROR!
java.net.UnknownHostException: getFileStatus on 
s3a://stevel-london/test/testCentralEndpointAndNullRegionFipsWithCRUD/srcdir: 
software.amazon.awssdk.core.exception.SdkClientException: Received an 
UnknownHostException when attempting to interact with a service. See cause for 
the exact endpoint that is failing to resolve. If this is happening on an 
endpoint that previously worked, there may be a network connectivity issue or 
your DNS cache could be storing endpoints for too long.:
software.amazon.awssdk.core.exception.SdkClientException: Received an 
UnknownHostException when attempting to interact with a service. See cause for 
the exact endpoint that is failing to resolve. If this is happening on an 
endpoint that previously worked, there may be a network connectivity issue or 
your DNS cache could be storing endpoints for too long.: 
stevel-london.s3-fips.eu-west-2.amazonaws.com
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at 
org.apache.hadoop.fs.s3a.impl.ErrorTranslation.wrapWithInnerIOE(ErrorTranslation.java:182)
at 
org.apache.hadoop.fs.s3a.impl.ErrorTranslation.maybeExtractIOException(ErrorTranslation.java:152)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:207)
at 
org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:155)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4066)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3922)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem$MkdirOperationCallbacksImpl.probePathStatus(S3AFileSystem.java:3794)
at 
org.apache.hadoop.fs.s3a.impl.MkdirOperation.probePathStatusOrNull(MkdirOperation.java:173)
at 
org.apache.hadoop.fs.s3a.impl.MkdirOperation.getPathStatusExpectingDir(MkdirOperation.java:194)
at 
org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:108)
at 
org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:57)
at 
org.apache.hadoop.fs.s3a.impl.ExecutingStoreOperation.apply(ExecutingStoreOperation.java:76)
at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
at 
org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2707)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2726)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:3766)
at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2494)
at 
org.apache.hadoop.fs.s3a.ITestS3AEndpointRegion.assertOpsUsingNewFs(ITestS3AEndpointRegion.java:461)
at 
org.apache.hadoop.fs.s3a.ITestS3AEndpointRegion.testCentralEndpointAndNullRegionFipsWithCRUD(ITestS3AEndpointRegion.java:454)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 

[jira] [Reopened] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-19066:
-

> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> --
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
> considers overriding endpoint and enabling fips as mutually exclusive, we 
> fail fast if fs.s3a.endpoint is set with fips support (details on 
> HADOOP-18975).
> Now, we no longer override SDK endpoint for central endpoint since we enable 
> cross region access (details on HADOOP-19044) but we would still fail fast if 
> endpoint is central and fips is enabled.
> Changes proposed:
>  * S3A to fail fast only if FIPS is enabled and non-central endpoint is 
> configured.
>  * Tests to ensure S3 bucket is accessible with default region us-east-2 with 
> cross region access (expected with central endpoint).
>  * Document FIPS support with central endpoint on connecting.html.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18910) ABFS: Adding Support for MD5 Hash based integrity verification of the request content during transport

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825768#comment-17825768
 ] 

ASF GitHub Bot commented on HADOOP-18910:
-

steveloughran commented on PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#issuecomment-1992362377

   Think it's time for the default for namespace.enabled to become true? I'd 
support that




> ABFS: Adding Support for MD5 Hash based integrity verification of the request 
> content during transport 
> ---
>
> Key: HADOOP-18910
> URL: https://issues.apache.org/jira/browse/HADOOP-18910
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Anuj Modi
>Assignee: Anuj Modi
>Priority: Major
>  Labels: pull-request-available
>
> Azure Storage Supports Content-MD5 Request Headers in Both Read and Append 
> APIs.
> Read: [Path - Read - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/read]
> Append: [Path - Update - REST API (Azure Storage Services) | Microsoft 
> Learn|https://learn.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/update]
> This change adds the client-side support for them. In the Read request, we 
> will send the appropriate header, in response to which the server will return 
> the MD5 hash of the data it sends back. On the client, we will tally this 
> with the MD5 hash computed from the data received.
> In the Append request, we will compute the MD5 hash of the data that we are 
> sending to the server and specify it in the appropriate header. The server, 
> on finding that header, will tally this with the MD5 hash it computes on the 
> data received.
> This whole checksum validation support is guarded behind a config, which is 
> disabled by default because with the use of "https" the integrity of data is 
> preserved anyway. It is introduced as an additional data integrity check, 
> which will have a performance impact as well.
> Users can decide if they want to enable this or not by setting the following 
> config to *"true"* or *"false"* respectively. *Config: 
> "fs.azure.enable.checksum.validation"*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18910: [ABFS] Adding Support for MD5 Hash based integrity verification of the request content during transport [hadoop]

2024-03-12 Thread via GitHub


steveloughran commented on PR #6069:
URL: https://github.com/apache/hadoop/pull/6069#issuecomment-1992362377

   I think it's time for the default for namespace.enabled to become true? I'd 
support that.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825765#comment-17825765
 ] 

ASF GitHub Bot commented on HADOOP-19066:
-

steveloughran commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992329398

   (testing cherrypick; if all is good will merge to 3.4.x)




> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> --
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
> considers overriding the endpoint and enabling FIPS as mutually exclusive, we 
> fail fast if fs.s3a.endpoint is set with FIPS support (details in 
> HADOOP-18975).
> Now, we no longer override the SDK endpoint for the central endpoint, since 
> we enable cross-region access (details in HADOOP-19044), but we would still 
> fail fast if the endpoint is central and FIPS is enabled.
> Changes proposed:
>  * S3A to fail fast only if FIPS is enabled and a non-central endpoint is 
> configured.
>  * Tests to ensure the S3 bucket is accessible with default region us-east-2 
> with cross-region access (expected with central endpoint).
>  * Document FIPS support with central endpoint in connecting.html.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19066:

Fix Version/s: 3.5.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> --
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.5.0
>
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
> considers overriding the endpoint and enabling FIPS as mutually exclusive, we 
> fail fast if fs.s3a.endpoint is set with FIPS support (details in 
> HADOOP-18975).
> Now, we no longer override the SDK endpoint for the central endpoint, since 
> we enable cross-region access (details in HADOOP-19044), but we would still 
> fail fast if the endpoint is central and FIPS is enabled.
> Changes proposed:
>  * S3A to fail fast only if FIPS is enabled and a non-central endpoint is 
> configured.
>  * Tests to ensure the S3 bucket is accessible with default region us-east-2 
> with cross-region access (expected with central endpoint).
>  * Document FIPS support with central endpoint in connecting.html.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint [hadoop]

2024-03-12 Thread via GitHub


steveloughran commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1992329398

   (testing cherrypick; if all is good will merge to 3.4.x)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825764#comment-17825764
 ] 

ASF GitHub Bot commented on HADOOP-19066:
-

steveloughran merged PR #6539:
URL: https://github.com/apache/hadoop/pull/6539




> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> --
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
> considers overriding the endpoint and enabling FIPS as mutually exclusive, we 
> fail fast if fs.s3a.endpoint is set with FIPS support (details in 
> HADOOP-18975).
> Now, we no longer override the SDK endpoint for the central endpoint, since 
> we enable cross-region access (details in HADOOP-19044), but we would still 
> fail fast if the endpoint is central and FIPS is enabled.
> Changes proposed:
>  * S3A to fail fast only if FIPS is enabled and a non-central endpoint is 
> configured.
>  * Tests to ensure the S3 bucket is accessible with default region us-east-2 
> with cross-region access (expected with central endpoint).
>  * Document FIPS support with central endpoint in connecting.html.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint [hadoop]

2024-03-12 Thread via GitHub


steveloughran merged PR #6539:
URL: https://github.com/apache/hadoop/pull/6539


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19108) S3 Express: document use

2024-03-12 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-19108:
---

 Summary: S3 Express: document use
 Key: HADOOP-19108
 URL: https://issues.apache.org/jira/browse/HADOOP-19108
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Steve Loughran


The 3.4.0 release doesn't explicitly cover S3 Express.

Its support is automatic:
* the library handles it
* hadoop shell commands know that there may be "missing" dirs in treewalks due 
to in-flight uploads
* s3afs automatically switches to deleting pending uploads in the delete(dir) 
call.

We just need to provide a summary of the features, how to probe for them, 
etc.; a sketch of such a probe follows.
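For illustration, the kind of probe such a summary could document. The capability string comes from HADOOP-18996 (quoted later in this digest); the bucket name and wrapper class are placeholders:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class S3ExpressProbe {
  public static boolean listingsMayBeInconsistent(Configuration conf)
      throws IOException {
    Path root = new Path("s3a://example-s3express-bucket/"); // placeholder
    FileSystem fs = root.getFileSystem(conf);
    // True when listings may transiently miss directories, e.g. due to
    // in-flight uploads; treewalking code should then tolerate that.
    return fs.hasPathCapability(root,
        "fs.capability.directory.listing.inconsistent");
  }
}
```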





--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18996) S3A to provide full support for S3 Express One Zone

2024-03-12 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-18996:

Description: 
HADOOP-18995 upgrades the SDK version which allows connecting to a s3 express 
one zone support. 

Complete support needs to be added to address tests that fail with s3 express 
one zone, additional tests, documentation etc. 

* hadoop-common path capability to indicate that treewalking may encounter 
missing dirs
* use this in treewalking code in shell, mapreduce FileInputFormat etc to not 
fail during treewalks
* extra path capability for s3express too.
* tests for this
* anything else

A filesystem can now be probed for inconsistent directory listings through 
{{fs.hasPathCapability(path, "fs.capability.directory.listing.inconsistent")}}

If true, then treewalking code SHOULD NOT report a failure if, when walking 
into a subdirectory, a list/getFileStatus on that directory raises a 
FileNotFoundExceptin.


  was:
HADOOP-18995 upgrades the SDK version, which allows connecting to S3 Express 
One Zone storage. 

Complete support needs to be added to address tests that fail with S3 Express 
One Zone, plus additional tests, documentation etc. 

* hadoop-common path capability to indicate that treewalking may encounter 
missing dirs
* use this in treewalking code in shell, mapreduce FileInputFormat etc. to not 
fail during treewalks
* extra path capability for s3express too.
* tests for this
* anything else

A filesystem can now be probed for inconsistent directory listings through 
{{fs.hasPathCapability(path, "fs.capability.directory.listing.inconsistent")}}

If true, then treewalking code SHOULD NOT report a failure if, when walking 
into a subdirectory, a list/getFileStatus on that directory raises a 
FileNotFoundException.



> S3A to provide full support for S3 Express One Zone
> ---
>
> Key: HADOOP-18996
> URL: https://issues.apache.org/jira/browse/HADOOP-18996
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.7-aws
>
>
> HADOOP-18995 upgrades the SDK version, which allows connecting to S3 Express 
> One Zone storage. 
> Complete support needs to be added to address tests that fail with S3 Express 
> One Zone, plus additional tests, documentation etc. 
> * hadoop-common path capability to indicate that treewalking may encounter 
> missing dirs
> * use this in treewalking code in shell, mapreduce FileInputFormat etc. to 
> not fail during treewalks
> * extra path capability for s3express too.
> * tests for this
> * anything else
> A filesystem can now be probed for inconsistent directory listings through 
> {{fs.hasPathCapability(path, "fs.capability.directory.listing.inconsistent")}}
> If true, then treewalking code SHOULD NOT report a failure if, when walking 
> into a subdirectory, a list/getFileStatus on that directory raises a 
> FileNotFoundException.
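To make the SHOULD NOT concrete, a hedged sketch of a tolerant treewalk; the capability string is taken from the text above, while the walker itself is hypothetical:

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class TolerantTreewalk {
  /** Hypothetical walker honouring the inconsistent-listing capability. */
  public static void walk(FileSystem fs, Path dir) throws IOException {
    boolean tolerant = fs.hasPathCapability(dir,
        "fs.capability.directory.listing.inconsistent");
    try {
      for (FileStatus status : fs.listStatus(dir)) {
        if (status.isDirectory()) {
          walk(fs, status.getPath());
        }
      }
    } catch (FileNotFoundException e) {
      if (!tolerant) {
        throw e; // classic stores: a vanished directory is still an error
      }
      // Inconsistent-listing stores: the directory disappeared mid-walk;
      // skip it rather than fail.
    }
  }
}
```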



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17383:Datanode current block token should come from active NameNode in HA mode [hadoop]

2024-03-12 Thread via GitHub


hadoop-yetus commented on PR #6562:
URL: https://github.com/apache/hadoop/pull/6562#issuecomment-1992228065

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  39m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  7s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  4s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/5/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 557 unchanged 
- 0 fixed = 560 total (was 557)  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  40m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 261m 31s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 414m 51s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6562 |
   | JIRA Issue | HDFS-17383 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux db44668f7452 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 955fdbb8b8c9c3d7bb7cd15e37f421acbcb694a3 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6562/5/testReport/ |
   | Max. process+thread count | 3047 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]

2024-03-12 Thread via GitHub


hadoop-yetus commented on PR #6314:
URL: https://github.com/apache/hadoop/pull/6314#issuecomment-1991971058

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 10s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  37m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  38m 15s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 19s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/13/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 5 new + 10 unchanged - 0 
fixed = 15 total (was 10)  |
   | -1 :x: |  mvnsite  |   0m 28s | 
[/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/13/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  37m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 158m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6314 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux 5c87cf08c1f1 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b2ae082f797efb8cc8b5a34d00f22aab947d662c |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/13/testReport/ |
   | Max. process+thread count 

Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]

2024-03-12 Thread via GitHub


hadoop-yetus commented on PR #6314:
URL: https://github.com/apache/hadoop/pull/6314#issuecomment-1991866091

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  19m 40s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 11s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/14/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 5 new + 10 unchanged - 0 
fixed = 15 total (was 10)  |
   | -1 :x: |  mvnsite  |   0m 16s | 
[/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/14/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 24s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 49s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  82m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/14/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6314 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux ad184893fe7d 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ab9ecdcb0f115de0a1222081ca52fb2ab1d6301b |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/14/testReport/ |
   | Max. process+thread count 

Re: [PR] MAPREDUCE-7402. fix mapreduce.task.io.sort.factor=1 lead to an infinite loop. [hadoop]

2024-03-12 Thread via GitHub


hadoop-yetus commented on PR #6622:
URL: https://github.com/apache/hadoop/pull/6622#issuecomment-1991831364

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 42s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   5m 44s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  88m 27s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6622/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6622 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f353da2d86a4 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 30c888e79489d73ffe367235561e0d89f9b29cce |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6622/3/testReport/ |
   | Max. process+thread count | 1645 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6622/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: 

Re: [PR] YARN-11660. Fix huge performance regression for SingleConstraintAppPlacementAllocator [hadoop]

2024-03-12 Thread via GitHub


slfan1989 commented on code in PR #6623:
URL: https://github.com/apache/hadoop/pull/6623#discussion_r1521601149


##
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/placement/SingleConstraintAppPlacementAllocator.java:
##
@@ -309,6 +309,10 @@ private void decreasePendingNumAllocation() {
 // Deduct pending #allocations by 1
 ResourceSizing sizing = schedulingRequest.getResourceSizing();
 sizing.setNumAllocations(sizing.getNumAllocations() - 1);
+
+appSchedulingInfo.decPendingResource(

Review Comment:
   one line may be better.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]

2024-03-12 Thread via GitHub


anmolanmol1234 commented on code in PR #6314:
URL: https://github.com/apache/hadoop/pull/6314#discussion_r1521449729


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -1411,6 +1448,97 @@ protected AccessTokenProvider getTokenProvider() {
 return tokenProvider;
   }
 
+  public AzureBlobFileSystem getMetricFilesystem() throws IOException {
+if (metricFs == null) {
+  try {
+Configuration metricConfig = abfsConfiguration.getRawConfiguration();
+String metricAccountKey = 
metricConfig.get(FS_AZURE_METRIC_ACCOUNT_KEY);
+final String abfsMetricUrl = metricConfig.get(FS_AZURE_METRIC_URI);
+if (abfsMetricUrl == null) {
+  return null;
+}
+metricConfig.set(FS_AZURE_ACCOUNT_KEY_PROPERTY_NAME, metricAccountKey);
+metricConfig.set(AZURE_CREATE_REMOTE_FILESYSTEM_DURING_INITIALIZATION, 
"false");
+URI metricUri;
+metricUri = new URI(FileSystemUriSchemes.ABFS_SCHEME, abfsMetricUrl, 
null, null, null);
+metricFs = (AzureBlobFileSystem) FileSystem.newInstance(metricUri, 
metricConfig);

Review Comment:
   The primary intention behind having a distinct configuration for the metric 
file system is to offer users the flexibility to choose where the additional 
metric calls are directed. Users can configure the system to send metrics 
either to the existing file system or to a separate file system based on their 
preferences. If a separate account is utilized for the metric URI, 
authentication may fail because we are currently employing the existing 
AbfsClient instance. 
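For illustration, a sketch of the opt-in being described. The constant names FS_AZURE_METRIC_ACCOUNT_KEY and FS_AZURE_METRIC_URI appear in the diff above, but the literal key strings and the values below are assumptions:

```java
import org.apache.hadoop.conf.Configuration;

// Assumed key strings: route ABFS metric traffic to a dedicated account.
// If the URI is unset, getMetricFilesystem() above returns null and no
// extra metric calls are made.
Configuration conf = new Configuration();
conf.set("fs.azure.metric.account.key", "<metric-account-key>"); // assumed key
conf.set("fs.azure.metric.uri",
    "metricsfs@metricaccount.dfs.core.windows.net");             // assumed key
```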



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]

2024-03-12 Thread via GitHub


anmolanmol1234 commented on code in PR #6314:
URL: https://github.com/apache/hadoop/pull/6314#discussion_r1521424714


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -1655,7 +1694,7 @@ void setIsNamespaceEnabled(final Boolean 
isNamespaceEnabled) {
* Getter for abfsCounters from AbfsClient.
* @return AbfsCounters instance.
*/
-  protected AbfsCounters getAbfsCounters() {
+  public AbfsCounters getAbfsCounters() {

Review Comment:
   This leads to problems as some AzureBlobFileSystem and AbfsConfiguration 
methods used in this class are not public and hence cannot be called directly 
in the services package



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]

2024-03-12 Thread via GitHub


anmolanmol1234 commented on code in PR #6314:
URL: https://github.com/apache/hadoop/pull/6314#discussion_r1521405518


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/services/AzureServiceErrorCode.java:
##
@@ -99,24 +104,22 @@ public static AzureServiceErrorCode 
getAzureServiceCode(int httpStatusCode, Stri
 return azureServiceErrorCode;
   }
 }
-
 return UNKNOWN;
   }
 
-  public static AzureServiceErrorCode getAzureServiceCode(int httpStatusCode, 
String errorCode, final String errorMessage) {
+  public static AzureServiceErrorCode getAzureServiceCode(int httpStatusCode, 
String errorCode, String errorMessage) {

Review Comment:
   This was done because there was a bug in the code: the exact error message 
that needed to be matched is contained in the first line of the string. 
Retained the final keyword.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18854) add options to disable range merging of vectored io

2024-03-12 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18854?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825659#comment-17825659
 ] 

Steve Loughran commented on HADOOP-18854:
-

Thanks, will close as done.

> add options to disable range merging of vectored io
> ---
>
> Key: HADOOP-18854
> URL: https://issues.apache.org/jira/browse/HADOOP-18854
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3
>Affects Versions: 3.3.5, 3.3.6
>Reporter: Steve Loughran
>Priority: Major
>
> I'm seeing test failures in my PARQUET-2171 PR because assertions about the 
> number of bytes read aren't holding - small files are being read, and the 
> vectored range merging is pulling in the whole file.
> ```
> [ERROR]   TestInputOutputFormat.testReadWriteWithCounter:338 bytestotal != 
> bytesread expected:<5510> but was:<11020>
> ```
> I think for Parquet I will add an option to disable vectored IO, but really 
> the filesystems which support it should allow merging to be disabled.
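For illustration, one shape that option could take with the existing S3A vectored-read tuning knobs; treat the option names and the zero-means-no-coalescing behaviour as assumptions rather than settled API:

```java
import org.apache.hadoop.conf.Configuration;

// Assumption: shrinking the seek/merge windows to zero stops adjacent
// ranges from being coalesced, so byte-read counters match exactly the
// ranges the application asked for.
Configuration conf = new Configuration();
conf.set("fs.s3a.vectored.read.min.seek.size", "0");
conf.set("fs.s3a.vectored.read.max.merged.size", "0");
```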



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] MAPREDUCE-7402. fix mapreduce.task.io.sort.factor=1 lead to an infinite loop. [hadoop]

2024-03-12 Thread via GitHub


hadoop-yetus commented on PR #6622:
URL: https://github.com/apache/hadoop/pull/6622#issuecomment-1991492005

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 15s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 16s | 
[/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6622/2/artifact/out/results-checkstyle-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-core.txt)
 |  
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core: 
The patch generated 1 new + 89 unchanged - 0 fixed = 90 total (was 89)  |
   | +1 :green_heart: |  mvnsite  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   5m 34s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  86m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6622/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6622 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux e87213e1045c 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ea7635ffaa001610007801913de7af90bc37cff3 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6622/2/testReport/ |
   | Max. process+thread count | 1560 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core 
U: 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core |
   | Console output | 

Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]

2024-03-12 Thread via GitHub


anujmodi2021 commented on code in PR #6314:
URL: https://github.com/apache/hadoop/pull/6314#discussion_r1521326382


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java:
##
@@ -160,6 +162,7 @@ public class AzureBlobFileSystem extends FileSystem
 
   /** Storing full path uri for better logging. */
   private URI fullPathUri;
+  private AzureBlobFileSystem metricFs = null;

Review Comment:
   I guess it's not used anymore?



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java:
##
@@ -108,6 +108,8 @@ public final class FileSystemConfigurations {
   public static final boolean DEFAULT_ENABLE_FLUSH = true;
   public static final boolean DEFAULT_DISABLE_OUTPUTSTREAM_FLUSH = true;
   public static final boolean DEFAULT_ENABLE_AUTOTHROTTLING = true;
+  public static final int DEFAULT_METRIC_IDLE_TIMEOUT_MS = 60 * 1000;

Review Comment:
   Use 60_000 for both values.



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java:
##
@@ -190,6 +196,8 @@ public final class ConfigurationKeys {
* character constraints are not satisfied. **/
   public static final String FS_AZURE_CLIENT_CORRELATIONID = 
"fs.azure.client.correlationid";
   public static final String FS_AZURE_TRACINGHEADER_FORMAT = 
"fs.azure.tracingheader.format";
+  public static final String FS_AZURE_TRACINGMETRICHEADER_FORMAT = 
"fs.azure.tracingmetricheader.format";

Review Comment:
   Only one of these is needed.



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java:
##
@@ -2211,4 +2211,20 @@ private void populateRenameRecoveryStatistics(
   abfsCounters.incrementCounter(METADATA_INCOMPLETE_RENAME_FAILURES, 1);
 }
   }
+
+  /**
+   * Sends a metric using the provided TracingContext.
+   *
+   * @param tracingContextMetric The TracingContext used for sending the 
metric.
+   * @throws AzureBlobFileSystemException If an error occurs while sending the 
metric.
+   *
+   * 
+   * This method retrieves filesystem properties using the specified 
TracingContext. The metrics are sent
+   * via this call in the header x-ms-feclient-metrics. As part of next 
iteration this additional call will be removed.
+   * 
+   */
+  public void sendMetric(TracingContext tracingContextMetric)

Review Comment:
   Not used, can be removed



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsClient.java:
##
@@ -1655,7 +1694,7 @@ void setIsNamespaceEnabled(final Boolean 
isNamespaceEnabled) {
* Getter for abfsCounters from AbfsClient.
* @return AbfsCounters instance.
*/
-  protected AbfsCounters getAbfsCounters() {
+  public AbfsCounters getAbfsCounters() {

Review Comment:
   Does it need to be protected only for usage in tests?
   If so, maybe we can move the tests into the same package (the services 
package inside test).



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/services/AzureServiceErrorCode.java:
##
@@ -99,24 +104,22 @@ public static AzureServiceErrorCode 
getAzureServiceCode(int httpStatusCode, Stri
 return azureServiceErrorCode;
   }
 }
-
 return UNKNOWN;
   }
 
-  public static AzureServiceErrorCode getAzureServiceCode(int httpStatusCode, 
String errorCode, final String errorMessage) {
+  public static AzureServiceErrorCode getAzureServiceCode(int httpStatusCode, 
String errorCode, String errorMessage) {

Review Comment:
   Why was final removed from the errorMessage String?
   Also why do we need to split the error message?



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -850,6 +867,18 @@ public TracingHeaderFormat getTracingHeaderFormat() {
 return getEnum(FS_AZURE_TRACINGHEADER_FORMAT, 
TracingHeaderFormat.ALL_ID_FORMAT);
   }
 
+  /**
+   * Enum config to allow user to pick format of x-ms-client-request-id header
+   * @return tracingContextFormat config if valid, else default ALL_ID_FORMAT
+   */
+  public TracingHeaderFormat getTracingMetricHeaderFormat() {

Review Comment:
   Not used, can be removed



##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java:
##
@@ -72,7 +72,7 @@ public final class FileSystemConfigurations {
   public static final boolean DEFAULT_AZURE_ENABLE_SMALL_WRITE_OPTIMIZATION = 
false;
   public static final int DEFAULT_READ_BUFFER_SIZE = 4 * ONE_MB;  // 4 MB
   public static final boolean DEFAULT_READ_SMALL_FILES_COMPLETELY = false;
-  public static final boolean DEFAULT_OPTIMIZE_FOOTER_READ = true;
+  public static final boolean DEFAULT_OPTIMIZE_FOOTER_READ = false;

Review Comment:
   We should keep it true in the PR.
   To run the test, maybe you can temporarily make it 

Re: [PR] YARN-11660. Fix huge performance regression for SingleConstraintAppPlacementAllocator [hadoop]

2024-03-12 Thread via GitHub


hadoop-yetus commented on PR #6623:
URL: https://github.com/apache/hadoop/pull/6623#issuecomment-1991401418

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 51s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  88m 49s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 25s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 171m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6623/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6623 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 5a9aac54ba70 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4eba7856a91f4b35a57b40e6cabb6ed3cb58d5c9 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6623/1/testReport/ |
   | Max. process+thread count | 926 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6623/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To 

Re: [PR] HDFS-17352. Add configuration to control whether DN delete this replica from disk when client requests a missing block [hadoop]

2024-03-12 Thread via GitHub


haiyang1987 commented on PR #6559:
URL: https://github.com/apache/hadoop/pull/6559#issuecomment-1991339659

   Hi @Hexiaoqiao, would you mind taking a look at it again? Thanks~


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17391. Adjust the checkpoint io buffer size to the chunk size. [hadoop]

2024-03-12 Thread via GitHub


Hexiaoqiao commented on PR #6594:
URL: https://github.com/apache/hadoop/pull/6594#issuecomment-1991324410

   Committed to trunk. Thanks @ThinkerLei.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17391. Adjust the checkpoint io buffer size to the chunk size. [hadoop]

2024-03-12 Thread via GitHub


Hexiaoqiao merged PR #6594:
URL: https://github.com/apache/hadoop/pull/6594


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] MAPREDUCE-7402. fix mapreduce.task.io.sort.factor=1 lead to an infinite loop. [hadoop]

2024-03-12 Thread via GitHub


KeeProMise commented on code in PR #6622:
URL: https://github.com/apache/hadoop/pull/6622#discussion_r1521225842


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMerger.java:
##
@@ -229,6 +232,14 @@ public void testInMemoryAndOnDiskMerger() throws Throwable 
{
 Assert.assertEquals(0, mergeManager.inMemoryMapOutputs.size());
 Assert.assertEquals(0, mergeManager.inMemoryMergedMapOutputs.size());
 Assert.assertEquals(0, mergeManager.onDiskMapOutputs.size());
+
+jobConf.set("mapreduce.task.io.sort.factor", "1");
+    thrown.expectMessage("Invalid value for mapreduce.task.io.sort.factor: 1,"
+        + " please set it to a number greater than 1");
+    thrown.expect(IllegalArgumentException.class);
+    new MergeManagerImpl(reduceId2, jobConf, fs, lda, Reporter.NULL, null,
+        null, null, null, null, null, null, new Progress(), new MROutputFiles());

Review Comment:
   done



##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMerger.java:
##
@@ -229,6 +232,14 @@ public void testInMemoryAndOnDiskMerger() throws Throwable 
{
 Assert.assertEquals(0, mergeManager.inMemoryMapOutputs.size());
 Assert.assertEquals(0, mergeManager.inMemoryMergedMapOutputs.size());
 Assert.assertEquals(0, mergeManager.onDiskMapOutputs.size());
+
+jobConf.set("mapreduce.task.io.sort.factor", "1");

Review Comment:
   done



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17422. Enhance the stability of the unit test TestDFSAdmin. [hadoop]

2024-03-12 Thread via GitHub


hadoop-yetus commented on PR #6621:
URL: https://github.com/apache/hadoop/pull/6621#issuecomment-1991182311

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 52s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 264m 26s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 418m 57s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6621/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6621 |
   | JIRA Issue | HDFS-17422 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux b820f6829fc4 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 68093bf49607dbe0405e79d1133c8c4361a0cb9a |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6621/2/testReport/ |
   | Max. process+thread count | 2768 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6621/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

Re: [PR] HDFS-17391: Adjust the checkpoint io buffer size to the chunk size [hadoop]

2024-03-12 Thread via GitHub


hadoop-yetus commented on PR #6594:
URL: https://github.com/apache/hadoop/pull/6594#issuecomment-1991164298

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 19s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 4 unchanged - 4 
fixed = 4 total (was 8)  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 12s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 197m 35s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 284m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6594/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6594 |
   | JIRA Issue | HDFS-17391 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux c2bf11a81112 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 65068f6fc48a714f873f4952464b9df0e1d158ab |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6594/8/testReport/ |
   | Max. process+thread count | 4734 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6594/8/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Commented] (HADOOP-19107) Drop support for HBase v1

2024-03-12 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825580#comment-17825580
 ] 

Ayush Saxena commented on HADOOP-19107:
---

Currently v2 is broken for me
{noformat}
[INFO] --- enforcer:3.0.0:enforce (depcheck) @ 
hadoop-yarn-server-timelineservice-hbase-client ---
[WARNING] Rule 1: org.apache.maven.plugins.enforcer.BannedDependencies failed 
with message:
Found Banned Dependency: org.slf4j:slf4j-log4j12:jar:1.7.25
Use 'mvn dependency:tree' to locate the source of the banned dependencies.
{noformat}
Tried "mvn clean install -DskipTests -Dhbase.profile=2.0", need some exclusion 
for sl4j somewhere I believe
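
If the banned slf4j-log4j12 is being pulled in transitively (for example via
the hbase-client artifacts), the usual fix is an exclusion on the offending
dependency in the module's pom.xml. A sketch only; which dependency actually
drags it in is an assumption until "mvn dependency:tree" confirms it:
{noformat}
<dependency>
  <groupId>org.apache.hbase</groupId>
  <artifactId>hbase-client</artifactId>
  <exclusions>
    <!-- Keep the banned SLF4J-to-log4j1 binding off the classpath. -->
    <exclusion>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
    </exclusion>
  </exclusions>
</dependency>
{noformat}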

> Drop support for HBase v1
> -
>
> Key: HADOOP-19107
> URL: https://issues.apache.org/jira/browse/HADOOP-19107
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Ayush Saxena
>Priority: Major
>
> Drop support for Hbase V1 and make building Hbase v2 default.
> Dev List:
> [https://lists.apache.org/thread/vb2gh5ljwncbrmqnk0oflb8ftdz64hhs]
> https://lists.apache.org/thread/o88hnm7q8n3b4bng81q14vsj3fbhfx5w



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19107) Drop support for HBase v1

2024-03-12 Thread Ayush Saxena (Jira)
Ayush Saxena created HADOOP-19107:
-

 Summary: Drop support for HBase v1
 Key: HADOOP-19107
 URL: https://issues.apache.org/jira/browse/HADOOP-19107
 Project: Hadoop Common
  Issue Type: Task
Reporter: Ayush Saxena


Drop support for Hbase V1 and make building Hbase v2 default.

Dev List:

[https://lists.apache.org/thread/vb2gh5ljwncbrmqnk0oflb8ftdz64hhs]

https://lists.apache.org/thread/o88hnm7q8n3b4bng81q14vsj3fbhfx5w



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825568#comment-17825568
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

hadoop-yetus commented on PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#issuecomment-1991044330

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 13s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   0m 29s | 
[/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6552/7/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  32m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 14s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 138m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6552/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6552 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux f80ee1d270a6 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 817f7cbc3bf21f013c61ff0fdabb9fadde02149f |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6552/7/testReport/ |
   | Max. process+thread count | 695 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6552/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

Re: [PR] HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS Authentication [hadoop]

2024-03-12 Thread via GitHub


hadoop-yetus commented on PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#issuecomment-1991044330

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  11m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 13s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | -1 :x: |  mvnsite  |   0m 29s | 
[/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6552/7/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  32m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 14s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 138m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6552/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6552 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux f80ee1d270a6 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 817f7cbc3bf21f013c61ff0fdabb9fadde02149f |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6552/7/testReport/ |
   | Max. process+thread count | 695 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6552/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

[PR] YARN-11660. Fix huge performance regression for SingleConstraintAppPlacementAllocator [hadoop]

2024-03-12 Thread via GitHub


zuston opened a new pull request, #6623:
URL: https://github.com/apache/hadoop/pull/6623

   ### Description of PR
   When using the `SingleConstraintAppPlacementAllocator` with scheduling 
requests in our internal cluster, I found a huge performance regression in the 
`allocateAvgTime` metric.
   
   After digging into it, I found a dangerous bug: the async scheduling threads 
always re-check apps that have no pending resources, because the pending 
resource of the app's scheduling request descriptor is missing.
   
   
   ### How was this patch tested?
   
   Has been applied in our internal cluster
   
   ### For code changes:
   
   - [ ] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] MAPREDUCE-7402. fix mapreduce.task.io.sort.factor=1 lead to an infinite loop. [hadoop]

2024-03-12 Thread via GitHub


ayushtkn commented on code in PR #6622:
URL: https://github.com/apache/hadoop/pull/6622#discussion_r1521008831


##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMerger.java:
##
@@ -229,6 +232,14 @@ public void testInMemoryAndOnDiskMerger() throws Throwable 
{
 Assert.assertEquals(0, mergeManager.inMemoryMapOutputs.size());
 Assert.assertEquals(0, mergeManager.inMemoryMergedMapOutputs.size());
 Assert.assertEquals(0, mergeManager.onDiskMapOutputs.size());
+
+jobConf.set("mapreduce.task.io.sort.factor", "1");
+    thrown.expectMessage("Invalid value for mapreduce.task.io.sort.factor: 1,"
+        + " please set it to a number greater than 1");
+    thrown.expect(IllegalArgumentException.class);
+    new MergeManagerImpl(reduceId2, jobConf, fs, lda, Reporter.NULL, null,
+        null, null, null, null, null, null, new Progress(), new MROutputFiles());

Review Comment:
   We should use ``LambdaTestUtils.intercept`` rather than this ``thrown.`` 
thing



##
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/test/java/org/apache/hadoop/mapreduce/task/reduce/TestMerger.java:
##
@@ -229,6 +232,14 @@ public void testInMemoryAndOnDiskMerger() throws Throwable 
{
 Assert.assertEquals(0, mergeManager.inMemoryMapOutputs.size());
 Assert.assertEquals(0, mergeManager.inMemoryMergedMapOutputs.size());
 Assert.assertEquals(0, mergeManager.onDiskMapOutputs.size());
+
+jobConf.set("mapreduce.task.io.sort.factor", "1");

Review Comment:
   can you use ``IO_SORT_FACTOR`` rather than hardcoding the config value



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825549#comment-17825549
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

anmolanmol1234 commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1503721707


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemChooseSAS.java:
##
@@ -128,11 +130,18 @@ public void testOnlyFixedTokenConfigured() throws 
Exception {
 try (AzureBlobFileSystem newTestFs = (AzureBlobFileSystem)
 FileSystem.newInstance(testAbfsConfig.getRawConfiguration())) {
 
-  // Asserting that account SAS is used as both filesystem and blob level 
operations succeed.
-  newTestFs.getFileStatus(new Path("/"));
-  Path testPath = new Path("/testCorrectSASToken");
-  newTestFs.create(testPath).close();
-  newTestFs.delete(new Path("/"), true);
+  // Asserting that FixedSASTokenProvider is used.
+  Assertions.assertThat(testAbfsConfig.getSASTokenProvider())
+  .describedAs("Custom SASTokenProvider Class must be used")
+  .isInstanceOf(FixedSASTokenProvider.class);
+
+  // Assert that Account SAS is used and only read operations are 
permitted.

Review Comment:
   Why was create passing in the last test case, and why would it give an 
Access Denied exception now?





> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> 
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for Fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS Token in the 
> account settings files itself. Ideally, this should be used with SAS Tokens 
> that are scoped at a container or account level (Service or Account SAS), 
> which can be considered to be a constant for one account or container, over 
> multiple operations.
> The other method of using a SAS Token remains valid as well, where a user 
> provides a custom implementation of the SASTokenProvider interface, using 
> which a SAS Token are obtained.
> When an Account SAS Token is configured as the fixed SAS Token, and it is 
> used, it is ensured that operations are within the scope of the SAS Token.
> The code checks for whether the fixed token and the token provider class 
> implementation are configured. In the case of both being set, preference is 
> given to the custom SASTokenProvider implementation. It must be noted that if 
> such an implementation provides a SAS Token which has a lower scope than 
> Account SAS, some filesystem and service level operations might be out of 
> scope and may not succeed.
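
As an illustration of the fixed-token path described above, a minimal
client-side setup (a sketch only: the property keys are the ones named in this
description plus the standard ABFS auth-type key, and the token value is a
placeholder):
{noformat}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
// SAS auth must be selected for the fixed token to be consulted.
conf.set("fs.azure.account.auth.type", "SAS");
// With no custom fs.azure.sas.token.provider.type configured, the fixed
// token below is served through the in-house FixedSASTokenProvider.
conf.set("fs.azure.sas.fixed.token", "<account-or-container-scoped-sas>");
FileSystem fs = FileSystem.newInstance(
    new URI("abfs://container@account.dfs.core.windows.net/"), conf);
{noformat}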



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS Authentication [hadoop]

2024-03-12 Thread via GitHub


anmolanmol1234 commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1503721707


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemChooseSAS.java:
##
@@ -128,11 +130,18 @@ public void testOnlyFixedTokenConfigured() throws 
Exception {
 try (AzureBlobFileSystem newTestFs = (AzureBlobFileSystem)
 FileSystem.newInstance(testAbfsConfig.getRawConfiguration())) {
 
-  // Asserting that account SAS is used as both filesystem and blob level 
operations succeed.
-  newTestFs.getFileStatus(new Path("/"));
-  Path testPath = new Path("/testCorrectSASToken");
-  newTestFs.create(testPath).close();
-  newTestFs.delete(new Path("/"), true);
+  // Asserting that FixedSASTokenProvider is used.
+  Assertions.assertThat(testAbfsConfig.getSASTokenProvider())
+  .describedAs("Custom SASTokenProvider Class must be used")
+  .isInstanceOf(FixedSASTokenProvider.class);
+
+  // Assert that Account SAS is used and only read operations are 
permitted.

Review Comment:
   Why was create passing in the last test case, and why would it give an 
Access Denied exception now?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825548#comment-17825548
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

anmolanmol1234 commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1503719618


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemChooseSAS.java:
##
@@ -97,16 +99,16 @@ public void testBothProviderFixedTokenConfigured() throws 
Exception {
 // Creating a new file system with updated configs.
 try (AzureBlobFileSystem newTestFs = (AzureBlobFileSystem)
 FileSystem.newInstance(testAbfsConfig.getRawConfiguration())) {
-  TracingContext tracingContext = getTestTracingContext(newTestFs, true);
 
-  // Asserting that filesystem level operations fails with User Delegation 
SAS.
-  intercept(SASTokenProviderException.class, () -> {
-newTestFs.getAbfsStore().getFilesystemProperties(tracingContext);
-  });
+  // Asserting that MockDelegationSASTokenProvider is used.
+  Assertions.assertThat(testAbfsConfig.getSASTokenProvider())
+  .describedAs("Custom SASTokenProvider Class must be used")
+  .isInstanceOf(MockDelegationSASTokenProvider.class);
 
-  // Asserting that User delegation SAS token is otherwise valid and blob 
level operations succeed.
-  Path testPath = new Path("/testCorrectSASToken");
+  // Assert that User Delegation SAS is used and both read and write 
operations are permitted.
+  Path testPath = path(getMethodName());
   newTestFs.create(testPath).close();
+  newTestFs.open(testPath).close();

Review Comment:
   The testPath is already closed; what is the need for this additional statement?





> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> 
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for Fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS Token in the 
> account settings files itself. Ideally, this should be used with SAS Tokens 
> that are scoped at a container or account level (Service or Account SAS), 
> which can be considered to be a constant for one account or container, over 
> multiple operations.
> The other method of using a SAS Token remains valid as well, where a user 
> provides a custom implementation of the SASTokenProvider interface, using 
> which a SAS Token are obtained.
> When an Account SAS Token is configured as the fixed SAS Token, and it is 
> used, it is ensured that operations are within the scope of the SAS Token.
> The code checks for whether the fixed token and the token provider class 
> implementation are configured. In the case of both being set, preference is 
> given to the custom SASTokenProvider implementation. It must be noted that if 
> such an implementation provides a SAS Token which has a lower scope than 
> Account SAS, some filesystem and service level operations might be out of 
> scope and may not succeed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS Authentication [hadoop]

2024-03-12 Thread via GitHub


anmolanmol1234 commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1503719618


##
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemChooseSAS.java:
##
@@ -97,16 +99,16 @@ public void testBothProviderFixedTokenConfigured() throws 
Exception {
 // Creating a new file system with updated configs.
 try (AzureBlobFileSystem newTestFs = (AzureBlobFileSystem)
 FileSystem.newInstance(testAbfsConfig.getRawConfiguration())) {
-  TracingContext tracingContext = getTestTracingContext(newTestFs, true);
 
-  // Asserting that filesystem level operations fails with User Delegation 
SAS.
-  intercept(SASTokenProviderException.class, () -> {
-newTestFs.getAbfsStore().getFilesystemProperties(tracingContext);
-  });
+  // Asserting that MockDelegationSASTokenProvider is used.
+  Assertions.assertThat(testAbfsConfig.getSASTokenProvider())
+  .describedAs("Custom SASTokenProvider Class must be used")
+  .isInstanceOf(MockDelegationSASTokenProvider.class);
 
-  // Asserting that User delegation SAS token is otherwise valid and blob 
level operations succeed.
-  Path testPath = new Path("/testCorrectSASToken");
+  // Assert that User Delegation SAS is used and both read and write 
operations are permitted.
+  Path testPath = path(getMethodName());
   newTestFs.create(testPath).close();
+  newTestFs.open(testPath).close();

Review Comment:
   The testPath is already closed; what is the need for this additional statement?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825547#comment-17825547
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

anmolanmol1234 commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1520976609


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/FixedSASTokenProvider.java:
##
@@ -0,0 +1,46 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider;
+
+public class FixedSASTokenProvider implements SASTokenProvider {

Review Comment:
   javadocs for the class 





> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> 
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for Fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS Token in the 
> account settings files itself. Ideally, this should be used with SAS Tokens 
> that are scoped at a container or account level (Service or Account SAS), 
> which can be considered to be a constant for one account or container, over 
> multiple operations.
> The other method of using a SAS Token remains valid as well, where a user 
> provides a custom implementation of the SASTokenProvider interface, using 
> which a SAS Token are obtained.
> When an Account SAS Token is configured as the fixed SAS Token, and it is 
> used, it is ensured that operations are within the scope of the SAS Token.
> The code checks for whether the fixed token and the token provider class 
> implementation are configured. In the case of both being set, preference is 
> given to the custom SASTokenProvider implementation. It must be noted that if 
> such an implementation provides a SAS Token which has a lower scope than 
> Account SAS, some filesystem and service level operations might be out of 
> scope and may not succeed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS Authentication [hadoop]

2024-03-12 Thread via GitHub


anmolanmol1234 commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1520976609


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/FixedSASTokenProvider.java:
##
@@ -0,0 +1,46 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import java.io.IOException;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.azurebfs.extensions.SASTokenProvider;
+
+public class FixedSASTokenProvider implements SASTokenProvider {

Review Comment:
   javadocs for the class 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825543#comment-17825543
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

anmolanmol1234 commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1520974197


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java:
##
@@ -1308,10 +1308,9 @@ public void access(final Path path, final FsAction mode) 
throws IOException {
 
   /**
* Incrementing exists() calls from superclass for statistic collection.
-   *
* @param f source path.
* @return true if the path exists.
-   * @throws IOException
+   * @throws IOException if some issue in checking path

Review Comment:
   . at the end





> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation
> 
>
> Key: HADOOP-18516
> URL: https://issues.apache.org/jira/browse/HADOOP-18516
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.4.0
>Reporter: Sree Bhattacharyya
>Assignee: Anuj Modi
>Priority: Minor
>  Labels: pull-request-available
>
> This PR introduces a new configuration for Fixed SAS Tokens: 
> *"fs.azure.sas.fixed.token"*
> Using this new configuration, users can configure a fixed SAS Token in the 
> account settings files itself. Ideally, this should be used with SAS Tokens 
> that are scoped at a container or account level (Service or Account SAS), 
> which can be considered to be a constant for one account or container, over 
> multiple operations.
> The other method of using a SAS Token remains valid as well, where a user 
> provides a custom implementation of the SASTokenProvider interface, using 
> which a SAS Token are obtained.
> When an Account SAS Token is configured as the fixed SAS Token, and it is 
> used, it is ensured that operations are within the scope of the SAS Token.
> The code checks for whether the fixed token and the token provider class 
> implementation are configured. In the case of both being set, preference is 
> given to the custom SASTokenProvider implementation. It must be noted that if 
> such an implementation provides a SAS Token which has a lower scope than 
> Account SAS, some filesystem and service level operations might be out of 
> scope and may not succeed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS Authentication [hadoop]

2024-03-12 Thread via GitHub


anmolanmol1234 commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1520974197


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystem.java:
##
@@ -1308,10 +1308,9 @@ public void access(final Path path, final FsAction mode) 
throws IOException {
 
   /**
* Incrementing exists() calls from superclass for statistic collection.
-   *
* @param f source path.
* @return true if the path exists.
-   * @throws IOException
+   * @throws IOException if some issue in checking path

Review Comment:
   . at the end



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18516) [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider Implementation

2024-03-12 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825542#comment-17825542
 ] 

ASF GitHub Bot commented on HADOOP-18516:
-

anmolanmol1234 commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1520972493


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -976,33 +977,60 @@ public AccessTokenProvider getTokenProvider() throws 
TokenAccessProviderExceptio
 }
   }
 
+  /**
+   * Returns the SASTokenProvider implementation to be used to generate SAS 
token.
+   * Users can choose between a custom implementation of {@link 
SASTokenProvider}
+   * or an in house implementation {@link FixedSASTokenProvider}.
+   * For Custom implementation "fs.azure.sas.token.provider.type" needs to be 
provided.
+   * For Fixed SAS Token use "fs.azure.sas.fixed.token" needs to be 
provided.
+   * In case both are provided, Preference will be given to Custom 
implementation.
+   * Avoid using a custom tokenProvider implementation just to read the 
configured
+   * fixed token, as this could create confusion. Also,implementing the 
SASTokenProvider
+   * requires relying on the raw configurations. It is more stable to depend on
+   * the AbfsConfiguration with which a filesystem is initialized, and 
eliminate
+   * chances of dynamic modifications and spurious situations.
+   * @return sasTokenProvider object based on configurations provided
+   * @throws AzureBlobFileSystemException
+   */
   public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
 AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
 if (authType != AuthType.SAS) {
   throw new SASTokenProviderException(String.format(
-"Invalid auth type: %s is being used, expecting SAS", authType));
+  "Invalid auth type: %s is being used, expecting SAS.", authType));
 }
 
 try {
-  String configKey = FS_AZURE_SAS_TOKEN_PROVIDER_TYPE;
-  Class sasTokenProviderClass =
-  getTokenProviderClass(authType, configKey, null,
-  SASTokenProvider.class);
-
-  Preconditions.checkArgument(sasTokenProviderClass != null,
-  String.format("The configuration value for \"%s\" is invalid.", 
configKey));
-
-  SASTokenProvider sasTokenProvider = ReflectionUtils
-  .newInstance(sasTokenProviderClass, rawConfig);
-  Preconditions.checkArgument(sasTokenProvider != null,
-  String.format("Failed to initialize %s", sasTokenProviderClass));
-
-  LOG.trace("Initializing {}", sasTokenProviderClass.getName());
-  sasTokenProvider.initialize(rawConfig, accountName);
-  LOG.trace("{} init complete", sasTokenProviderClass.getName());
-  return sasTokenProvider;
+  Class customSasTokenProviderImplementation =
+  getTokenProviderClass(authType, FS_AZURE_SAS_TOKEN_PROVIDER_TYPE,
+  null, SASTokenProvider.class);
+  String configuredFixedToken = 
this.rawConfig.get(FS_AZURE_SAS_FIXED_TOKEN,
+  null);
+
+  Preconditions.checkArgument(
+  customSasTokenProviderImplementation != null || configuredFixedToken 
!= null,
+  "At least one of the \"%s\" and \"%s\" must be set.",
+  FS_AZURE_SAS_TOKEN_PROVIDER_TYPE, FS_AZURE_SAS_FIXED_TOKEN);
+
+  // Prefer Custom SASTokenProvider Implementation if configured.
+  if (customSasTokenProviderImplementation != null) {
+LOG.trace("Using Custom SASTokenProvider implementation because it is 
given precedence when it is set.");
+SASTokenProvider sasTokenProvider = ReflectionUtils.newInstance(
+customSasTokenProviderImplementation, rawConfig);
+Preconditions.checkArgument(sasTokenProvider != null,
+"Failed to initialize %s", customSasTokenProviderImplementation);
+
+LOG.trace("Initializing {}", 
customSasTokenProviderImplementation.getName());
+sasTokenProvider.initialize(rawConfig, accountName);
+LOG.trace("{} init complete", 
customSasTokenProviderImplementation.getName());
+return sasTokenProvider;
+  } else {
+LOG.trace("Using FixedSASTokenProvider implementation");
+FixedSASTokenProvider fixedSASTokenProvider = new 
FixedSASTokenProvider(configuredFixedToken);
+return fixedSASTokenProvider;
+  }
 } catch (Exception e) {
-  throw new TokenAccessProviderException("Unable to load SAS token 
provider class: " + e, e);
+  throw new TokenAccessProviderException(
+  "Unable to load SAS token provider class: " + e, e);

Review Comment:
   Use {} instead of concatenation.





> [ABFS]: Support fixed SAS token config in addition to Custom SASTokenProvider 
> Implementation

Re: [PR] HADOOP-18516: [ABFS][Authentication] Support Fixed SAS Token for ABFS Authentication [hadoop]

2024-03-12 Thread via GitHub


anmolanmol1234 commented on code in PR #6552:
URL: https://github.com/apache/hadoop/pull/6552#discussion_r1520972493


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java:
##
@@ -976,33 +977,60 @@ public AccessTokenProvider getTokenProvider() throws 
TokenAccessProviderExceptio
 }
   }
 
+  /**
+   * Returns the SASTokenProvider implementation to be used to generate SAS 
token.
+   * Users can choose between a custom implementation of {@link 
SASTokenProvider}
+   * or an in house implementation {@link FixedSASTokenProvider}.
+   * For Custom implementation "fs.azure.sas.token.provider.type" needs to be 
provided.
+   * For Fixed SAS Token use "fs.azure.sas.fixed.token" needs to be 
provided.
+   * In case both are provided, Preference will be given to Custom 
implementation.
+   * Avoid using a custom tokenProvider implementation just to read the 
configured
+   * fixed token, as this could create confusion. Also,implementing the 
SASTokenProvider
+   * requires relying on the raw configurations. It is more stable to depend on
+   * the AbfsConfiguration with which a filesystem is initialized, and 
eliminate
+   * chances of dynamic modifications and spurious situations.
+   * @return sasTokenProvider object based on configurations provided
+   * @throws AzureBlobFileSystemException
+   */
   public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemException {
 AuthType authType = getEnum(FS_AZURE_ACCOUNT_AUTH_TYPE_PROPERTY_NAME, 
AuthType.SharedKey);
 if (authType != AuthType.SAS) {
   throw new SASTokenProviderException(String.format(
-"Invalid auth type: %s is being used, expecting SAS", authType));
+  "Invalid auth type: %s is being used, expecting SAS.", authType));
 }
 
 try {
-  String configKey = FS_AZURE_SAS_TOKEN_PROVIDER_TYPE;
-  Class sasTokenProviderClass =
-  getTokenProviderClass(authType, configKey, null,
-  SASTokenProvider.class);
-
-  Preconditions.checkArgument(sasTokenProviderClass != null,
-  String.format("The configuration value for \"%s\" is invalid.", 
configKey));
-
-  SASTokenProvider sasTokenProvider = ReflectionUtils
-  .newInstance(sasTokenProviderClass, rawConfig);
-  Preconditions.checkArgument(sasTokenProvider != null,
-  String.format("Failed to initialize %s", sasTokenProviderClass));
-
-  LOG.trace("Initializing {}", sasTokenProviderClass.getName());
-  sasTokenProvider.initialize(rawConfig, accountName);
-  LOG.trace("{} init complete", sasTokenProviderClass.getName());
-  return sasTokenProvider;
+  Class customSasTokenProviderImplementation =
+  getTokenProviderClass(authType, FS_AZURE_SAS_TOKEN_PROVIDER_TYPE,
+  null, SASTokenProvider.class);
+  String configuredFixedToken = 
this.rawConfig.get(FS_AZURE_SAS_FIXED_TOKEN,
+  null);
+
+  Preconditions.checkArgument(
+  customSasTokenProviderImplementation != null || configuredFixedToken 
!= null,
+  "At least one of the \"%s\" and \"%s\" must be set.",
+  FS_AZURE_SAS_TOKEN_PROVIDER_TYPE, FS_AZURE_SAS_FIXED_TOKEN);
+
+  // Prefer Custom SASTokenProvider Implementation if configured.
+  if (customSasTokenProviderImplementation != null) {
+LOG.trace("Using Custom SASTokenProvider implementation because it is 
given precedence when it is set.");
+SASTokenProvider sasTokenProvider = ReflectionUtils.newInstance(
+customSasTokenProviderImplementation, rawConfig);
+Preconditions.checkArgument(sasTokenProvider != null,
+"Failed to initialize %s", customSasTokenProviderImplementation);
+
+LOG.trace("Initializing {}", 
customSasTokenProviderImplementation.getName());
+sasTokenProvider.initialize(rawConfig, accountName);
+LOG.trace("{} init complete", 
customSasTokenProviderImplementation.getName());
+return sasTokenProvider;
+  } else {
+LOG.trace("Using FixedSASTokenProvider implementation");
+FixedSASTokenProvider fixedSASTokenProvider = new 
FixedSASTokenProvider(configuredFixedToken);
+return fixedSASTokenProvider;
+  }
 } catch (Exception e) {
-  throw new TokenAccessProviderException("Unable to load SAS token 
provider class: " + e, e);
+  throw new TokenAccessProviderException(
+  "Unable to load SAS token provider class: " + e, e);

Review Comment:
   Use {} instead of concatenation.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
