[jira] [Commented] (HADOOP-19066) AWS SDK V2 - Enabling FIPS should be allowed with central endpoint

2024-03-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824646#comment-17824646
 ] 

ASF GitHub Bot commented on HADOOP-19066:
-

virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1985209550

   @ahmarsuhail @mukund-thakur could you please review this PR?




> AWS SDK V2 - Enabling FIPS should be allowed with central endpoint
> --
>
> Key: HADOOP-19066
> URL: https://issues.apache.org/jira/browse/HADOOP-19066
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.5.0, 3.4.1
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> FIPS support can be enabled by setting "fs.s3a.endpoint.fips". Since the SDK 
> considers overriding endpoint and enabling fips as mutually exclusive, we 
> fail fast if fs.s3a.endpoint is set with fips support (details on 
> HADOOP-18975).
> Now, we no longer override SDK endpoint for central endpoint since we enable 
> cross region access (details on HADOOP-19044) but we would still fail fast if 
> endpoint is central and fips is enabled.
> Changes proposed:
>  * S3A to fail fast only if FIPS is enabled and non-central endpoint is 
> configured.
>  * Tests to ensure S3 bucket is accessible with default region us-east-2 with 
> cross region access (expected with central endpoint).
>  * Document FIPS support with central endpoint on connecting.html.
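As an illustration of the proposed rule, here is a minimal sketch of the fail-fast check. Only the config keys fs.s3a.endpoint and fs.s3a.endpoint.fips come from the issue; the helper name, constant and exception are assumptions, not the actual S3A patch:

    // Sketch only: fail fast only when FIPS is enabled together with a
    // non-central endpoint override. The central endpoint (or no override)
    // stays allowed because the SDK endpoint is no longer overridden for it.
    private static final String CENTRAL_ENDPOINT = "s3.amazonaws.com";

    static void validateFipsAndEndpoint(Configuration conf) {   // org.apache.hadoop.conf.Configuration
      boolean fipsEnabled = conf.getBoolean("fs.s3a.endpoint.fips", false);
      String endpoint = conf.getTrimmed("fs.s3a.endpoint", "");
      boolean centralOrUnset = endpoint.isEmpty() || CENTRAL_ENDPOINT.equals(endpoint);
      if (fipsEnabled && !centralOrUnset) {
        throw new IllegalArgumentException(
            "fs.s3a.endpoint.fips cannot be combined with a non-central endpoint: " + endpoint);
      }
    }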



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19066. S3A: AWS SDK V2 - Enabling FIPS should be allowed with central endpoint [hadoop]

2024-03-07 Thread via GitHub


virajjasani commented on PR #6539:
URL: https://github.com/apache/hadoop/pull/6539#issuecomment-1985209550

   @ahmarsuhail @mukund-thakur could you please review this PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Add Shuyan Zhang to Committer List. [hadoop-site]

2024-03-07 Thread via GitHub


Hexiaoqiao commented on PR #52:
URL: https://github.com/apache/hadoop-site/pull/52#issuecomment-1985107577

   Committed. Thanks all.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Add Shuyan Zhang to Committer List. [hadoop-site]

2024-03-07 Thread via GitHub


Hexiaoqiao merged PR #52:
URL: https://github.com/apache/hadoop-site/pull/52


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19102) [ABFS]: FooterReadBufferSize should not be greater than readBufferSize

2024-03-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-19102:

Labels: pull-request-available  (was: )

> [ABFS]: FooterReadBufferSize should not be greater than readBufferSize
> --
>
> Key: HADOOP-19102
> URL: https://issues.apache.org/jira/browse/HADOOP-19102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.5.0
>
>
> The method `optimisedRead` creates a buffer array of size `readBufferSize`. 
> If footerReadBufferSize is greater than readBufferSize, abfs will attempt to 
> read more data than the buffer array can hold, which causes an exception.
> Change: To avoid this, we will keep footerBufferSize = 
> min(readBufferSizeConfig, footerBufferSizeConfig)
>  
>  
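As a worked illustration, the proposed change amounts to a single min() clamp. The variable names mirror the config values mentioned above; this is a sketch, not the actual ABFS code:

    // Sketch: never let the footer read buffer exceed the main read buffer,
    // so optimisedRead cannot try to read more than its buffer array can hold.
    int effectiveFooterReadBufferSize =
        Math.min(readBufferSizeConfig, footerReadBufferSizeConfig);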



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19102) [ABFS]: FooterReadBufferSize should not be greater than readBufferSize

2024-03-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824625#comment-17824625
 ] 

ASF GitHub Bot commented on HADOOP-19102:
-

saxenapranav opened a new pull request, #6617:
URL: https://github.com/apache/hadoop/pull/6617

   JIRA: https://issues.apache.org/jira/browse/HADOOP-19102
   
   The method `optimisedRead` creates a buffer array of size `readBufferSize`. 
If footerReadBufferSize is greater than readBufferSize, abfs will attempt to 
read more data than the buffer array can hold, which causes an exception.
   
   Change: To avoid this, we will keep footerBufferSize = 
min(readBufferSizeConfig, footerBufferSizeConfig)
   
   
   Also, as part of this PR, the tests within `ITestAbfsInputStreamReadFooter` 
have been improved. Some tests run multiple combinations, and a file was being 
created for every combination; the combinations also have to cover different 
fileSizes.
   The change: we spin up one thread per fileSize, and within each thread all 
the combinations for that particular fileSize run. This creates the file only 
once per fileSize, while the assertions for the different fileSizes run in 
parallel and make use of the hardware capability (see the sketch below).
   Improvement: on a 6-processor VM, running all tests of 
ITestAbfsInputStreamReadFooter took 8 min 47 sec on trunk and 7 min on the PR 
branch (each test method running one after another).
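A rough sketch of that per-fileSize parallelisation follows. The helpers createFileOfSize, combinationsFor and runFooterReadAssertions, and the TestCombination type, are assumptions rather than the actual test code:

    // One task per file size: the file is created once, then every
    // footer/read-buffer combination for that size runs inside the same task.
    // Imports assumed from java.util and java.util.concurrent.
    void runAllCombinationsInParallel(List<Integer> fileSizes) throws Exception {
      ExecutorService pool = Executors.newFixedThreadPool(fileSizes.size());
      List<Future<?>> futures = new ArrayList<>();
      for (int fileSize : fileSizes) {
        futures.add(pool.submit(() -> {
          Path testFile = createFileOfSize(fileSize);             // assumed helper
          for (TestCombination c : combinationsFor(fileSize)) {   // assumed helper
            runFooterReadAssertions(testFile, c);                 // assumed helper
          }
          return null;
        }));
      }
      for (Future<?> f : futures) {
        f.get();   // surfaces any assertion failure raised in a worker thread
      }
      pool.shutdown();
    }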
   




> [ABFS]: FooterReadBufferSize should not be greater than readBufferSize
> --
>
> Key: HADOOP-19102
> URL: https://issues.apache.org/jira/browse/HADOOP-19102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
> Fix For: 3.4.0, 3.5.0
>
>
> The method `optimisedRead` creates a buffer array of size `readBufferSize`. 
> If footerReadBufferSize is greater than readBufferSize, abfs will attempt to 
> read more data than the buffer array can hold, which causes an exception.
> Change: To avoid this, we will keep footerBufferSize = 
> min(readBufferSizeConfig, footerBufferSizeConfig)
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] HADOOP-19102. FooterReadBufferSize should not be greater than readBufferSize [hadoop]

2024-03-07 Thread via GitHub


saxenapranav opened a new pull request, #6617:
URL: https://github.com/apache/hadoop/pull/6617

   JIRA: https://issues.apache.org/jira/browse/HADOOP-19102
   
   The method `optimisedRead` creates a buffer array of size `readBufferSize`. 
If footerReadBufferSize is greater than readBufferSize, abfs will attempt to 
read more data than the buffer array can hold, which causes an exception.
   
   Change: To avoid this, we will keep footerBufferSize = 
min(readBufferSizeConfig, footerBufferSizeConfig)
   
   
   Also, as part of this PR, the tests within `ITestAbfsInputStreamReadFooter` 
have been improved. Some tests run multiple combinations, and a file was being 
created for every combination; the combinations also have to cover different 
fileSizes.
   The change: we spin up one thread per fileSize, and within each thread all 
the combinations for that particular fileSize run. This creates the file only 
once per fileSize, while the assertions for the different fileSizes run in 
parallel and make use of the hardware capability.
   Improvement: on a 6-processor VM, running all tests of 
ITestAbfsInputStreamReadFooter took 8 min 47 sec on trunk and 7 min on the PR 
branch (each test method running one after another).
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-19102) [ABFS]: FooterReadBufferSize should not be greater than readBufferSize

2024-03-07 Thread Pranav Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pranav Saxena updated HADOOP-19102:
---
Description: 
The method `optimisedRead` creates a buffer array of size `readBufferSize`. If 
footerReadBufferSize is greater than readBufferSize, abfs will attempt to read 
more data than the buffer array can hold, which causes an exception.

Change: To avoid this, we will keep footerBufferSize = 
min(readBufferSizeConfig, footerBufferSizeConfig)

 

 

  was:
The method `optimisedRead` creates a buffer array of size `readBufferSize`. If 
footerReadBufferSize is greater than readBufferSize, abfs will attempt to read 
more data than the buffer array can hold, which causes an exception.

Change: To avoid this, we will assign readBufferSize to footerReadBufferSize 
when footerReadBufferSize is larger than readBufferSize.

 

 


> [ABFS]: FooterReadBufferSize should not be greater than readBufferSize
> --
>
> Key: HADOOP-19102
> URL: https://issues.apache.org/jira/browse/HADOOP-19102
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Pranav Saxena
>Assignee: Pranav Saxena
>Priority: Major
> Fix For: 3.4.0, 3.5.0
>
>
> The method `optimisedRead` creates a buffer array of size `readBufferSize`. 
> If footerReadBufferSize is greater than readBufferSize, abfs will attempt to 
> read more data than the buffer array can hold, which causes an exception.
> Change: To avoid this, we will keep footerBufferSize = 
> min(readBufferSizeConfig, footerBufferSizeConfig)
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19085) Compatibility Benchmark over HCFS Implementations

2024-03-07 Thread Kai Zheng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824611#comment-17824611
 ] 

Kai Zheng commented on HADOOP-19085:


[~han.liu] Good work! Just a minor point: would you grep the PR code for "hdfs 
compatibility" and refine it a bit? Thanks!

> Compatibility Benchmark over HCFS Implementations
> -
>
> Key: HADOOP-19085
> URL: https://issues.apache.org/jira/browse/HADOOP-19085
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Han Liu
>Assignee: Han Liu
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDFS Compatibility Benchmark Design.pdf
>
>
> *Background:* Hadoop-Compatible File System (HCFS) is a core concept in the 
> big data storage ecosystem, providing unified interfaces and generally clear 
> semantics, and it has become the de facto standard for industry storage systems 
> to follow and conform with. There have been a series of HCFS implementations 
> in Hadoop, such as S3AFileSystem for Amazon's S3 Object Store, WASB for 
> Microsoft's Azure Blob Storage and the OSS connector for Alibaba Cloud Object 
> Storage, and more from storage service providers on their own.
> *Problems:* However, as indicated by introduction.md, there is no formal 
> suite to do a compatibility assessment of a file system for all such HCFS 
> implementations. Thus, whether the functionality is well accomplished and 
> meets the core compatibility expectations mainly relies on the service 
> provider's own report. Meanwhile, Hadoop is also developing, and new features 
> are continuously being contributed to HCFS interfaces for existing 
> implementations to follow and adopt, in which case Hadoop also needs a tool to 
> quickly assess whether these features are supported by a specific HCFS 
> implementation. Besides, the well-known hadoop command line tool or hdfs shell 
> is used to directly interact with an HCFS storage system, where most commands 
> correspond to specific HCFS interfaces and work well. Still, there are cases 
> that are complicated and may not work, like the expunge command. To check such 
> commands for an HCFS, we also need an approach to figure them out.
> *Proposal:* Accordingly, we propose to define a formal HCFS compatibility 
> benchmark and provide a corresponding tool to do the compatibility assessment 
> for an HCFS storage system. The benchmark and tool should consider both HCFS 
> interfaces and hdfs shell commands. Different scenarios require different 
> kinds of compatibility; for that reason, we could define different suites in 
> the benchmark.
> *Benefits:* We intend the benchmark and tool to be useful for both storage 
> providers and storage users. For end users, it can be used to evaluate the 
> compatibility level and determine if the storage system in question is 
> suitable for the required scenarios. For storage providers, it helps to 
> quickly generate an objective and reliable report about the core functions of 
> the storage service. As an instance, if the HCFS got 100% on a suite named 
> 'tpcds', it is demonstrated that all functions needed by a tpcds program have 
> been well achieved. It is also a guide indicating how storage service 
> abilities can map to HCFS interfaces, such as storage class on S3.
> Any thoughts? Comments and feedback are most welcome. Thanks in advance.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-03-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824610#comment-17824610
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

adnanhemani commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1517130806


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1624,4 +1624,21 @@ private Constants() {
* Value: {@value}.
*/
   public static final boolean DEFAULT_AWS_S3_CLASSLOADER_ISOLATION = true;
+
+  /**
+   * Flag {@value}
+   * to enable S3 Access Grants to control authorization to S3 data. More 
information:
+   * https://aws.amazon.com/s3/features/access-grants/
+   * and
+   * https://github.com/aws/aws-s3-accessgrants-plugin-java-v2/
+   */
+  public static final String AWS_S3_ACCESS_GRANTS_ENABLED = 
"fs.s3a.s3accessgrants.enabled";

Review Comment:
   Done.



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1624,4 +1624,21 @@ private Constants() {
* Value: {@value}.
*/
   public static final boolean DEFAULT_AWS_S3_CLASSLOADER_ISOLATION = true;
+
+  /**
+   * Flag {@value}
+   * to enable S3 Access Grants to control authorization to S3 data. More 
information:
+   * https://aws.amazon.com/s3/features/access-grants/
+   * and
+   * https://github.com/aws/aws-s3-accessgrants-plugin-java-v2/
+   */
+  public static final String AWS_S3_ACCESS_GRANTS_ENABLED = 
"fs.s3a.s3accessgrants.enabled";

Review Comment:
   Done.





> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]

2024-03-07 Thread via GitHub


adnanhemani commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1517130806


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1624,4 +1624,21 @@ private Constants() {
* Value: {@value}.
*/
   public static final boolean DEFAULT_AWS_S3_CLASSLOADER_ISOLATION = true;
+
+  /**
+   * Flag {@value}
+   * to enable S3 Access Grants to control authorization to S3 data. More 
information:
+   * https://aws.amazon.com/s3/features/access-grants/
+   * and
+   * https://github.com/aws/aws-s3-accessgrants-plugin-java-v2/
+   */
+  public static final String AWS_S3_ACCESS_GRANTS_ENABLED = 
"fs.s3a.s3accessgrants.enabled";

Review Comment:
   Done.



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1624,4 +1624,21 @@ private Constants() {
* Value: {@value}.
*/
   public static final boolean DEFAULT_AWS_S3_CLASSLOADER_ISOLATION = true;
+
+  /**
+   * Flag {@value}
+   * to enable S3 Access Grants to control authorization to S3 data. More 
information:
+   * https://aws.amazon.com/s3/features/access-grants/
+   * and
+   * https://github.com/aws/aws-s3-accessgrants-plugin-java-v2/
+   */
+  public static final String AWS_S3_ACCESS_GRANTS_ENABLED = 
"fs.s3a.s3accessgrants.enabled";

Review Comment:
   Done.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] Add Shuyan Zhang to Committer List. [hadoop-site]

2024-03-07 Thread via GitHub


Hexiaoqiao opened a new pull request, #52:
URL: https://github.com/apache/hadoop-site/pull/52

   Shuyan Zhang is one of the committers [1], but the site has not been updated 
yet.
   
   [1] https://lists.apache.org/thread/33lfm5dlkhybq55jh3vf13fsg5f3q5tl


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16822) Provide source artifacts for hadoop-client-api

2024-03-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824609#comment-17824609
 ] 

ASF GitHub Bot commented on HADOOP-16822:
-

pan3793 commented on PR #6458:
URL: https://github.com/apache/hadoop/pull/6458#issuecomment-1984979072

   I think we have already reached consensus in the JIRA discussion. Could any 
committer take a look?
   
   cc @slfan1989 @sunchao
   




> Provide source artifacts for hadoop-client-api
> --
>
> Key: HADOOP-16822
> URL: https://issues.apache.org/jira/browse/HADOOP-16822
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.3.1, 3.4.0, 3.2.3
>Reporter: Karel Kolman
>Assignee: Karel Kolman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.2.3
>
> Attachments: HADOOP-16822-hadoop-client-api-source-jar.patch
>
>
> h5. Improvement request
> The third-party libraries shading hadoop-client-api (& hadoop-client-runtime) 
> artifacts are super useful.
>  
> Having uber source jar for hadoop-client-api (maybe even 
> hadoop-client-runtime) would be great for downstream development & debugging 
> purposes.
> Are there any obstacles or objections against providing fat jar with all the 
> hadoop client api as well ?
> h5. Dev links
> - *maven-shaded-plugin* and its *shadeSourcesContent* attribute
> - 
> https://maven.apache.org/plugins/maven-shade-plugin/shade-mojo.html#shadeSourcesContent



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Revert "HADOOP-16822. Provide source artifacts for hadoop-client-api" [hadoop]

2024-03-07 Thread via GitHub


pan3793 commented on PR #6458:
URL: https://github.com/apache/hadoop/pull/6458#issuecomment-1984979072

   I think we have already reached consensus in the JIRA discussion. Could any 
committer take a look?
   
   cc @slfan1989 @sunchao
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17408. Reduce quota calculation times in FSDirRenameOp. [hadoop]

2024-03-07 Thread via GitHub


ThinkerLei commented on PR #6608:
URL: https://github.com/apache/hadoop/pull/6608#issuecomment-1984976217

   > Hi @ThinkerLei, please check if the failed unit tests are related to these 
changes.
   
   @Hexiaoqiao  Thanks for your reply, I will work on this soon.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17408. Reduce quota calculation times in FSDirRenameOp. [hadoop]

2024-03-07 Thread via GitHub


Hexiaoqiao commented on PR #6608:
URL: https://github.com/apache/hadoop/pull/6608#issuecomment-1984973314

   Hi @ThinkerLei, please check if the failed unit tests are related to these 
changes.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17380. FsImageValidation: remove inaccessible nodes. [hadoop]

2024-03-07 Thread via GitHub


Hexiaoqiao commented on PR #6549:
URL: https://github.com/apache/hadoop/pull/6549#issuecomment-1984971681

   @szetszwo Thanks for your response.
   
   > This may not be acceptable in some use cases since the newly created files 
will be lost (i.e. data loss) if we recover from an earlier fsimage.
   
   Recovering from an earlier checkpoint will not lose data; it keeps both the 
fsimage and all editlogs up to the latest transaction. 
   
   > If we remove the inaccessible inodes, we won't lose any files.
   
   When you talk about `inaccessible inodes`, do you mean that unexpected 
NameNode logic causes some inodes to become unreachable?
   
   > this is just a tool to fix fsimages. Users may choose not to use it if 
they are fine to recover from an earlier fsimage.
   
   +1. I will get involved in the review once I understand what it will improve. 
Thanks again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17364. EC: Configurably use WeakReferencedElasticByteBufferPool in DFSStripedInputStream. [hadoop]

2024-03-07 Thread via GitHub


Hexiaoqiao commented on PR #6514:
URL: https://github.com/apache/hadoop/pull/6514#issuecomment-1984965658

   > @Hexiaoqiao @zhangshuyan0 Thanks for your reviews. I realized there's also 
an ElasticBufferPool in DFSStripedOutputStream. I'm thinking of handling that 
here as well. What do you think?
   
   +1 from my side.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17364. EC: Configurably use WeakReferencedElasticByteBufferPool in DFSStripedInputStream. [hadoop]

2024-03-07 Thread via GitHub


Hexiaoqiao commented on code in PR #6514:
URL: https://github.com/apache/hadoop/pull/6514#discussion_r1517110356


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java:
##
@@ -3159,6 +3165,21 @@ private void initThreadsNumForStripedReads(int 
numThreads) {
 }
   }
 
+  private void initBufferPoolForStripedReads(boolean useWeakReference) {
+if (STRIPED_READ_BUFFER_POOL != null) {
+  return;
+}
+synchronized (DFSClient.class) {

Review Comment:
   Got it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2024-03-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824600#comment-17824600
 ] 

ASF GitHub Bot commented on HADOOP-15984:
-

hadoop-yetus commented on PR #6606:
URL: https://github.com/apache/hadoop/pull/6606#issuecomment-1984948590

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  4s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 78 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 50s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  34m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  30m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  27m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |  25m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 35s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   2m 30s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/7/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   0m 54s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/7/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   1m 30s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/7/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   1m 10s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/7/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 in trunk has 1 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   0m 38s |  |  
branch/hadoop-client-modules/hadoop-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  
branch/hadoop-client-modules/hadoop-client-api no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  
branch/hadoop-client-modules/hadoop-client-check-invariants no spotbugs output 
file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 38s |  |  
branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 36s |  |  
branch/hadoop-client-modules/hadoop-client-integration-tests no spotbugs output 
file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  
branch/hadoop-cloud-storage-project/hadoop-cloud-storage no spotbugs output 
file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  35m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  35m 28s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | 

Re: [PR] HADOOP-15984. Update jersey from 1.19 to 2.x [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6606:
URL: https://github.com/apache/hadoop/pull/6606#issuecomment-1984948590

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  4s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 78 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 50s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  34m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   4m 43s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  30m 52s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  27m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |  25m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 35s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   2m 30s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/7/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   0m 54s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/7/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   1m 30s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/7/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   1m 10s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/7/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 in trunk has 1 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   0m 38s |  |  
branch/hadoop-client-modules/hadoop-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  
branch/hadoop-client-modules/hadoop-client-api no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  
branch/hadoop-client-modules/hadoop-client-check-invariants no spotbugs output 
file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 38s |  |  
branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 36s |  |  
branch/hadoop-client-modules/hadoop-client-integration-tests no spotbugs output 
file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  
branch/hadoop-cloud-storage-project/hadoop-cloud-storage no spotbugs output 
file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  35m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  35m 28s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 27s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 40s | 

Re: [PR] HDFS-17299. Adding rack failure tolerance when creating a new file [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6612:
URL: https://github.com/apache/hadoop/pull/6612#issuecomment-1984392158

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 44s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 14s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   2m 16s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  branch-3.3 passed  |
   | -1 :x: |  spotbugs  |   1m 27s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6612/3/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in branch-3.3 has 2 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  22m  3s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  hadoop-hdfs-project: The 
patch generated 0 new + 249 unchanged - 3 fixed = 249 total (was 252)  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m  3s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 47s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 121m 34s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6612/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +0 :ok: |  asflicense  |   0m 27s |  |  ASF License check generated no 
output?  |
   |  |   | 225m 17s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestDecommissionWithStripedBackoffMonitor 
|
   |   | hadoop.hdfs.TestEncryptedTransfer |
   |   | hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
   |   | hadoop.hdfs.server.datanode.TestBatchIbr |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.hdfs.TestLeaseRecovery2 |
   |   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
   |   | hadoop.hdfs.server.datanode.TestDataNodeFaultInjector |
   |   | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
   |   | hadoop.hdfs.server.datanode.TestBlockRecovery2 |
   |   | hadoop.hdfs.TestParallelUnixDomainRead |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6612/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6612 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux e6f838e8b04d 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 2cfb16244810628ba6c4c6b1282a3f50568c302a |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6612/3/testReport/ |
   

Re: [PR] HDFS-17299. Adding rack failure tolerance when creating a new file [hadoop]

2024-03-07 Thread via GitHub


ritegarg commented on PR #6612:
URL: https://github.com/apache/hadoop/pull/6612#issuecomment-1984310623

   > There are a few test failures. Can you please take a look? @ritegarg
   
   I was looking into the failures; they look like transient failures. The same 
tests run fine locally. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17299. Adding rack failure tolerance when creating a new file [hadoop]

2024-03-07 Thread via GitHub


shahrs87 commented on PR #6612:
URL: https://github.com/apache/hadoop/pull/6612#issuecomment-1984012754

   There are a few test failures. Can you please take a look? @ritegarg 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-03-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824477#comment-17824477
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

steveloughran commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1516477473


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1624,4 +1624,21 @@ private Constants() {
* Value: {@value}.
*/
   public static final boolean DEFAULT_AWS_S3_CLASSLOADER_ISOLATION = true;
+
+  /**
+   * Flag {@value}
+   * to enable S3 Access Grants to control authorization to S3 data. More 
information:
+   * https://aws.amazon.com/s3/features/access-grants/
+   * and
+   * https://github.com/aws/aws-s3-accessgrants-plugin-java-v2/
+   */
+  public static final String AWS_S3_ACCESS_GRANTS_ENABLED = 
"fs.s3a.s3accessgrants.enabled";

Review Comment:
   1. can you use "fs.s3a.access.grants" as the prefix here and below
   2. It'd be good to have s3afs.hasPathCapability() return the enabled flag for 
ease of testing
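For illustration, one possible shape of that suggestion (the capability name and the s3AccessGrantsEnabled field are assumptions, not the real S3AFileSystem code):

    // Sketch only: expose the enabled flag as a path capability so tests can probe it.
    @Override
    public boolean hasPathCapability(Path path, String capability) throws IOException {
      if ("fs.s3a.access.grants.enabled".equals(capability)) {   // hypothetical capability name
        return s3AccessGrantsEnabled;   // field derived from the configuration
      }
      return super.hasPathCapability(path, capability);
    }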



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import software.amazon.awssdk.awscore.AwsClient;
+import 
software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsIdentityProvider;
+
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import static org.apache.hadoop.fs.s3a.Constants.AWS_S3_ACCESS_GRANTS_ENABLED;
+
+
+/**
+ * Test S3 Access Grants configurations.
+ */
+public class TestS3AccessGrantConfiguration extends AbstractHadoopTestBase {
+  /**
+   * This credential provider will be attached to any client
+   * that has been configured with the S3 Access Grants plugin.
+   * {@link software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsPlugin}.
+   */
+  public static final String 
S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS =
+  S3AccessGrantsIdentityProvider.class.getName();
+
+  @Test
+  public void testS3AccessGrantsEnabled() throws IOException, 
URISyntaxException {
+// Feature is explicitly enabled
+AwsClient s3AsyncClient = getAwsClient(createConfig(true), true);
+Assert.assertEquals(

Review Comment:
   1. I prefer AssertJ asserts with useful .description() values in new test 
suites. AssertEquals is not as bad as the others: it does generate a message, 
but more details are good.
   
   2. the same assert and operation is being used everywhere. Factor it out 
into a method and call it where needed.
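As an illustration of both points, a factored-out AssertJ-style helper might look like this (org.assertj.core.api.Assertions assumed; getCredentialProviderClassName is a hypothetical accessor):

    // Hypothetical shared assertion: one method reused by every case, with a
    // describedAs() message so a failure explains what was being checked.
    private void assertCredentialProvider(AwsClient client,
        String expectedProviderClass, boolean accessGrantsEnabled) {
      Assertions.assertThat(getCredentialProviderClassName(client))   // assumed accessor
          .describedAs("credential provider class when access grants enabled=%s",
              accessGrantsEnabled)
          .isEqualTo(expectedProviderClass);
    }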
   



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -401,4 +411,20 @@ private static Region getS3RegionFromEndpoint(final String 
endpoint,
 return Region.of(AWS_S3_DEFAULT_REGION);
   }
 
+  private static , 
ClientT> void
+  applyS3AccessGrantsConfigurations(BuilderT builder, Configuration conf) {
+if (!conf.getBoolean(AWS_S3_ACCESS_GRANTS_ENABLED, false)){

Review Comment:
   define and use a constant `AWS_S3_ACCESS_GRANTS_ENABLED` here.
   
   makes it easier to see/change what the default is in future.
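A sketch of that pattern (the DEFAULT_ constant name is an assumption):

    // Keep the default next to the key so it is visible and changeable in one place.
    public static final boolean DEFAULT_AWS_S3_ACCESS_GRANTS_ENABLED = false;

    // ...then at the call site:
    if (!conf.getBoolean(AWS_S3_ACCESS_GRANTS_ENABLED, DEFAULT_AWS_S3_ACCESS_GRANTS_ENABLED)) {
      return;
    }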



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the 

Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]

2024-03-07 Thread via GitHub


steveloughran commented on code in PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#discussion_r1516477473


##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Constants.java:
##
@@ -1624,4 +1624,21 @@ private Constants() {
* Value: {@value}.
*/
   public static final boolean DEFAULT_AWS_S3_CLASSLOADER_ISOLATION = true;
+
+  /**
+   * Flag {@value}
+   * to enable S3 Access Grants to control authorization to S3 data. More 
information:
+   * https://aws.amazon.com/s3/features/access-grants/
+   * and
+   * https://github.com/aws/aws-s3-accessgrants-plugin-java-v2/
+   */
+  public static final String AWS_S3_ACCESS_GRANTS_ENABLED = 
"fs.s3a.s3accessgrants.enabled";

Review Comment:
   1. can you use "fs.s3a.access.grants" as the prefix here and below
   2. It'd be good to have s3afs.hasPathCapability() return the enabled flag for 
ease of testing



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.s3a;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.test.AbstractHadoopTestBase;
+import org.junit.Assert;
+import org.junit.Test;
+
+import software.amazon.awssdk.awscore.AwsClient;
+import 
software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsIdentityProvider;
+
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+import static org.apache.hadoop.fs.s3a.Constants.AWS_S3_ACCESS_GRANTS_ENABLED;
+
+
+/**
+ * Test S3 Access Grants configurations.
+ */
+public class TestS3AccessGrantConfiguration extends AbstractHadoopTestBase {
+  /**
+   * This credential provider will be attached to any client
+   * that has been configured with the S3 Access Grants plugin.
+   * {@link software.amazon.awssdk.s3accessgrants.plugin.S3AccessGrantsPlugin}.
+   */
+  public static final String 
S3_ACCESS_GRANTS_EXPECTED_CREDENTIAL_PROVIDER_CLASS =
+  S3AccessGrantsIdentityProvider.class.getName();
+
+  @Test
+  public void testS3AccessGrantsEnabled() throws IOException, 
URISyntaxException {
+// Feature is explicitly enabled
+AwsClient s3AsyncClient = getAwsClient(createConfig(true), true);
+Assert.assertEquals(

Review Comment:
   1. I prefer AssertJ asserts with useful .description() values in new test 
suites. AssertEquals is not as bad as the others: it does generate a message, 
but more details are good.
   
   2. the same assert and operation is being used everywhere. Factor it out 
into a method and call it where needed.
   



##
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/DefaultS3ClientFactory.java:
##
@@ -401,4 +411,20 @@ private static Region getS3RegionFromEndpoint(final String 
endpoint,
 return Region.of(AWS_S3_DEFAULT_REGION);
   }
 
+  private static , 
ClientT> void
+  applyS3AccessGrantsConfigurations(BuilderT builder, Configuration conf) {
+if (!conf.getBoolean(AWS_S3_ACCESS_GRANTS_ENABLED, false)){

Review Comment:
   define and use a constant `AWS_S3_ACCESS_GRANTS_ENABLED` here.
   
   makes it easier to see/change what the default is in future.



##
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AccessGrantConfiguration.java:
##
@@ -0,0 +1,107 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package 

Re: [PR] HDFS-17146. [Addendum] Enhance test readability with assertJ. [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6595:
URL: https://github.com/apache/hadoop/pull/6595#issuecomment-1983993404

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  6s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 215m 22s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/9/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 308m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
   |   | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6595 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 908072e384a3 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6c18e7912316a868d950d08f8525bd559629fa82 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/9/testReport/ |
   | Max. process+thread count | 4655 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/9/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message 

[jira] [Commented] (HADOOP-19050) Add S3 Access Grants Support in S3A

2024-03-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824472#comment-17824472
 ] 

ASF GitHub Bot commented on HADOOP-19050:
-

steveloughran commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1983988376

   Test results: thanks for these. You only need to include failures (we don't care 
about successes unless something has got very slow).
   
   * Rebase on trunk; you need "HADOOP-19057. S3A: Landsat bucket used in tests 
no longer accessible".
   * A lot of the tests are cases you can turn off, as is done for third-party 
stores. Look at testing.md and at S3ATestUtils to see how things are skipped.
   
   For example, for ITestS3ATemporaryCredentials and delegation, set 
test.fs.s3a.sts.enabled to false (see the snippet below).
   There's something similar for ACLs.
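   
   For instance, such a switch is usually set in the test configuration; a purely 
illustrative snippet (where it lives, e.g. auth-keys.xml, depends on your setup):
   
   ```xml
   <!-- illustrative only: disable the STS-dependent tests for this store -->
   <property>
     <name>test.fs.s3a.sts.enabled</name>
     <value>false</value>
   </property>
   ```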
   
   




> Add S3 Access Grants Support in S3A
> ---
>
> Key: HADOOP-19050
> URL: https://issues.apache.org/jira/browse/HADOOP-19050
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Jason Han
>Assignee: Jason Han
>Priority: Minor
>  Labels: pull-request-available
>
> Add support for S3 Access Grants 
> (https://aws.amazon.com/s3/features/access-grants/) in S3A.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19050. Add S3 Access Grants Support in S3A [hadoop]

2024-03-07 Thread via GitHub


steveloughran commented on PR #6544:
URL: https://github.com/apache/hadoop/pull/6544#issuecomment-1983988376

   Test results: thanks for these. You only need to include failures (we don't care 
about successes unless something has got very slow).
   
   * Rebase on trunk; you need "HADOOP-19057. S3A: Landsat bucket used in tests 
no longer accessible".
   * A lot of the tests are cases you can turn off, as is done for third-party 
stores. Look at testing.md and at S3ATestUtils to see how things are skipped.
   
   For example, for ITestS3ATemporaryCredentials and delegation, set 
test.fs.s3a.sts.enabled to false.
   There's something similar for ACLs.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17146. [Addendum] Enhance test readability with assertJ. [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6595:
URL: https://github.com/apache/hadoop/pull/6595#issuecomment-1983953951

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 25s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |   5m 31s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/8/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m  6s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 56s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  21m 30s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 228m 50s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/8/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 26s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 295m 34s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestEncryptionZonesWithKMS |
   |   | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestErasureCodingPoliciesWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/8/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6595 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 75aec88bb203 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6c18e7912316a868d950d08f8525bd559629fa82 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/8/testReport/ |
   | Max. process+thread count | 4253 (vs. ulimit of 5500) |
   | modules | C: 

Re: [PR] HADOOP-19050. SDK Add Support for AWS S3 Access Grants [hadoop]

2024-03-07 Thread via GitHub


steveloughran closed pull request #6507: HADOOP-19050. SDK Add Support for AWS 
S3 Access Grants
URL: https://github.com/apache/hadoop/pull/6507


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17380. FsImageValidation: remove inaccessible nodes. [hadoop]

2024-03-07 Thread via GitHub


szetszwo commented on PR #6549:
URL: https://github.com/apache/hadoop/pull/6549#issuecomment-1983883646

   @Hexiaoqiao , thanks for reviewing this!
   
   > ... We should recover from other fsimages first if one fsimage file is 
corrupted ...
   
   This may not be acceptable in some use cases since the newly created files 
will be lost (i.e. data loss) if we recover from an earlier fsimage.  If we 
remove the inaccessible inodes, we won't lose any files (i.e. no data loss).
   
   BTW, this is just a tool to fix fsimages.  Users may choose not to use it if 
they are fine with recovering from an earlier fsimage.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17408. Reduce quota calculation times in FSDirRenameOp. [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6608:
URL: https://github.com/apache/hadoop/pull/6608#issuecomment-1983825438

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 10s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 33s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 46 unchanged - 1 
fixed = 46 total (was 47)  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 35s |  |  the patch passed  |
   | -1 :x: |  shadedclient  |  40m 26s |  |  patch has errors when building 
and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 302m 57s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6608/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 52s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 458m 11s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDeleteRace |
   |   | hadoop.hdfs.web.TestFSMainOperationsWebHdfs |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshotWithRandomECPolicy |
   |   | hadoop.hdfs.TestLeaseRecovery2 |
   |   | hadoop.hdfs.server.namenode.TestFSNamesystemLockReport |
   |   | hadoop.hdfs.TestTrashWithEncryptionZones |
   |   | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractAppend |
   |   | hadoop.hdfs.server.namenode.TestNNThroughputBenchmark |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.TestFileCreation |
   |   | hadoop.fs.viewfs.TestViewFsHdfs |
   |   | hadoop.fs.contract.hdfs.TestHDFSContractRename |
   |   | hadoop.hdfs.TestDFSUpgradeFromImage |
   |   | hadoop.hdfs.server.namenode.TestReencryption |
   |   | hadoop.fs.viewfs.TestViewFileSystemLinkFallback |
   |   | hadoop.hdfs.TestDFSRename |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport |
   |   | hadoop.cli.TestAclCLI |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   |   | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
   |   | hadoop.fs.TestWebHdfsFileContextMainOperations |
   |   | hadoop.hdfs.web.TestWebHDFSAcl |
   |   | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.TestFileAppend3 |
   |   | 

Re: [PR] HDFS-17146. [Addendum] Enhance test readability with assertJ. [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6595:
URL: https://github.com/apache/hadoop/pull/6595#issuecomment-1983822417

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  40m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 21s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  41m 20s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 291m 14s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 57s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 448m  1s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6595 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6d1ecdf07d36 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 53ff41a79cd2904c76053cfca956b0511270b1ec |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/7/testReport/ |
   | Max. process+thread count | 2596 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was 

Re: [PR] YARN-11626. Optimize ResourceManager's operations on Zookeeper metadata [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6616:
URL: https://github.com/apache/hadoop/pull/6616#issuecomment-1983768684

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  18m 24s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  47m 45s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 58s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 56s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  38m 15s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 53s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 45s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 44s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6616/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5)  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  39m  2s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 108m  3s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 269m 12s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6616/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6616 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux ba39eed75d7a 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ce613be2e53778022e910c86be78f0d8c6ba1ec8 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6616/1/testReport/ |
   | Max. process+thread count | 927 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 

Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6314:
URL: https://github.com/apache/hadoop/pull/6314#issuecomment-1983704408

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  44m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  33m 16s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/11/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 5 new + 10 unchanged - 0 
fixed = 15 total (was 10)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   0m 26s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/11/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt)
 |  hadoop-tools_hadoop-azure-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 1 new + 15 
unchanged - 0 fixed = 16 total (was 15)  |
   | -1 :x: |  javadoc  |   0m 25s | 
[/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/11/artifact/out/results-javadoc-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt)
 |  hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08 
with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08 generated 1 new + 15 
unchanged - 0 fixed = 16 total (was 15)  |
   | -1 :x: |  spotbugs  |   1m  7s | 
[/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/11/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 2 new + 0 unchanged - 0 fixed = 2 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  33m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 14s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 127m 19s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-azure |
   |  |  Private method 
org.apache.hadoop.fs.azurebfs.services.AbfsClient.createRequestUrl(URL, String) 
is never called  At AbfsClient.java:never 

[jira] [Updated] (HADOOP-19104) S3A HeaderProcessing to process all metadata entries of HEAD response

2024-03-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-19104:

Description: 
S3A HeaderProcessing builds up an incomplete list of headers: its mapping of 
metadata to header entries omits headers, including
x-amz-server-side-encryption-aws-kms-key-id

proposed
* review all headers which are stripped from "raw" responses and mapped into 
headers
* make sure result of headers matches v1; looks like etags are different
* make sure x-amz-server-side-encryption-aws-kms-key-id gets back
* plus new checksum values


v1 sdk

{code}

# file: s3a://noaa-cors-pds/raw/2024/001/akse/AKSE001x.24_.gz
header.Content-Length="524671"
header.Content-Type="binary/octet-stream"
header.ETag="3e39531220fbd3747d32cf93a79a7a0c"
header.Last-Modified="Tue Jan 02 00:15:13 GMT 2024"
header.x-amz-server-side-encryption="AES256"

{code}

v2 SDK. note how etag is now double quoted.

{code}

# file: s3a://noaa-cors-pds/raw/2024/001/akse/AKSE001x.24_.gz
header.Content-Length="524671"
header.Content-Type="binary/octet-stream"
header.ETag=""3e39531220fbd3747d32cf93a79a7a0c""
header.Last-Modified="Tue Jan 02 00:15:13 GMT 2024"
header.x-amz-server-side-encryption="AES256"

{code}
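
A minimal sketch (illustrative only, not the proposed fix) of normalising the v2 
ETag back to the v1 form by stripping the surrounding quotes:

{code}
// illustrative helper: drop the quotes the v2 SDK leaves around the ETag value
static String stripEtagQuotes(String raw) {
  if (raw != null && raw.length() >= 2
      && raw.startsWith("\"") && raw.endsWith("\"")) {
    return raw.substring(1, raw.length() - 1);
  }
  return raw;
}
{code}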


  was:
S3A HeaderProcessing builds up an incomplete list of headers as its mapping of 
md to header. entries omits headers including
x-amz-server-side-encryption-aws-kms-key-id

proposed
* review all headers which are stripped from "raw" responses and mapped into 
headers
* make sure result of headers matches v1; looks like etags are different
* make sure x-amz-server-side-encryption-aws-kms-key-id gets back
* plus new checksum values

{code}
v1 sdk

{code}

# file: s3a://noaa-cors-pds/raw/2024/001/akse/AKSE001x.24_.gz
header.Content-Length="524671"
header.Content-Type="binary/octet-stream"
header.ETag="3e39531220fbd3747d32cf93a79a7a0c"
header.Last-Modified="Tue Jan 02 00:15:13 GMT 2024"
header.x-amz-server-side-encryption="AES256"

{code}

v2 SDK. note how etag is now double quoted.

{code}

# file: s3a://noaa-cors-pds/raw/2024/001/akse/AKSE001x.24_.gz
header.Content-Length="524671"
header.Content-Type="binary/octet-stream"
header.ETag=""3e39531220fbd3747d32cf93a79a7a0c""
header.Last-Modified="Tue Jan 02 00:15:13 GMT 2024"
header.x-amz-server-side-encryption="AES256"

{code}



> S3A HeaderProcessing to process all metadata entries of HEAD response
> -
>
> Key: HADOOP-19104
> URL: https://issues.apache.org/jira/browse/HADOOP-19104
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Priority: Major
>
> S3A HeaderProcessing builds up an incomplete list of headers as its mapping 
> of md to header. entries omits headers including
> x-amz-server-side-encryption-aws-kms-key-id
> proposed
> * review all headers which are stripped from "raw" responses and mapped into 
> headers
> * make sure result of headers matches v1; looks like etags are different
> * make sure x-amz-server-side-encryption-aws-kms-key-id gets back
> * plus new checksum values
> v1 sdk
> {code}
> # file: s3a://noaa-cors-pds/raw/2024/001/akse/AKSE001x.24_.gz
> header.Content-Length="524671"
> header.Content-Type="binary/octet-stream"
> header.ETag="3e39531220fbd3747d32cf93a79a7a0c"
> header.Last-Modified="Tue Jan 02 00:15:13 GMT 2024"
> header.x-amz-server-side-encryption="AES256"
> {code}
> v2 SDK. note how etag is now double quoted.
> {code}
> # file: s3a://noaa-cors-pds/raw/2024/001/akse/AKSE001x.24_.gz
> header.Content-Length="524671"
> header.Content-Type="binary/octet-stream"
> header.ETag=""3e39531220fbd3747d32cf93a79a7a0c""
> header.Last-Modified="Tue Jan 02 00:15:13 GMT 2024"
> header.x-amz-server-side-encryption="AES256"
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-19104) S3A HeaderProcessing to process all metadata entries of HEAD response

2024-03-07 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-19104:
---

 Summary: S3A HeaderProcessing to process all metadata entries of 
HEAD response
 Key: HADOOP-19104
 URL: https://issues.apache.org/jira/browse/HADOOP-19104
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.4.0
Reporter: Steve Loughran


S3A HeaderProcessing builds up an incomplete list of headers as its mapping of 
md to header. entries omits headers including
x-amz-server-side-encryption-aws-kms-key-id

proposed
* review all headers which are stripped from "raw" responses and mapped into 
headers
* make sure result of headers matches v1; looks like etags are different
* make sure x-amz-server-side-encryption-aws-kms-key-id gets back
* plus new checksum values

{code}
v1 sdk

{code}

# file: s3a://noaa-cors-pds/raw/2024/001/akse/AKSE001x.24_.gz
header.Content-Length="524671"
header.Content-Type="binary/octet-stream"
header.ETag="3e39531220fbd3747d32cf93a79a7a0c"
header.Last-Modified="Tue Jan 02 00:15:13 GMT 2024"
header.x-amz-server-side-encryption="AES256"

{code}

v2 SDK. note how etag is now double quoted.

{code}

# file: s3a://noaa-cors-pds/raw/2024/001/akse/AKSE001x.24_.gz
header.Content-Length="524671"
header.Content-Type="binary/octet-stream"
header.ETag=""3e39531220fbd3747d32cf93a79a7a0c""
header.Last-Modified="Tue Jan 02 00:15:13 GMT 2024"
header.x-amz-server-side-encryption="AES256"

{code}




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6314:
URL: https://github.com/apache/hadoop/pull/6314#issuecomment-1983568709

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 36s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  21m 43s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 10s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/10/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 16 new + 10 unchanged - 0 
fixed = 26 total (was 10)  |
   | +1 :green_heart: |  mvnsite  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 53s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 24s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  85m 31s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6314 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux dc048dbadcf0 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ea921724efe1e607e514cb64640460fd34841872 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/10/testReport/ |
   | Max. process+thread count | 552 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 

Re: [PR] HDFS-17391:Adjust the checkpoint io buffer size to the chunk size [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6594:
URL: https://github.com/apache/hadoop/pull/6594#issuecomment-1983556930

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 20s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 4 unchanged - 4 
fixed = 4 total (was 8)  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 43s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 197m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6594/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 286m 18s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.tools.TestDFSAdmin |
   |   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6594/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6594 |
   | JIRA Issue | HDFS-17391 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux f97425500da8 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 595e396fa499ab7b0a67ad1d9f4d4d762a14e260 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6594/5/testReport/ |
   | Max. process+thread count | 4857 (vs. 

[jira] [Commented] (HADOOP-18708) AWS SDK V2 - Implement CSE

2024-03-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824421#comment-17824421
 ] 

ASF GitHub Bot commented on HADOOP-18708:
-

ahmarsuhail commented on PR #6164:
URL: https://github.com/apache/hadoop/pull/6164#issuecomment-1983551593

   Thanks @steveloughran . While looking at the test failures, I found a couple 
of issues with S3 Encryption Client, opened 
https://github.com/aws/amazon-s3-encryption-client-java/issues/200 and 
https://github.com/aws/amazon-s3-encryption-client-java/issues/201 there. 
   
   `ITestS3AContractVectoredRead.testEOFRanges416Handling` fails because S3EC 
does not throw an exception if the range is greater than EOF. 
   
   `ITestConnectionTimeouts.testGeneratePoolTimeouts` fails because S3EC uses 
the async client to make requests, and the error thrown on a timeout is 
different. S3Client will throw
   
   ```
   software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
HTTP request
   Caused by: 
software.amazon.awssdk.thirdparty.org.apache.http.conn.ConnectTimeoutException
   Caused by: java.net.SocketTimeoutException: connect timed out
   ```
   
   which gets translated to `ConnectTimeoutException` in 
S3AUtils.translateInterruptedException(). But the AsyncClient throws:
   
   ```
   software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
HTTP request: 
   Caused by: java.lang.Throwable: Acquire operation took longer than the 
configured maximum time.
   Caused by: java.util.concurrent.TimeoutException: Acquire operation took 
longer than 10 milliseconds.
   ```
   
   which currently doesn't get translated properly, so we need to add some 
handling there (sketched below). 
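   
   A rough sketch of the kind of handling that could be added (method and parameter 
names are assumptions, not the actual S3AUtils code):
   
   ```java
   // Illustrative only: walk the cause chain of the SdkClientException and map the
   // async client's pool-acquire TimeoutException to the same ConnectTimeoutException
   // that a plain socket connect timeout already maps to.
   static java.io.IOException maybeTranslateAcquireTimeout(
       String message, software.amazon.awssdk.core.exception.SdkException e) {
     for (Throwable cause = e.getCause(); cause != null; cause = cause.getCause()) {
       if (cause instanceof java.util.concurrent.TimeoutException) {
         return new org.apache.hadoop.net.ConnectTimeoutException(message);
       }
     }
     return null; // not an acquire timeout; leave it to the existing translation
   }
   ```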
   
   Will wait for the fixes to S3EC, and then update error translation + address 
comments. 
   
   Also asked about adding it to the bundle. Looking at the S3EC dependencies, I think 
it may cause us some issues, but I'm not sure:
   
   ```
   [INFO] +- 
software.amazon.encryption.s3:amazon-s3-encryption-client-java:jar:3.1.1:provided
   [INFO] |  +- software.amazon.awssdk:s3:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:aws-xml-protocol:jar:2.20.38:provided
   [INFO] |  |  |  \- 
software.amazon.awssdk:aws-query-protocol:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:protocol-core:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:arns:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:profiles:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:crt-core:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:sdk-core:jar:2.20.38:provided
   [INFO] |  |  |  \- org.reactivestreams:reactive-streams:jar:1.0.3:provided
   [INFO] |  |  +- software.amazon.awssdk:auth:jar:2.20.38:provided
   [INFO] |  |  |  \- software.amazon.eventstream:eventstream:jar:1.0.1:provided
   [INFO] |  |  +- software.amazon.awssdk:http-client-spi:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:regions:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:annotations:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:utils:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:aws-core:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:metrics-spi:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:json-utils:jar:2.20.38:provided
   [INFO] |  |  |  \- 
software.amazon.awssdk:third-party-jackson-core:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:endpoints-spi:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:apache-client:jar:2.20.38:provided
   [INFO] |  |  \- software.amazon.awssdk:netty-nio-client:jar:2.20.38:provided
   [INFO] |  +- joda-time:joda-time:jar:2.8.1:provided
   [INFO] |  \- commons-logging:commons-logging:jar:1.2:provided
   ```
   
   What do you think?
   
   




> AWS SDK V2 - Implement CSE
> --
>
> Key: HADOOP-18708
> URL: https://issues.apache.org/jira/browse/HADOOP-18708
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Assignee: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>
> S3 Encryption client for SDK V2 is now available, so add client side 
> encryption back in. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-18708. AWS SDK V2 - Implement CSE [hadoop]

2024-03-07 Thread via GitHub


ahmarsuhail commented on PR #6164:
URL: https://github.com/apache/hadoop/pull/6164#issuecomment-1983551593

   Thanks @steveloughran . While looking at the test failures, I found a couple 
of issues with S3 Encryption Client, opened 
https://github.com/aws/amazon-s3-encryption-client-java/issues/200 and 
https://github.com/aws/amazon-s3-encryption-client-java/issues/201 there. 
   
   `ITestS3AContractVectoredRead.testEOFRanges416Handling` fails because S3EC 
does not throw an exception if the range is greater than EOF. 
   
   `ITestConnectionTimeouts.testGeneratePoolTimeouts` fails because S3EC uses 
the async client to make requests, and the error thrown on a timeout is 
different. S3Client will throw
   
   ```
   software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
HTTP request
   Caused by: 
software.amazon.awssdk.thirdparty.org.apache.http.conn.ConnectTimeoutException
   Caused by: java.net.SocketTimeoutException: connect timed out
   ```
   
   which gets translated to `ConnectTimeoutException` in 
S3AUtils.translateInterruptedException(). But the AsyncClient throws:
   
   ```
   software.amazon.awssdk.core.exception.SdkClientException: Unable to execute 
HTTP request: 
   Caused by: java.lang.Throwable: Acquire operation took longer than the 
configured maximum time.
   Caused by: java.util.concurrent.TimeoutException: Acquire operation took 
longer than 10 milliseconds.
   ```
   
   which currently doesn't get translated properly, so we need to add some 
handling there. 
   
   Will wait for the fixes to S3EC, and then update error translation + address 
comments. 
   
   Also asked about adding it to the bundle. Looking at the S3EC dependencies, I think 
it may cause us some issues, but I'm not sure:
   
   ```
   [INFO] +- 
software.amazon.encryption.s3:amazon-s3-encryption-client-java:jar:3.1.1:provided
   [INFO] |  +- software.amazon.awssdk:s3:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:aws-xml-protocol:jar:2.20.38:provided
   [INFO] |  |  |  \- 
software.amazon.awssdk:aws-query-protocol:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:protocol-core:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:arns:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:profiles:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:crt-core:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:sdk-core:jar:2.20.38:provided
   [INFO] |  |  |  \- org.reactivestreams:reactive-streams:jar:1.0.3:provided
   [INFO] |  |  +- software.amazon.awssdk:auth:jar:2.20.38:provided
   [INFO] |  |  |  \- software.amazon.eventstream:eventstream:jar:1.0.1:provided
   [INFO] |  |  +- software.amazon.awssdk:http-client-spi:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:regions:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:annotations:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:utils:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:aws-core:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:metrics-spi:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:json-utils:jar:2.20.38:provided
   [INFO] |  |  |  \- 
software.amazon.awssdk:third-party-jackson-core:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:endpoints-spi:jar:2.20.38:provided
   [INFO] |  |  +- software.amazon.awssdk:apache-client:jar:2.20.38:provided
   [INFO] |  |  \- software.amazon.awssdk:netty-nio-client:jar:2.20.38:provided
   [INFO] |  +- joda-time:joda-time:jar:2.8.1:provided
   [INFO] |  \- commons-logging:commons-logging:jar:1.2:provided
   ```
   
   What do you think?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17364. EC: Configurably use WeakReferencedElasticByteBufferPool in DFSStripedInputStream. [hadoop]

2024-03-07 Thread via GitHub


bbeaudreault commented on PR #6514:
URL: https://github.com/apache/hadoop/pull/6514#issuecomment-1983545666

   @Hexiaoqiao @zhangshuyan0 Thanks for your reviews. I realized there's also 
an ElasticBufferPool in DFSStripedOutputStream. I'm thinking of handling that 
here as well. What do you think?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17364. EC: Configurably use WeakReferencedElasticByteBufferPool in DFSStripedInputStream. [hadoop]

2024-03-07 Thread via GitHub


bbeaudreault commented on code in PR #6514:
URL: https://github.com/apache/hadoop/pull/6514#discussion_r1516181907


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java:
##
@@ -3159,6 +3165,21 @@ private void initThreadsNumForStripedReads(int 
numThreads) {
 }
   }
 
+  private void initBufferPoolForStripedReads(boolean useWeakReference) {
+if (STRIPED_READ_BUFFER_POOL != null) {
+  return;
+}
+synchronized (DFSClient.class) {

Review Comment:
   @Hexiaoqiao thanks for review. For this block, it's sort of modeled after 
other examples in DFSClient, such [as initializing the striped read thread 
pool](https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java#L3150).
 I think the idea is that DFSClient could easily be used in multiple threads, 
so we want to avoid double initializing the shared resource.
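
   For readers outside this thread: the pattern described above is the usual 
double-checked initialization of a lazily created shared resource. A minimal 
sketch (class and field names are illustrative, not the actual DFSClient 
members; it assumes the buffer pool classes in `org.apache.hadoop.io`):
   
   ```
   // Sketch of double-checked initialization of a process-wide buffer pool.
   // Names are placeholders, not the DFSClient implementation.
   import org.apache.hadoop.io.ByteBufferPool;
   import org.apache.hadoop.io.ElasticByteBufferPool;
   import org.apache.hadoop.io.WeakReferencedElasticByteBufferPool;
   
   final class StripedReadBufferPoolHolder {
     // volatile so the unsynchronized null check sees a fully published pool.
     private static volatile ByteBufferPool pool;
   
     private StripedReadBufferPoolHolder() {
     }
   
     static ByteBufferPool get(boolean useWeakReference) {
       if (pool == null) {                                  // fast path, no lock
         synchronized (StripedReadBufferPoolHolder.class) { // one initializer
           if (pool == null) {                              // re-check under lock
             pool = useWeakReference
                 ? new WeakReferencedElasticByteBufferPool()
                 : new ElasticByteBufferPool();
           }
         }
       }
       return pool;
     }
   }
   ```
   
   The second null check under the lock is what stops two threads that both saw 
`null` from each creating a pool.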



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] Hadoop 18325: ABFS: Add correlated metric support for ABFS operations [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6314:
URL: https://github.com/apache/hadoop/pull/6314#issuecomment-1983540085

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  20m 56s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 11s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/9/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 16 new + 10 unchanged - 0 
fixed = 26 total (was 10)  |
   | +1 :green_heart: |  mvnsite  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   0m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 49s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 22s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  86m 35s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6314 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint 
|
   | uname | Linux ef2b32bdae58 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f51f756cc0107292b5b97367618c84274c70a7c3 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6314/9/testReport/ |
   | Max. process+thread count | 749 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 

Re: [PR] HDFS-17364. EC: Configurably use WeakReferencedElasticByteBufferPool in DFSStripedInputStream. [hadoop]

2024-03-07 Thread via GitHub


bbeaudreault commented on code in PR #6514:
URL: https://github.com/apache/hadoop/pull/6514#discussion_r1516179674


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java:
##
@@ -530,6 +530,9 @@ interface StripedRead {
  * span 6 DNs, so this default value accommodates 3 read streams
  */
 int THREADPOOL_SIZE_DEFAULT = 18;
+
+String WEAK_REF_BUFFER_POOL_KEY = PREFIX + 
"bufferpool.weak.references.enabled";
+boolean WEAK_REF_BUFFER_POOL_DEFAULT = false;

Review Comment:
   Will do



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17368. HA: Standby should exit safemode when resources are from low available [hadoop]

2024-03-07 Thread via GitHub


Hexiaoqiao commented on code in PR #6518:
URL: https://github.com/apache/hadoop/pull/6518#discussion_r1516122704


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java:
##
@@ -1582,6 +1582,10 @@ void startStandbyServices(final Configuration conf, 
boolean isObserver)
   standbyCheckpointer = new StandbyCheckpointer(conf, this);
   standbyCheckpointer.start();
 }
+if (isNoManualAndResourceLowSafeMode()) {
+  LOG.info("Standby should not enter safe mode when resources are low, 
exiting safe mode.");
+  leaveSafeMode(false);

Review Comment:
   It is reasonable at first glance, but I have not thought it through 
carefully: are there any cases where this could trigger the Standby to leave 
safemode prematurely? Thanks.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17364. EC: Configurably use WeakReferencedElasticByteBufferPool in DFSStripedInputStream. [hadoop]

2024-03-07 Thread via GitHub


Hexiaoqiao commented on code in PR #6514:
URL: https://github.com/apache/hadoop/pull/6514#discussion_r1516099526


##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/client/HdfsClientConfigKeys.java:
##
@@ -530,6 +530,9 @@ interface StripedRead {
  * span 6 DNs, so this default value accommodates 3 read streams
  */
 int THREADPOOL_SIZE_DEFAULT = 18;
+
+String WEAK_REF_BUFFER_POOL_KEY = PREFIX + 
"bufferpool.weak.references.enabled";
+boolean WEAK_REF_BUFFER_POOL_DEFAULT = false;

Review Comment:
   Please also add this default config to core-default.xml.



##
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSClient.java:
##
@@ -3159,6 +3165,21 @@ private void initThreadsNumForStripedReads(int 
numThreads) {
 }
   }
 
+  private void initBufferPoolForStripedReads(boolean useWeakReference) {
+if (STRIPED_READ_BUFFER_POOL != null) {
+  return;
+}
+synchronized (DFSClient.class) {

Review Comment:
   What is this `synchronized` intended to protect?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17401. EC: Excess internal block may not be able to be deleted correctly when it's stored in fallback storage [hadoop]

2024-03-07 Thread via GitHub


haiyang1987 commented on code in PR #6597:
URL: https://github.com/apache/hadoop/pull/6597#discussion_r1516059534


##
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestReconstructStripedBlocks.java:
##
@@ -575,5 +576,82 @@ public void testReconstructionWithStorageTypeNotEnough() 
throws Exception {
   cluster.shutdown();
 }
   }
+  @Test
+  public void testDeleteOverReplicatedStripedBlock() throws Exception {
+final HdfsConfiguration conf = new HdfsConfiguration();
+conf.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
+conf.setBoolean(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_CONSIDERLOAD_KEY,
+false);
+StorageType[][] st = new StorageType[groupSize + 2][1];
+for (int i = 0;i < st.length-1;i++){
+  st[i] = new StorageType[]{StorageType.SSD};
+}
+st[st.length -1] = new StorageType[]{StorageType.DISK};
+
+cluster = new MiniDFSCluster.Builder(conf).numDataNodes(groupSize + 2)
+.storagesPerDatanode(1)
+.storageTypes(st)
+.build();
+cluster.waitActive();
+DistributedFileSystem fs = cluster.getFileSystem();
+fs.enableErasureCodingPolicy(
+StripedFileTestUtil.getDefaultECPolicy().getName());
+try {
+  fs.mkdirs(dirPath);
+  fs.setErasureCodingPolicy(dirPath,
+  StripedFileTestUtil.getDefaultECPolicy().getName());
+  fs.setStoragePolicy(dirPath, HdfsConstants.ALLSSD_STORAGE_POLICY_NAME);
+  DFSTestUtil.createFile(fs, filePath,
+  cellSize * dataBlocks * 2, (short) 1, 0L);
+  FSNamesystem fsn3 = cluster.getNamesystem();
+  BlockManager bm3 = fsn3.getBlockManager();
+  // stop a dn

Review Comment:
   The first letter should be uppercase~



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2024-03-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824381#comment-17824381
 ] 

ASF GitHub Bot commented on HADOOP-15984:
-

hadoop-yetus commented on PR #6606:
URL: https://github.com/apache/hadoop/pull/6606#issuecomment-1983367997

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 75 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   5m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  31m  1s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  26m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |  24m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   2m 40s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/5/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   0m 55s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/5/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   1m 32s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/5/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   1m  7s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/5/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 in trunk has 1 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   0m 33s |  |  
branch/hadoop-client-modules/hadoop-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 34s |  |  
branch/hadoop-client-modules/hadoop-client-api no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 33s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 34s |  |  
branch/hadoop-client-modules/hadoop-client-check-invariants no spotbugs output 
file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 34s |  |  
branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 35s |  |  
branch/hadoop-client-modules/hadoop-client-integration-tests no spotbugs output 
file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 35s |  |  
branch/hadoop-cloud-storage-project/hadoop-cloud-storage no spotbugs output 
file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  36m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  37m 21s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | 

Re: [PR] HADOOP-15984. Update jersey from 1.19 to 2.x [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6606:
URL: https://github.com/apache/hadoop/pull/6606#issuecomment-1983367997

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  1s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 75 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 30s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  18m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |  16m 53s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   5m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  31m  1s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |  26m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |  24m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 37s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   2m 40s | 
[/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/5/artifact/out/branch-spotbugs-hadoop-common-project_hadoop-common-warnings.html)
 |  hadoop-common-project/hadoop-common in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   0m 55s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/5/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-httpfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-httpfs in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   1m 32s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/5/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-rbf in trunk has 1 extant spotbugs 
warnings.  |
   | -1 :x: |  spotbugs  |   1m  7s | 
[/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6606/5/artifact/out/branch-spotbugs-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-applications_hadoop-yarn-services_hadoop-yarn-services-core-warnings.html)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core
 in trunk has 1 extant spotbugs warnings.  |
   | +0 :ok: |  spotbugs  |   0m 33s |  |  
branch/hadoop-client-modules/hadoop-client no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 34s |  |  
branch/hadoop-client-modules/hadoop-client-api no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 33s |  |  
branch/hadoop-client-modules/hadoop-client-runtime no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 34s |  |  
branch/hadoop-client-modules/hadoop-client-check-invariants no spotbugs output 
file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 34s |  |  
branch/hadoop-client-modules/hadoop-client-minicluster no spotbugs output file 
(spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 35s |  |  
branch/hadoop-client-modules/hadoop-client-integration-tests no spotbugs output 
file (spotbugsXml.xml)  |
   | +0 :ok: |  spotbugs  |   0m 35s |  |  
branch/hadoop-cloud-storage-project/hadoop-cloud-storage no spotbugs output 
file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  36m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  37m 21s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m 27s |  |  Maven dependency ordering for patch  |
   | -1 :x: |  mvninstall  |   0m 41s | 

Re: [PR] YARN-11626. Optimize ResourceManager's operations on Zookeeper metadata [hadoop]

2024-03-07 Thread via GitHub


XbaoWu commented on PR #6577:
URL: https://github.com/apache/hadoop/pull/6577#issuecomment-1983324064

   > Hi @XbaoWu Please submit the PR to trunk first; if it is approved and 
committed to trunk, then backport to other active branches if necessary.
   
   Okay, thank you for your reminder


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[PR] YARN-11626. Optimize ResourceManager's operations on Zookeeper metadata [hadoop]

2024-03-07 Thread via GitHub


XbaoWu opened a new pull request, #6616:
URL: https://github.com/apache/hadoop/pull/6616

   
   
   ### Description of PR
   
   For more information about this PR, please refer to the following issue:
   [YARN-11626](https://issues.apache.org/jira/browse/YARN-11626) Optimization 
of the safeDelete operation in ZKRMStateStore
   
   The NoNodeException clearly indicates that the znode no longer exists, so if 
we check again and confirm that the node is indeed gone, we can safely ignore 
this exception and avoid the larger cluster impact that a ResourceManager 
failover would cause.
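   Roughly sketched with the plain ZooKeeper client API (names are placeholders, 
not the actual ZKRMStateStore#safeDelete), the idea looks like this:
   
   ```
   // Sketch: treat "node already gone" as success when deleting a znode.
   // Placeholder code illustrating the description above, not the real patch.
   import org.apache.zookeeper.KeeperException;
   import org.apache.zookeeper.ZooKeeper;
   
   final class SafeDeleteSketch {
     private SafeDeleteSketch() {
     }
   
     static void safeDelete(ZooKeeper zk, String path) throws Exception {
       try {
         zk.delete(path, -1);                       // -1 matches any version
       } catch (KeeperException.NoNodeException e) {
         // Re-check: if the node really is gone, the desired end state is
         // already reached, so swallow the exception instead of letting it
         // escalate into a ResourceManager failover.
         if (zk.exists(path, false) == null) {
           return;
         }
         throw e;
       }
     }
   }
   ```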
   ### How was this patch tested?
   
   add TestCheckRemoveZKNodeRMStateStore.testSafeDeleteZKNode()
   
   ### For code changes:
   
   - [x] Does the title or this PR starts with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [x] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [x] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17380. FsImageValidation: remove inaccessible nodes. [hadoop]

2024-03-07 Thread via GitHub


Hexiaoqiao commented on PR #6549:
URL: https://github.com/apache/hadoop/pull/6549#issuecomment-1983209877

   Hi @szetszwo , Thanks for your work. I am not sure this is a safe operation. 
At least 2 checkpoints are retained by default 
(dfs.namenode.num.checkpoints.retained), and it is generally configured to more 
than the default value in production environments. IMO, if one fsimage file is 
corrupted we should recover from the other fsimages first, rather than removing 
inaccessible nodes and then recovering. I am afraid this will not be acceptable 
in most cases. Thanks again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17299. Adding rack failure tolerance when creating a new file [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6612:
URL: https://github.com/apache/hadoop/pull/6612#issuecomment-1983180401

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 59s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 27s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   2m 14s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 37s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  branch-3.3 passed  |
   | -1 :x: |  spotbugs  |   1m 26s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6612/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-client-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs-client in branch-3.3 has 2 extant spotbugs 
warnings.  |
   | +1 :green_heart: |  shadedclient  |  23m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 21s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 33s |  |  hadoop-hdfs-project: The 
patch generated 0 new + 249 unchanged - 3 fixed = 249 total (was 252)  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 49s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | -1 :x: |  unit  | 172m 34s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6612/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 276m 57s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.protocol.TestBlockListAsLongs |
   |   | hadoop.hdfs.server.mover.TestMover |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.TestLeaseRecovery2 |
   |   | hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   |   | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6612/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6612 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 8029685ad3de 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / 5d4a6ed957d86f85618f70f27d11f6077336b16f |
   | Default Java | Private Build-1.8.0_362-8u372-ga~us1-0ubuntu1~18.04-b09 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6612/2/testReport/ |
   | Max. process+thread count | 4424 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-client 
hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6612/2/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.

[jira] [Commented] (HADOOP-19090) Update Protocol Buffers installation to 3.23.4

2024-03-07 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824341#comment-17824341
 ] 

ASF GitHub Bot commented on HADOOP-19090:
-

ayushtkn merged PR #6593:
URL: https://github.com/apache/hadoop/pull/6593




> Update Protocol Buffers installation to 3.23.4
> --
>
> Key: HADOOP-19090
> URL: https://issues.apache.org/jira/browse/HADOOP-19090
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We are seeing issues with Java 8 usage of protobuf-java
> See https://issues.apache.org/jira/browse/HADOOP-18197 and comments about
> java.lang.NoSuchMethodError: 
> java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
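
For background (an editorial note, not part of the quoted Jira description): 
this NoSuchMethodError is the well-known covariant-return issue, where code 
compiled against JDK 9+ class files without --release 8 links to 
ByteBuffer.position(int) returning ByteBuffer, a method that does not exist in 
the Java 8 runtime. A minimal illustration, unrelated to the Hadoop or protobuf 
sources:

```
// Minimal illustration of the ByteBuffer covariant-return pitfall.
import java.nio.Buffer;
import java.nio.ByteBuffer;

public class ByteBufferPositionDemo {
  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.allocate(16);

    // Compiled on JDK 9+ without --release 8, this call links against the
    // covariant ByteBuffer.position(int) override and fails on a Java 8
    // runtime with:
    //   java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
    buf.position(8);

    // Portable form: call through the Buffer supertype so the method
    // descriptor also resolves on Java 8.
    ((Buffer) buf).position(8);
  }
}
```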



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-19090) Update Protocol Buffers installation to 3.23.4

2024-03-07 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HADOOP-19090.
---
Fix Version/s: 3.4.1
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Update Protocol Buffers installation to 3.23.4
> --
>
> Key: HADOOP-19090
> URL: https://issues.apache.org/jira/browse/HADOOP-19090
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We are seeing issues with Java 8 usage of protobuf-java
> See https://issues.apache.org/jira/browse/HADOOP-18197 and comments about
> java.lang.NoSuchMethodError: 
> java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-19090) Update Protocol Buffers installation to 3.23.4

2024-03-07 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-19090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824343#comment-17824343
 ] 

Ayush Saxena commented on HADOOP-19090:
---

Committed to trunk.

Thanx [~pj.fanning] for the contribution & [~hexiaoqiao] for the review!!!

> Update Protocol Buffers installation to 3.23.4
> --
>
> Key: HADOOP-19090
> URL: https://issues.apache.org/jira/browse/HADOOP-19090
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.4.0, 3.3.9
>Reporter: PJ Fanning
>Assignee: PJ Fanning
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We are seeing issues with Java 8 usage of protobuf-java
> See https://issues.apache.org/jira/browse/HADOOP-18197 and comments about
> java.lang.NoSuchMethodError: 
> java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HADOOP-19090. Use protobuf-java 3.23.4. [hadoop]

2024-03-07 Thread via GitHub


ayushtkn merged PR #6593:
URL: https://github.com/apache/hadoop/pull/6593


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] YARN-11626. Optimize ResourceManager's operations on Zookeeper metadata [hadoop]

2024-03-07 Thread via GitHub


Hexiaoqiao commented on PR #6577:
URL: https://github.com/apache/hadoop/pull/6577#issuecomment-1983061802

   Hi @XbaoWu Please submit the PR to trunk first; if it is approved and 
committed to trunk, then backport to other active branches if necessary.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17408. Reduce quota calculation times in FSDirRenameOp. [hadoop]

2024-03-07 Thread via GitHub


Hexiaoqiao commented on PR #6608:
URL: https://github.com/apache/hadoop/pull/6608#issuecomment-1983018540

   A minor point: it would be helpful for reviewers if you add some description 
of the background and goal of this improvement. Including benchmark results 
would be even better.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17408. Reduce quota calculation times in FSDirRenameOp. [hadoop]

2024-03-07 Thread via GitHub


Hexiaoqiao commented on PR #6608:
URL: https://github.com/apache/hadoop/pull/6608#issuecomment-1983003619

   Thanks @ThinkerLei for your work. It's a great performance improvement! The 
last CI run wasn't clean, so let's trigger it again and wait to see what it says.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17146. [Addendum] Enhance test readability with assertJ. [hadoop]

2024-03-07 Thread via GitHub


hadoop-yetus commented on PR #6595:
URL: https://github.com/apache/hadoop/pull/6595#issuecomment-1982845035

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 22s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_392-8u392-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  20m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 206m  8s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 294m 12s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestLargeBlockReport |
   |   | hadoop.hdfs.protocol.TestBlockListAsLongs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.44 ServerAPI=1.44 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/6595 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 92fc8280315c 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 
15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8a47b5fee635b96071b99ac3b460e852cf25a6d5 |
   | Default Java | Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/6/testReport/ |
   | Max. process+thread count | 3963 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6595/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.

Re: [PR] HDFS-17299. Adding rack failure tolerance when creating a new file (… [hadoop]

2024-03-07 Thread via GitHub


Hexiaoqiao commented on PR #6613:
URL: https://github.com/apache/hadoop/pull/6613#issuecomment-1982836474

   Hi @ritegarg Thanks for your PR. branch-3.2 is EOL, so we should not submit 
PRs against it. I will close this one. Please feel free to reopen it if I 
missed something. Thanks again.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



Re: [PR] HDFS-17299. Adding rack failure tolerance when creating a new file (… [hadoop]

2024-03-07 Thread via GitHub


Hexiaoqiao closed pull request #6613: HDFS-17299. Adding rack failure tolerance 
when creating a new file  (…
URL: https://github.com/apache/hadoop/pull/6613


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org