[GitHub] [hadoop] hadoop-yetus commented on pull request #4383: HADOOP-18258. Merging of S3A Audit Logs

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#issuecomment-1438050586

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  26m 38s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 30s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 115m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4383 |
   | Optional Tests | dupname asflicense codespell detsecrets xmllint compile 
javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle |
   | uname | Linux 4a270bfa10c8 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3ab4e4fde275865cb699545d739e8b44c646f0e7 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/16/testReport/ |
   | Max. process+thread count | 542 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/16/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

[jira] [Commented] (HADOOP-18258) Merging of S3A Audit Logs

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691461#comment-17691461
 ] 

ASF GitHub Bot commented on HADOOP-18258:
-

hadoop-yetus commented on PR #4383:
URL: https://github.com/apache/hadoop/pull/4383#issuecomment-1438050586

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  48m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  26m 38s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 30s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 115m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/16/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4383 |
   | Optional Tests | dupname asflicense codespell detsecrets xmllint compile 
javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle |
   | uname | Linux 4a270bfa10c8 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3ab4e4fde275865cb699545d739e8b44c646f0e7 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/16/testReport/ |
   | Max. process+thread count | 542 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4383/16/console |
   | ve

[jira] [Created] (HADOOP-18637) S3A to support upload of files greater than 2 GB using DiskBlocks

2023-02-21 Thread Harshit Gupta (Jira)
Harshit Gupta created HADOOP-18637:
--

 Summary: S3A to support upload of files greater than 2 GB using 
DiskBlocks
 Key: HADOOP-18637
 URL: https://issues.apache.org/jira/browse/HADOOP-18637
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Harshit Gupta
Assignee: Harshit Gupta


Use S3A DiskBlocks to support the upload of files greater than 2 GB. Currently, 
the maximum upload size of a single block is ~2 GB. 

cc: [~mthakur] [~ste...@apache.org] [~mehakmeet] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sreeb-msft commented on a diff in pull request #5399: HADOOP-18632: [ABFS] Customize and optimize timeouts made based on each separate request

2023-02-21 Thread via GitHub


sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1112771893


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+  AbfsConfiguration abfsConfiguration;
+  private URL url;
+  private AbfsRestOperationType opType;
+  private ExponentialRetryPolicy retryPolicy;
+  private int requestTimeout;
+  private int readTimeout = -1;
+  private int connTimeout = -1;
+  private int maxReqTimeout;
+  private int timeoutIncRate;
+  private boolean shouldOptimizeTimeout;
+
+  public TimeoutOptimizer(URL url, AbfsRestOperationType opType, ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+    this.url = url;
+    this.opType = opType;
+    if (opType != null) {
+      this.retryPolicy = retryPolicy;
+      this.abfsConfiguration = abfsConfiguration;
+      if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+        this.shouldOptimizeTimeout = false;
+      }
+      else {
+        this.shouldOptimizeTimeout = Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+      }
+      if (this.shouldOptimizeTimeout) {
+        this.maxReqTimeout = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+        this.timeoutIncRate = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+        initTimeouts();
+        updateUrl();
+      }
+
+    } else {
+      this.shouldOptimizeTimeout = false;
+    }
+  }
+
+  public void updateRetryTimeout(int retryCount) {
+    if (!this.shouldOptimizeTimeout) {
+      return;
+    }
+
+    // update all timeout values
+    updateTimeouts(retryCount);
+    updateUrl();
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+  public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+  public int getRequestTimeout() { return requestTimeout; }
+
+  public int getReadTimeout() {
+    return readTimeout;
+  }
+
+  public int getReadTimeout(final int defaultTimeout) {
+    if (readTimeout != -1 && shouldOptimizeTimeout) {
+      return readTimeout;
+    }
+    return defaultTimeout;
+  }
+
+  public int getConnTimeout() {
+    return connTimeout;
+  }
+
+  public int getConnTimeout(final int defaultTimeout) {
+    if (connTimeout == -1) {
+      return defaultTimeout;
+    }
+    return connTimeout;
+  }
+
+  private void initTimeouts() {
+    if (!shouldOptimizeTimeout) {
+      requestTimeout = -1;
+      readTimeout = -1;
+      connTimeout = -1;
+      return;
+    }
+
+    String query = url.getQuery();
+    int timeoutPos = query.indexOf("timeout");
+    if (timeoutPos < 0) {

Review Comment:
   Understood, thanks! 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18632) ABFS: Customize and optimize timeouts made based on each separate request

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691470#comment-17691470
 ] 

ASF GitHub Bot commented on HADOOP-18632:
-

sreeb-msft commented on code in PR #5399:
URL: https://github.com/apache/hadoop/pull/5399#discussion_r1112771893


##
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/TimeoutOptimizer.java:
##
@@ -0,0 +1,227 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services;
+
+import org.apache.hadoop.fs.azurebfs.AbfsConfiguration;
+import org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys;
+import org.apache.hadoop.fs.azurebfs.constants.HttpQueryParams;
+import org.apache.http.client.utils.URIBuilder;
+
+import java.net.MalformedURLException;
+import java.net.URISyntaxException;
+import java.net.URL;
+
+import static 
org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_TIMEOUT;
+
+public class TimeoutOptimizer {
+  AbfsConfiguration abfsConfiguration;
+  private URL url;
+  private AbfsRestOperationType opType;
+  private ExponentialRetryPolicy retryPolicy;
+  private int requestTimeout;
+  private int readTimeout = -1;
+  private int connTimeout = -1;
+  private int maxReqTimeout;
+  private int timeoutIncRate;
+  private boolean shouldOptimizeTimeout;
+
+  public TimeoutOptimizer(URL url, AbfsRestOperationType opType, ExponentialRetryPolicy retryPolicy, AbfsConfiguration abfsConfiguration) {
+    this.url = url;
+    this.opType = opType;
+    if (opType != null) {
+      this.retryPolicy = retryPolicy;
+      this.abfsConfiguration = abfsConfiguration;
+      if (abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS) == null) {
+        this.shouldOptimizeTimeout = false;
+      }
+      else {
+        this.shouldOptimizeTimeout = Boolean.parseBoolean(abfsConfiguration.get(ConfigurationKeys.AZURE_OPTIMIZE_TIMEOUTS));
+      }
+      if (this.shouldOptimizeTimeout) {
+        this.maxReqTimeout = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_MAX_REQUEST_TIMEOUT));
+        this.timeoutIncRate = Integer.parseInt(abfsConfiguration.get(ConfigurationKeys.AZURE_REQUEST_TIMEOUT_INCREASE_RATE));
+        initTimeouts();
+        updateUrl();
+      }
+
+    } else {
+      this.shouldOptimizeTimeout = false;
+    }
+  }
+
+  public void updateRetryTimeout(int retryCount) {
+    if (!this.shouldOptimizeTimeout) {
+      return;
+    }
+
+    // update all timeout values
+    updateTimeouts(retryCount);
+    updateUrl();
+  }
+
+  public URL getUrl() {
+    return url;
+  }
+  public boolean getShouldOptimizeTimeout() { return this.shouldOptimizeTimeout; }
+
+  public int getRequestTimeout() { return requestTimeout; }
+
+  public int getReadTimeout() {
+    return readTimeout;
+  }
+
+  public int getReadTimeout(final int defaultTimeout) {
+    if (readTimeout != -1 && shouldOptimizeTimeout) {
+      return readTimeout;
+    }
+    return defaultTimeout;
+  }
+
+  public int getConnTimeout() {
+    return connTimeout;
+  }
+
+  public int getConnTimeout(final int defaultTimeout) {
+    if (connTimeout == -1) {
+      return defaultTimeout;
+    }
+    return connTimeout;
+  }
+
+  private void initTimeouts() {
+    if (!shouldOptimizeTimeout) {
+      requestTimeout = -1;
+      readTimeout = -1;
+      connTimeout = -1;
+      return;
+    }
+
+    String query = url.getQuery();
+    int timeoutPos = query.indexOf("timeout");
+    if (timeoutPos < 0) {

Review Comment:
   Understood, thanks! 





> ABFS: Customize and optimize timeouts made based on each separate request
> -
>
> Key: HADOOP-18632
> URL: https://issues.apache.org/jira/browse/HADOOP-18632
> 

[jira] [Commented] (HADOOP-18622) Upgrade ant to 1.10.13

2023-02-21 Thread Aleksandr Nikolaev (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691476#comment-17691476
 ] 

Aleksandr Nikolaev commented on HADOOP-18622:
-

[~ayushtkn] could you make a decision on the ticket and close it? I haven't 
received any response from [~ayushtkn] yet.

> Upgrade ant to 1.10.13
> --
>
> Key: HADOOP-18622
> URL: https://issues.apache.org/jira/browse/HADOOP-18622
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Ashutosh Gupta
>Priority: Major
>  Labels: pull-request-available
> Attachments: hadoop_dep.log
>
>
> Vulnerabilities reported in org.apache.ant:ant:1.10.11
>  * 
> [CVE-2022-23437|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23437]
>  * 
> [CVE-2020-14338|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14338]
> suggested: org.apache.ant:ant ~> 1.10.13



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18637) S3A to support upload of files greater than 2 GB using DiskBlocks

2023-02-21 Thread Daniel Carl Jones (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Carl Jones updated HADOOP-18637:
---
Component/s: fs/s3

> S3A to support upload of files greater than 2 GB using DiskBlocks
> -
>
> Key: HADOOP-18637
> URL: https://issues.apache.org/jira/browse/HADOOP-18637
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Harshit Gupta
>Assignee: Harshit Gupta
>Priority: Major
>
> Use S3A DiskBlocks to support the upload of files greater than 2 GB. 
> Currently, the maximum upload size of a single block is ~2 GB. 
> cc: [~mthakur] [~ste...@apache.org] [~mehakmeet] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hfutatzhanghb opened a new pull request, #5419: HDFS-16928. Both getCurrentEditLogTxid and getEditsFromTxid should be OperationCategory.WRITE

2023-02-21 Thread via GitHub


hfutatzhanghb opened a new pull request, #5419:
URL: https://github.com/apache/hadoop/pull/5419

   please see https://issues.apache.org/jira/browse/HDFS-16928


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18637) S3A to support upload of files greater than 2 GB using DiskBlocks

2023-02-21 Thread Harshit Gupta (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691517#comment-17691517
 ] 

Harshit Gupta commented on HADOOP-18637:


S3ABlockOutputStream creates three different types of DataBlock depending on 
{{fs.s3a.fast.upload.buffer}}, which defaults to disk. For disk blocks we can 
create an empty file of the required size and limit the buffer size to 
{{Integer.MAX_VALUE}}. *For the other buffer types, should we deny uploads 
larger than 2 GB, or should we add the support there as well?* For example, 
{{ByteArrayBlock}} writes directly to {{S3AByteArrayOutputStream}}, which would 
again be initialized with {{Integer.MAX_VALUE}}; the same applies to 
{{ByteBufferBlock}}. One thing to make sure of is that a single write call is 
never larger than {{Integer.MAX_VALUE}}, since the calling method in 
S3ABlockOutputStream has the signature 
{{public synchronized void write(byte[] source, int offset, int len)}}.

*This is just for compatibility with non-AWS S3 stores.*
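
A minimal, self-contained sketch of the size guard being discussed (hypothetical 
class and method names, not the actual S3ABlockOutputStream code; "disk", 
"array" and "bytebuffer" stand in for the {{fs.s3a.fast.upload.buffer}} 
options):

{code:java}
// Hypothetical sketch only: illustrates the int-indexing limit that caps an
// in-memory block at Integer.MAX_VALUE bytes (~2 GB) and a possible guard for
// the non-disk buffer types.
public final class BlockSizeGuardSketch {

  // Java byte arrays and ByteBuffers are indexed by int.
  private static final long MAX_IN_MEMORY_BLOCK_BYTES = Integer.MAX_VALUE;

  /**
   * @param bufferType assumed values: "disk", "array" or "bytebuffer"
   * @param requestedBytes size of the block the caller wants to buffer
   * @return true if the block can be buffered with this buffer type
   */
  static boolean canBuffer(String bufferType, long requestedBytes) {
    if ("disk".equals(bufferType)) {
      // Disk-backed blocks are bounded by free disk space, not by int indexing.
      return true;
    }
    // In-memory buffers share the int-indexed ~2 GB ceiling.
    return requestedBytes <= MAX_IN_MEMORY_BLOCK_BYTES;
  }

  public static void main(String[] args) {
    long threeGiB = 3L * 1024 * 1024 * 1024;
    System.out.println(canBuffer("array", threeGiB)); // false
    System.out.println(canBuffer("disk", threeGiB));  // true
  }
}
{code}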

> S3A to support upload of files greater than 2 GB using DiskBlocks
> -
>
> Key: HADOOP-18637
> URL: https://issues.apache.org/jira/browse/HADOOP-18637
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Harshit Gupta
>Assignee: Harshit Gupta
>Priority: Major
>
> Use S3A DiskBlocks to support the upload of files greater than 2 GB. 
> Currently, the maximum upload size of a single block is ~2 GB. 
> cc: [~mthakur] [~ste...@apache.org] [~mehakmeet] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-18622) Upgrade ant to 1.10.13

2023-02-21 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena reassigned HADOOP-18622:
-

Assignee: Aleksandr Nikolaev  (was: Ashutosh Gupta)

> Upgrade ant to 1.10.13
> --
>
> Key: HADOOP-18622
> URL: https://issues.apache.org/jira/browse/HADOOP-18622
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Aleksandr Nikolaev
>Priority: Major
>  Labels: pull-request-available
> Attachments: hadoop_dep.log
>
>
> Vulnerabilities reported in org.apache.ant:ant:1.10.11
>  * 
> [CVE-2022-23437|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23437]
>  * 
> [CVE-2020-14338|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14338]
> suggested: org.apache.ant:ant ~> 1.10.13



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn merged pull request #5360: HADOOP-18622. Upgrade ant to 1.10.13

2023-02-21 Thread via GitHub


ayushtkn merged PR #5360:
URL: https://github.com/apache/hadoop/pull/5360


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18622) Upgrade ant to 1.10.13

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691527#comment-17691527
 ] 

ASF GitHub Bot commented on HADOOP-18622:
-

ayushtkn merged PR #5360:
URL: https://github.com/apache/hadoop/pull/5360




> Upgrade ant to 1.10.13
> --
>
> Key: HADOOP-18622
> URL: https://issues.apache.org/jira/browse/HADOOP-18622
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Aleksandr Nikolaev
>Priority: Major
>  Labels: pull-request-available
> Attachments: hadoop_dep.log
>
>
> Vulnerabilities reported in org.apache.ant:ant:1.10.11
>  * 
> [CVE-2022-23437|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23437]
>  * 
> [CVE-2020-14338|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14338]
> suggested: org.apache.ant:ant ~> 1.10.13



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18622) Upgrade ant to 1.10.13

2023-02-21 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691528#comment-17691528
 ] 

Ayush Saxena commented on HADOOP-18622:
---

Committed to trunk.
Thanks [~aonikolaev] for the contribution!

I have added [~aonikolaev] as a Hadoop Common contributor in order to assign the 
ticket.


> Upgrade ant to 1.10.13
> --
>
> Key: HADOOP-18622
> URL: https://issues.apache.org/jira/browse/HADOOP-18622
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Aleksandr Nikolaev
>Priority: Major
>  Labels: pull-request-available
> Attachments: hadoop_dep.log
>
>
> Vulnerabilities reported in org.apache.ant:ant:1.10.11
>  * 
> [CVE-2022-23437|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23437]
>  * 
> [CVE-2020-14338|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14338]
> suggested: org.apache.ant:ant ~> 1.10.13



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18622) Upgrade ant to 1.10.13

2023-02-21 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena updated HADOOP-18622:
--
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Upgrade ant to 1.10.13
> --
>
> Key: HADOOP-18622
> URL: https://issues.apache.org/jira/browse/HADOOP-18622
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Aleksandr Nikolaev
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: hadoop_dep.log
>
>
> Vulnerabilities reported in org.apache.ant:ant:1.10.11
>  * 
> [CVE-2022-23437|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23437]
>  * 
> [CVE-2020-14338|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14338]
> suggested: org.apache.ant:ant ~> 1.10.13



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #4435: YARN-11178. Avoid CPU busy idling and resource wasting in DelegationTokenRenewerPoolTracker thread

2023-02-21 Thread via GitHub


slfan1989 commented on PR #4435:
URL: https://github.com/apache/hadoop/pull/4435#issuecomment-1438320061

   @LennonChin Is this JUnit test failure related to this PR? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #5288: YARN-11394. Fix hadoop-yarn-server-resourcemanager module Java Doc Errors.

2023-02-21 Thread via GitHub


slfan1989 commented on PR #5288:
URL: https://github.com/apache/hadoop/pull/5288#issuecomment-1438320401

   @steveloughran Can you help review this pr? Thank you very much!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 opened a new pull request, #5420: YARN-11442. Refactor FederationInterceptorREST Code.

2023-02-21 Thread via GitHub


slfan1989 opened a new pull request, #5420:
URL: https://github.com/apache/hadoop/pull/5420

   JIRA: YARN-11442. Refactor FederationInterceptorREST Code.
   
   Fix code issues to make the code more readable:
   1. Add a getOrCreateInterceptorForSubCluster method that takes a parameter 
of type SubClusterInfo.
   2. Delete FederationInterceptorREST#getActiveSubclusters and add 
getActiveSubClusters to FederationStateStoreFacade, which returns the result 
as a Collection.
   3. Parallelize the query code by using the Collection parallelStream method 
instead of a thread pool (see the sketch after this list).
   4. Fix checkstyle issues.
   5. Improve code comments.
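   
   A rough sketch of the direction in item 3 (invented class and method names, 
not the actual FederationInterceptorREST code): fan the per-sub-cluster calls 
out with parallelStream() rather than managing an explicit thread pool.
   
   ```java
   // Hypothetical illustration only.
   import java.util.Arrays;
   import java.util.List;
   import java.util.Map;
   import java.util.stream.Collectors;
   
   public class ParallelQuerySketch {
   
     // Stand-in for a REST call made against one sub-cluster.
     static String querySubCluster(String subClusterId) {
       return "apps@" + subClusterId;
     }
   
     // The common ForkJoinPool runs the calls in parallel; no ExecutorService
     // creation, submission, or shutdown boilerplate is needed.
     static Map<String, String> queryAll(List<String> subClusterIds) {
       return subClusterIds.parallelStream()
           .collect(Collectors.toMap(id -> id, ParallelQuerySketch::querySubCluster));
     }
   
     public static void main(String[] args) {
       System.out.println(queryAll(Arrays.asList("sc1", "sc2")));
     }
   }
   ```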


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#issuecomment-1438422789

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  30m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | -1 :x: |  javac  |   1m 29s | 
[/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/1/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  
hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 5 new + 910 
unchanged - 0 fixed = 915 total (was 910)  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  javac  |   1m 22s | 
[/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/1/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 
with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 5 new + 889 
unchanged - 0 fixed = 894 total (was 889)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 59s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 486 unchanged 
- 21 fixed = 488 total (was 507)  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  30m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 220m  7s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 354m 29s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/

[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691581#comment-17691581
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

hadoop-yetus commented on PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#issuecomment-1438422789

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 6 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m  0s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  30m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | -1 :x: |  javac  |   1m 29s | 
[/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/1/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  
hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 5 new + 910 
unchanged - 0 fixed = 915 total (was 910)  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  javac  |   1m 22s | 
[/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/1/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  
hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 
with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 5 new + 889 
unchanged - 0 fixed = 894 total (was 889)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 59s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 486 unchanged 
- 21 fixed = 488 total (was 507)  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 54s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  30m 19s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 220m  7s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 354m 29s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner 

[jira] [Commented] (HADOOP-18622) Upgrade ant to 1.10.13

2023-02-21 Thread Aleksandr Nikolaev (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691598#comment-17691598
 ] 

Aleksandr Nikolaev commented on HADOOP-18622:
-

[~ayushtkn]  Thanks! 

> Upgrade ant to 1.10.13
> --
>
> Key: HADOOP-18622
> URL: https://issues.apache.org/jira/browse/HADOOP-18622
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Aleksandr Nikolaev
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: hadoop_dep.log
>
>
> Vulnerabilities reported in org.apache.ant:ant:1.10.11
>  * 
> [CVE-2022-23437|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23437]
>  * 
> [CVE-2020-14338|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14338]
> suggested: org.apache.ant:ant ~> 1.10.13



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-18622) Upgrade ant to 1.10.13

2023-02-21 Thread Aleksandr Nikolaev (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691598#comment-17691598
 ] 

Aleksandr Nikolaev edited comment on HADOOP-18622 at 2/21/23 1:19 PM:
--

[~ayushtkn] Thanks for the review!


was (Author: aonikolaev):
[~ayushtkn]  Thanks! 

> Upgrade ant to 1.10.13
> --
>
> Key: HADOOP-18622
> URL: https://issues.apache.org/jira/browse/HADOOP-18622
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Aleksandr Nikolaev
>Assignee: Aleksandr Nikolaev
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: hadoop_dep.log
>
>
> Vulnerabilities reported in org.apache.ant:ant:1.10.11
>  * 
> [CVE-2022-23437|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-23437]
>  * 
> [CVE-2020-14338|https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14338]
> suggested: org.apache.ant:ant ~> 1.10.13



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5126: YARN-11370. [Federation] Refactor MemoryFederationStateStore code.

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5126:
URL: https://github.com/apache/hadoop/pull/5126#issuecomment-1438493999

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 55s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m 11s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  26m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  2s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 115m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5126/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5126 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 9c89c7a2a0d6 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 85c11313b73c5bd2e5d267bb4475efefd4874d36 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5126/7/testReport/ |
   | Max. process+thread count | 537 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5126/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache

[GitHub] [hadoop] hadoop-yetus commented on pull request #5420: YARN-11442. Refactor FederationInterceptorREST Code.

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5420:
URL: https://github.com/apache/hadoop/pull/5420#issuecomment-1438587846

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  3s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 49s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   3m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 46s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   3m 46s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   3m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 14s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  6s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 33s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 129m 18s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5420/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5420 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 73b6fab0d583 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a6ae2e4caac915e8f6486edd34ba27a5cddc5616 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5420/1/testReport/ |
   | Max. process+thread count | 709 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/

[jira] [Commented] (HADOOP-18470) Release hadoop 3.3.5

2023-02-21 Thread Tobias Stadler (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691637#comment-17691637
 ] 

Tobias Stadler commented on HADOOP-18470:
-

Thank you!

> Release hadoop 3.3.5
> 
>
> Key: HADOOP-18470
> URL: https://issues.apache.org/jira/browse/HADOOP-18470
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Affects Versions: 3.3.5
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Blocker
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17424) Replace HTrace with No-Op tracer

2023-02-21 Thread Siyao Meng (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691638#comment-17691638
 ] 

Siyao Meng commented on HADOOP-17424:
-

Hi [~bpatel], it should be easy to cherry-pick this onto the 3.3.0 branch, as it 
is already in 3.3.2. However, this won't be done upstream, since branch-3.3.0 is 
a released branch and is 
[frozen|https://github.com/apache/hadoop/tree/branch-3.3.0].

If you would like to, you can cherry-pick commit 
1a205cc3adffa568c814a5241e041b08e2fcd3eb onto your fork's branch-3.3.0 and 
compile it.

> Replace HTrace with No-Op tracer
> 
>
> Key: HADOOP-17424
> URL: https://issues.apache.org/jira/browse/HADOOP-17424
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Siyao Meng
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> Remove HTrace dependency as it is depending on old jackson jars. Use a no-op 
> tracer for now to eliminate potential security issues.
> The plan is to move part of the code in 
> [PR#1846|https://github.com/apache/hadoop/pull/1846] out here for faster 
> review.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #5288: YARN-11394. Fix hadoop-yarn-server-resourcemanager module Java Doc Errors.

2023-02-21 Thread via GitHub


steveloughran merged PR #5288:
URL: https://github.com/apache/hadoop/pull/5288


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5288: YARN-11394. Fix hadoop-yarn-server-resourcemanager module Java Doc Errors.

2023-02-21 Thread via GitHub


steveloughran commented on PR #5288:
URL: https://github.com/apache/hadoop/pull/5288#issuecomment-1438602451

   in trunk; cherrypick to branch-3.3 if you can...


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #5412: HADOOP-18636 LocalDirAllocator cannot recover from directory tree deletion

2023-02-21 Thread via GitHub


steveloughran commented on PR #5412:
URL: https://github.com/apache/hadoop/pull/5412#issuecomment-1438609730

   spotbugs was complaining that the mkdirs() result was ignored; logging it at 
debug level to silence the warning.
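
   For context, a minimal sketch of that pattern, assuming a hypothetical helper 
with an slf4j `LOG` field; this is illustrative, not the PR's actual code:

   ```java
   import java.io.IOException;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;

   // Hedged sketch: consume the mkdirs() return value so spotbugs stops flagging
   // it as ignored, logging at debug instead of failing. Names are illustrative.
   class MkdirsDebugLogging {
     private static final Logger LOG =
         LoggerFactory.getLogger(MkdirsDebugLogging.class);

     static void ensureDir(FileSystem localFS, Path dir) throws IOException {
       if (!localFS.mkdirs(dir)) {
         // a false return usually just means the directory already exists
         LOG.debug("mkdirs({}) returned false", dir);
       }
     }
   }
   ```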


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18636) LocalDirAllocator cannot recover from directory tree deletion during the life of a filesystem client

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691644#comment-17691644
 ] 

ASF GitHub Bot commented on HADOOP-18636:
-

steveloughran commented on PR #5412:
URL: https://github.com/apache/hadoop/pull/5412#issuecomment-1438609730

   spotbugs was complaining that the mkdirs() result was ignored; logging it at 
debug level to silence the warning.




> LocalDirAllocator cannot recover from directory tree deletion during the life 
> of a filesystem client
> 
>
> Key: HADOOP-18636
> URL: https://issues.apache.org/jira/browse/HADOOP-18636
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.3.4
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
>  Labels: pull-request-available
>
> The s3a and abfs clients use LocalDirAllocator to allocate files in local 
> (temporary) storage for buffering blocks to write and, for the s3a staging 
> committer, for files being staged.
> When initialized (or when the configuration key value is updated), 
> LocalDirAllocator enumerates all directories in the list and calls 
> {{mkdirs()}} to create them.
> When you actually ask for a file, it looks for the parent dir and calls 
> {{mkdirs()}} again.
> But before it does that, it checks whether the dir has any space; if not, it 
> is excluded from the list of directories with room for data.
> And guess what: directories which don't exist report as having no space, so 
> they get excluded and the recreation code never gets a chance to run.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran closed pull request #5226: HADOOP-18576. hadoop common javadocs

2023-02-21 Thread via GitHub


steveloughran closed pull request #5226: HADOOP-18576. hadoop common javadocs
URL: https://github.com/apache/hadoop/pull/5226


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18576) Java 11 JavaDoc fails due to missing package comments

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691645#comment-17691645
 ] 

ASF GitHub Bot commented on HADOOP-18576:
-

steveloughran closed pull request #5226: HADOOP-18576. hadoop common javadocs
URL: https://github.com/apache/hadoop/pull/5226




> Java 11 JavaDoc fails due to missing package comments
> -
>
> Key: HADOOP-18576
> URL: https://issues.apache.org/jira/browse/HADOOP-18576
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, common
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Vaughan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.9
>
>
> Yetus is failing javadoc builds with import issues on package-info.java files 
> related to the @InterfaceAudience/@InterfaceStability annotations.
> We need to change how these are imported to fix this; other modules have the 
> same issue.
> This is happening because the Java 11 javadoc tool requires a doc comment for 
> every package that carries annotations.
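
A hedged sketch of the kind of package-info.java involved; the package name and 
wording are hypothetical, not taken from the actual patch:

```java
// Hypothetical package-info.java; the real files in the patch will differ.
/**
 * Package-level javadoc: Java 11 javadoc requires a doc comment like this on
 * any annotated package, otherwise the build fails.
 */
@InterfaceAudience.Private
@InterfaceStability.Unstable
package org.apache.hadoop.example;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;
```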



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5403: YARN-11340. [Federation] Improve SQLFederationStateStore DataSource Config.

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5403:
URL: https://github.com/apache/hadoop/pull/5403#issuecomment-1438615216

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 18s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 59s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 53s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  12m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  10m  5s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  7s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  28m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  11m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  11m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  10m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  10m  2s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 43s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5403/4/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 164 unchanged 
- 0 fixed = 165 total (was 164)  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 18s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m  4s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5403/4/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt)
 |  hadoop-yarn-api in the patch passed.  |
   | +1 :green_heart: |  unit  |   3m 23s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 181m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5403/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5403 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 3576c09da156 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3e0b26f86bbe5b51896920b7acee14805f87397e |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd6

[GitHub] [hadoop] hadoop-yetus commented on pull request #5403: YARN-11340. [Federation] Improve SQLFederationStateStore DataSource Config.

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5403:
URL: https://github.com/apache/hadoop/pull/5403#issuecomment-1438617009

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  14m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  35m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  12m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  10m  1s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  11m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  11m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  10m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  10m  2s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m 41s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5403/5/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn.txt)
 |  hadoop-yarn-project/hadoop-yarn: The patch generated 1 new + 164 unchanged 
- 0 fixed = 165 total (was 164)  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  27m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   1m  8s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5403/5/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-api.txt)
 |  hadoop-yarn-api in the patch passed.  |
   | +1 :green_heart: |  unit  |   3m 21s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 180m 32s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.yarn.conf.TestYarnConfigurationFields |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5403/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5403 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 924f7afcdeea 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3e0b26f86bbe5b51896920b7acee14805f87397e |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd6

[GitHub] [hadoop] steveloughran commented on a diff in pull request #4967: HDFS-16791 WIP - client protocol and Filesystem apis implemented and …

2023-02-21 Thread via GitHub


steveloughran commented on code in PR #4967:
URL: https://github.com/apache/hadoop/pull/4967#discussion_r1113169004


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java:
##
@@ -1638,6 +1638,24 @@ public MultipartUploaderBuilder 
createMultipartUploader(Path basePath)
 return null;
   }
 
+  /**
+   * Return path of the enclosing root for a given path
+   * The enclosing root path is a common ancestor that should be used for temp 
and staging dirs
+   * as well as within encryption zones and other restricted directories.
+   *
+   * Call makeQualified on the param path to ensure the param path to ensure 
its part of the correct filesystem
+   *
+   * @param path file path to find the enclosing root path for
+   * @return a path to the enclosing root
+   * @throws IOException
+   */
+  @InterfaceAudience.Public
+  @InterfaceStability.Unstable
+  public Path getEnclosingRoot(Path path) throws IOException {
+this.makeQualified(path);

Review Comment:
   no need for `this.` prefix



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java:
##
@@ -1370,6 +1370,20 @@ public boolean hasPathCapability(Path path, String 
capability)
 }
   }
 
+  @Override
+  public Path getEnclosingRoot(Path path) throws IOException {
+InodeTree.ResolveResult res;
+try {
+  res = fsState.resolve(getUriPath(path), true);
+} catch (FileNotFoundException ex) {
+  throw new NotInMountpointException(path, "getEnclosingRoot");

Review Comment:
   include inner exception as cause
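
   A hedged sketch of what that could look like for the hunk above, assuming the 
NotInMountpointException constructor does not already set a cause (a fragment, 
not the PR's code):

   ```java
   try {
     res = fsState.resolve(getUriPath(path), true);
   } catch (FileNotFoundException ex) {
     NotInMountpointException e =
         new NotInMountpointException(path, "getEnclosingRoot");
     // attach the original exception so its stack trace is not lost;
     // assumes the constructor above leaves the cause unset
     e.initCause(ex);
     throw e;
   }
   ```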



##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:
##
@@ -4904,6 +4904,24 @@ public CompletableFuture build() 
throws IOException {
 
   }
 
+  /**
+   * Return path of the enclosing root for a given path
+   * The enclosing root path is a common ancestor that should be used for temp 
and staging dirs
+   * as well as within encryption zones and other restricted directories.
+   *
+   * Call makeQualified on the param path to ensure the param path to ensure 
its part of the correct filesystem
+   *
+   * @param path file path to find the enclosing root path for
+   * @return a path to the enclosing root
+   * @throws IOException

Review Comment:
   nit: add something here, even if just "IO failure". the more detail the 
better



##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestGetEnclosingRoot.java:
##
@@ -0,0 +1,95 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+import java.security.PrivilegedExceptionAction;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.Test;
+
+import static org.junit.Assert.*;
+
+
+public class TestGetEnclosingRoot {

Review Comment:
   should extend HadoopTestBase for timeouts, thread naming.



##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetEnclosingRoot.java:
##
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.contract;
+
+import java.io.IOException;
+import java.security.PrivilegedExceptionAction;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.security

[GitHub] [hadoop] steveloughran commented on a diff in pull request #4967: HDFS-16791 WIP - client protocol and Filesystem apis implemented and …

2023-02-21 Thread via GitHub


steveloughran commented on code in PR #4967:
URL: https://github.com/apache/hadoop/pull/4967#discussion_r1113171936


##
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java:
##
@@ -1919,6 +1933,19 @@ public Collection 
getAllStoragePolicies()
   }
   return allPolicies;
 }
+
+@Override
+public Path getEnclosingRoot(Path path) throws IOException {
+  InodeTree.ResolveResult res;
+  try {
+res = fsState.resolve((path.toString()), true);
+  } catch (FileNotFoundException ex) {
+throw new NotInMountpointException(path, "getEnclosingRoot");

Review Comment:
   include inner exception as cause



##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractGetEnclosingRoot.java:
##
@@ -0,0 +1,100 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.contract;
+
+import java.io.IOException;
+import java.security.PrivilegedExceptionAction;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+
+public abstract class AbstractContractGetEnclosingRoot extends 
AbstractFSContractTestBase {
+  private static final Logger LOG = 
LoggerFactory.getLogger(AbstractContractGetEnclosingRoot.class);
+
+  @Override
+  public void setup() throws Exception {

Review Comment:
   superfluous; cut



##
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestGetEnclosingRoot.java:
##
@@ -0,0 +1,95 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs;
+
+import java.io.IOException;
+import java.security.PrivilegedExceptionAction;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.junit.Test;
+
+import static org.junit.Assert.*;
+
+
+public class TestGetEnclosingRoot {
+  @Test
+  public void testEnclosingRootEquivalence() throws IOException {
+FileSystem fs = getFileSystem();
+Path root = path("/");
+Path foobar = path("/foo/bar");
+
+assertEquals(fs.getEnclosingRoot(foobar), root);

Review Comment:
   all the assertEquals asserts have their arguments the wrong way round for the 
generated failure message. Please swap them, or move to AssertJ.
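
   A hedged illustration of both options, reusing the names from the snippet 
above and assuming the usual static imports (org.junit.Assert.assertEquals, 
org.assertj.core.api.Assertions.assertThat):

   ```java
   // JUnit: assertEquals(expected, actual) - expected value first, so the
   // generated failure message ("expected:<...> but was:<...>") reads correctly.
   assertEquals(root, fs.getEnclosingRoot(foobar));

   // AssertJ alternative: the actual value is explicit and the description
   // makes the failure message self-explanatory.
   assertThat(fs.getEnclosingRoot(foobar))
       .describedAs("enclosing root of %s", foobar)
       .isEqualTo(root);
   ```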



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5419: HDFS-16928. Both getCurrentEditLogTxid and getEditsFromTxid should be OperationCategory.WRITE

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5419:
URL: https://github.com/apache/hadoop/pull/5419#issuecomment-1438669964

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 32s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 200m 17s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 318m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5419/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5419 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 0a581949f81c 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 954b050371777292407b57dce8d66fc511350019 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5419/1/testReport/ |
   | Max. process+thread count | 3721 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5419/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-uns

[jira] [Updated] (HADOOP-18565) AWS SDK V2 - Complete outstanding items

2023-02-21 Thread Ahmar Suhail (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmar Suhail updated HADOOP-18565:
--
Summary: AWS SDK V2 - Complete outstanding items  (was: AWS SDK V2 - Add in 
support of custom signers)

> AWS SDK V2 - Complete outstanding items
> ---
>
> Key: HADOOP-18565
> URL: https://issues.apache.org/jira/browse/HADOOP-18565
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Priority: Major
>
> S3A allows users configure to custom signers, add in support for this.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18565) AWS SDK V2 - Complete outstanding items

2023-02-21 Thread Ahmar Suhail (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmar Suhail updated HADOOP-18565:
--
Description: 
The following work remains to complete the SDK upgrade:
 * S3A allows users to configure custom signers; add support for this.
 * Remove the SDK V1 bundle dependency.
 * Update `getRegion()` logic to use retries.
 * Add progress listeners for `S3ABlockOutputStream`.
 * Fix any failing tests.

  was:S3A allows users configure to custom signers, add in support for this.


> AWS SDK V2 - Complete outstanding items
> ---
>
> Key: HADOOP-18565
> URL: https://issues.apache.org/jira/browse/HADOOP-18565
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Priority: Major
>
> The following work remains to complete the SDK upgrade:
>  * S3A allows users to configure custom signers; add support for this.
>  * Remove the SDK V1 bundle dependency.
>  * Update `getRegion()` logic to use retries.
>  * Add progress listeners for `S3ABlockOutputStream`.
>  * Fix any failing tests.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ahmarsuhail opened a new pull request, #5421: Feature hadoop 18565 custom signers

2023-02-21 Thread via GitHub


ahmarsuhail opened a new pull request, #5421:
URL: https://github.com/apache/hadoop/pull/5421

   ### Description of PR
   
   Jira: https://issues.apache.org/jira/browse/HADOOP-18565
   
   This PR:
   
   - Removes TM dependency, as it’s in the bundle now
   - Removes SDK V1 bundle dependency, only need `aws-java-sdk-core` now
   - Fixes the `SIGNING_ALGORITHM_STS` identifier; I don’t think this was getting 
picked up correctly until now.
   - Removes V1 client creation and config code. CSE code has been removed too, 
will be added back in once S3EC becomes available in V2.
   - Adds retries to `getRegion()` logic. Also, if no endpoint is specified, 
don’t set the default endpoint. Let the SDK figure it out from the region.
   - You can’t currently add progress listeners to individual operations in the 
SDK. So modify existing progress listeners for MPU and putObject in 
`S3ABlockOutputStream`. 
   - Updates the reflection methods in `S3AUtils` to use generics, so they can be 
used by V1 credential providers, V2 credential providers, and signers.
   - Adds custom signer support. There is no signer factory in V2, so we create 
our own.
   - CopyEncryptionParams logic - Use src object settings if present, otherwise 
encrypt using FS settings.
   - ITestS3AEndpointRegion - There’s no client.getRegionName(), so to get these 
tests to work, I have used execution interceptors that throw an exception to 
exit early.
   - Removes some tests in TestNetworkBinding, looks like they were getting 
skipped anyway.
   - Removes ITestMarkerTool.testRunWrongBucket - I don’t have a good 
understanding of ITestMarkerTool, but I believe that with the new `getRegion()` 
logic, which will throw an `UnknownStoreException`, we no longer need this test. 
It currently fails because it tries to initialise a FS for a bucket which does 
not exist.
   
   ### Still TODO
   
   - Cache regions per FS to prevent multiple head bucket calls, warn if no 
region is configured. 
   - Look at probing logic, and how it can be optimised. As a headBucket call 
may already have been made, we can avoid another probe.
   - GetObjectMetadata() will fail for base path ("/") (empty key), is this ok?
   
   ### How was this patch tested?
   
   Tested in `eu-west-1` by running mvn -Dparallel-tests -DtestsThreadCount=16 
clean verify.
   
   All tests pass now. 
   
   Note: to get all tests to pass, we should now set the bucket-specific region 
in our test configuration, e.g.: 
   
   ```
   
   <property>
     <name>fs.s3a.bucket..endpoint.region</name>
     <value>eu-west-1</value>
   </property>
   
   ```
   
   This is so that tests which rely on public buckets don't fail. Those tests 
probe the bucket to discover its region and configure themselves accordingly. 
If the region was set using `fs.s3a.endpoint.region`, tests which use public 
buckets would also end up using that region, and since there is no cross-region 
support, they would fail. 
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-18638) Encryption behaviour

2023-02-21 Thread Ahmar Suhail (Jira)
Ahmar Suhail created HADOOP-18638:
-

 Summary: Encryption behaviour 
 Key: HADOOP-18638
 URL: https://issues.apache.org/jira/browse/HADOOP-18638
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Ahmar Suhail


When doing a copy, S3A always uses the encryption configuration of the 
filesystem rather than that of the source object. This behaviour may not have 
been intended: `RequestFactoryImpl.copyEncryptionParameters()` does copy the 
source object's encryption properties 
[here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RequestFactoryImpl.java#L336],
 but a missing return statement means it ends up using the FS settings anyway.

 

Proposed:
 * If the copy is called by rename, always preserve source object encryption 
properties. 
 * For all other copies, use current FS encryption settings. 
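
A schematic sketch of the control-flow issue described above; the method shape 
and names are illustrative only, not the actual RequestFactoryImpl code:

```java
// Illustrative sketch: shows how a missing "return" makes the filesystem
// settings win even after the source object's encryption settings were chosen.
class CopyEncryptionSketch {
  String chooseEncryption(String sourceEncryption, String fsEncryption) {
    if (sourceEncryption != null) {
      // source object has its own encryption settings: use them
      return sourceEncryption;  // <- without this return, execution would fall
                                //    through and the FS settings would win
    }
    return fsEncryption;
  }
}
```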



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18638) Encryption behaviour on copy

2023-02-21 Thread Ahmar Suhail (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmar Suhail updated HADOOP-18638:
--
Summary: Encryption behaviour on copy  (was: Encryption behaviour )

> Encryption behaviour on copy
> 
>
> Key: HADOOP-18638
> URL: https://issues.apache.org/jira/browse/HADOOP-18638
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmar Suhail
>Priority: Major
>
> When doing a copy, S3A always uses the encryption configuration of the 
> filesystem rather than that of the source object. This behaviour may not have 
> been intended: `RequestFactoryImpl.copyEncryptionParameters()` does copy the 
> source object's encryption properties 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RequestFactoryImpl.java#L336],
>  but a missing return statement means it ends up using the FS settings anyway.
>  
> Proposed:
>  * If the copy is called by rename, always preserve source object encryption 
> properties. 
>  * For all other copies, use current FS encryption settings. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-18638) Encryption behaviour on copy

2023-02-21 Thread Daniel Carl Jones (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Carl Jones updated HADOOP-18638:
---
Component/s: fs/s3

> Encryption behaviour on copy
> 
>
> Key: HADOOP-18638
> URL: https://issues.apache.org/jira/browse/HADOOP-18638
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Ahmar Suhail
>Priority: Major
>
> When doing a copy, S3A always uses the encryption configuration of the 
> filesystem rather than that of the source object. This behaviour may not have 
> been intended: `RequestFactoryImpl.copyEncryptionParameters()` does copy the 
> source object's encryption properties 
> [here|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/RequestFactoryImpl.java#L336],
>  but a missing return statement means it ends up using the FS settings anyway.
>  
> Proposed:
>  * If the copy is called by rename, always preserve source object encryption 
> properties. 
>  * For all other copies, use current FS encryption settings. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5312: YARN-11375. [Federation] Support refreshAdminAcls、refreshServiceAcls API's for Federation.

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5312:
URL: https://github.com/apache/hadoop/pull/5312#issuecomment-1438784034

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  5s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |   3m 58s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5312/12/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   8m 51s | 
[/branch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5312/12/artifact/out/branch-compile-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  compile  |  12m 20s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 48s |  |  trunk passed  |
   | -1 :x: |  javadoc  |   0m 59s | 
[/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5312/12/artifact/out/branch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in trunk failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.  |
   | +1 :green_heart: |  javadoc  |   3m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   7m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  27m 35s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  27m 57s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  10m 12s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  cc  |  10m 12s |  |  the patch passed  |
   | -1 :x: |  javac  |  10m 12s | 
[/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5312/12/artifact/out/results-compile-javac-hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  
hadoop-yarn-project_hadoop-yarn-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 5 new + 725 
unchanged - 0 fixed = 730 total (was 725)  |
   | +1 :green_heart: |  compile  |   9m 35s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  cc  |   9m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   9m 35s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 38s |  |  
hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 7 unchanged - 9 
fixed = 7 total (was 16)  |
   | +1 :green_heart: |  mvnsite  |   3m 43s |  |  the patch passed  |
   | -1 :x: |  javadoc  |   1m  0s | 
[/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5312/12/artifact/out/patch-javadoc-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  hadoop-yarn-server-resourcemanager in the patch failed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1

[GitHub] [hadoop] virajjasani commented on pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


virajjasani commented on PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#issuecomment-1438895176

   Tests are passing locally. I have added more logging to investigate why they 
are failing in the Jenkins build.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691731#comment-17691731
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

virajjasani commented on PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#issuecomment-1438895176

   Tests are passing locally. I have added more logging to investigate why they 
are failing in the Jenkins build.




> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate async appenders that we 
> add "dynamically in the code" to the log4j.properties file. Instead of using 
> core/hdfs site configs, log4j properties or system properties should be used 
> to determine if the given logger should use async appender.
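
For readers unfamiliar with the pattern being removed, a hedged sketch of what 
adding an async appender "dynamically in the code" looks like with the log4j 1.x 
API; the logger name handling is illustrative, not the exact code used by HDFS:

```java
import java.util.Enumeration;
import org.apache.log4j.Appender;
import org.apache.log4j.AsyncAppender;
import org.apache.log4j.Logger;

// Hedged sketch of the legacy pattern: wrap a logger's existing appenders in an
// AsyncAppender from Java code instead of declaring it in log4j.properties.
class AsyncAppenderInCode {
  static void makeAsync(String loggerName) {
    Logger logger = Logger.getLogger(loggerName);
    AsyncAppender async = new AsyncAppender();
    // move the existing appenders behind the async appender
    Enumeration<?> appenders = logger.getAllAppenders();
    while (appenders.hasMoreElements()) {
      async.addAppender((Appender) appenders.nextElement());
    }
    logger.removeAllAppenders();
    logger.addAppender(async);
  }
}
```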



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


virajjasani commented on PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#issuecomment-1438896144

   @jojochuang could you please take a look in the meantime?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691733#comment-17691733
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

virajjasani commented on PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#issuecomment-1438896144

   @jojochuang could you please take a look in the meantime?




> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate async appenders that we 
> add "dynamically in the code" to the log4j.properties file. Instead of using 
> core/hdfs site configs, log4j properties or system properties should be used 
> to determine if the given logger should use async appender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5412: HADOOP-18636 LocalDirAllocator cannot recover from directory tree deletion

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5412:
URL: https://github.com/apache/hadoop/pull/5412#issuecomment-1438920330

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 15s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 220m 17s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5412/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5412 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux cfad1a517cb2 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 236adfefd67b469cb0bab443e5422fdb3b582d39 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5412/2/testReport/ |
   | Max. process+thread count | 1291 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5412/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[jira] [Commented] (HADOOP-18636) LocalDirAllocator cannot recover from directory tree deletion during the life of a filesystem client

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691741#comment-17691741
 ] 

ASF GitHub Bot commented on HADOOP-18636:
-

hadoop-yetus commented on PR #5412:
URL: https://github.com/apache/hadoop/pull/5412#issuecomment-1438920330

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  45m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 13s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 44s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 24s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |  22m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |  20m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 15s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  0s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 220m 17s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5412/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5412 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux cfad1a517cb2 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 236adfefd67b469cb0bab443e5422fdb3b582d39 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5412/2/testReport/ |
   | Max. process+thread count | 1291 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5412/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




> LocalDirAllocator cannot recover from directory tree deletion during the life of a filesystem client

[GitHub] [hadoop] rdingankar commented on a diff in pull request #5397: HDFS-16917 Add transfer rate quantile metrics for DataNode reads

2023-02-21 Thread via GitHub


rdingankar commented on code in PR #5397:
URL: https://github.com/apache/hadoop/pull/5397#discussion_r1113432082


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSUtil.java:
##
@@ -1936,4 +1936,20 @@ public static boolean isParentEntry(final String path, final String parent) {
     return path.charAt(parent.length()) == Path.SEPARATOR_CHAR
         || parent.equals(Path.SEPARATOR);
   }
+
+  /**
+   * Calculate the transfer rate in megabytes/second. Return -1 for any negative input.
+   * @param bytes bytes
+   * @param durationMS duration in milliseconds
+   * @return the number of megabytes/second of the transfer rate
+   */
+  public static long transferRateMBs(long bytes, long durationMS) {
+    if (bytes < 0 || durationMS < 0) {
+      return -1;
+    }
+    if (durationMS == 0) {
+      durationMS = 1;
+    }
+    return bytes / (1024 * 1024) * 1000 / durationMS;
Review Comment:
   The metrics take a long, so eventually we will have to convert to long anyway.
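
   A minimal, standalone sketch (not part of the patch) comparing the proposed
   long arithmetic with a floating-point version, to show where the sub-megabyte
   remainder is dropped; the class name and sample values below are made up for
   illustration.

   ```java
   public class TransferRateSketch {

     // Same evaluation order as the patch: truncate to whole megabytes first,
     // then scale by 1000 and divide by the duration.
     static long mbPerSecondAsPatched(long bytes, long durationMS) {
       return bytes / (1024 * 1024) * 1000 / durationMS;
     }

     // Floating-point version, kept only for comparison.
     static double mbPerSecondDouble(long bytes, long durationMS) {
       return bytes / (1024.0 * 1024.0) * 1000.0 / durationMS;
     }

     public static void main(String[] args) {
       long bytes = 5L * 1024 * 1024 + 512 * 1024;  // 5.5 MB
       long durationMS = 1000;                      // one second
       System.out.println(mbPerSecondAsPatched(bytes, durationMS)); // prints 5
       System.out.println(mbPerSecondDouble(bytes, durationMS));    // prints 5.5
     }
   }
   ```

   Since the metric is stored as a long either way, truncating to whole
   megabytes is the trade-off under discussion here.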



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5421: HADOOP-18565. Completes outstanding items for the SDK V2 upgrade.

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5421:
URL: https://github.com/apache/hadoop/pull/5421#issuecomment-1438999564

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  2s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  2s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  2s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 24 new or modified test files.  |
    _ feature-HADOOP-18073-s3a-sdk-upgrade Compile Tests _ |
   | +0 :ok: |  mvndep  |  20m  3s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m  5s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  compile  |  25m 20s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  21m 40s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 58s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 43s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   1m 14s | 
[/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/1/artifact/out/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html)
 |  hadoop-tools/hadoop-aws in feature-HADOOP-18073-s3a-sdk-upgrade has 1 
extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  26m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 58s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | -1 :x: |  javac  |  24m 31s | 
[/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/1/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 6 new + 2840 unchanged - 2 
fixed = 2846 total (was 2842)  |
   | +1 :green_heart: |  compile  |  21m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  javac  |  21m 38s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 6 new + 2641 unchanged - 2 fixed 
= 2647 total (was 2643)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 51s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 28 new + 43 unchanged - 3 fixed = 71 total (was 
46)  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | -1 :x: |  javadoc  |   0m 39s | 
[/patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08.  |
   | +0 :ok: |  spotbugs  |   0m 28s |  |  hadoop-project has no data from 
spotbugs  |
   | -1 :x: |  spotbugs  |   1m 27s | 
[/new-spotbugs-hadoop-tools_hadoop-aws

[jira] [Commented] (HADOOP-18565) AWS SDK V2 - Complete outstanding items

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691766#comment-17691766
 ] 

ASF GitHub Bot commented on HADOOP-18565:
-

hadoop-yetus commented on PR #5421:
URL: https://github.com/apache/hadoop/pull/5421#issuecomment-1438999564

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  2s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  2s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  2s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 24 new or modified test files.  |
    _ feature-HADOOP-18073-s3a-sdk-upgrade Compile Tests _ |
   | +0 :ok: |  mvndep  |  20m  3s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m  5s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  compile  |  25m 20s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  21m 40s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 58s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  javadoc  |   1m  3s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 43s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   1m 14s | 
[/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/1/artifact/out/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html)
 |  hadoop-tools/hadoop-aws in feature-HADOOP-18073-s3a-sdk-upgrade has 1 
extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  26m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 58s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | -1 :x: |  javac  |  24m 31s | 
[/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/1/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 6 new + 2840 unchanged - 2 
fixed = 2846 total (was 2842)  |
   | +1 :green_heart: |  compile  |  21m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  javac  |  21m 38s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 6 new + 2641 unchanged - 2 fixed 
= 2647 total (was 2643)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 51s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 28 new + 43 unchanged - 3 fixed = 71 total (was 
46)  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | -1 :x: |  javadoc  |   0m 39s | 
[/patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt

[jira] [Updated] (HADOOP-18565) AWS SDK V2 - Complete outstanding items

2023-02-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-18565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-18565:

Labels: pull-request-available  (was: )

> AWS SDK V2 - Complete outstanding items
> ---
>
> Key: HADOOP-18565
> URL: https://issues.apache.org/jira/browse/HADOOP-18565
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.4.0
>Reporter: Ahmar Suhail
>Priority: Major
>  Labels: pull-request-available
>
> The following work remains to complete the SDK upgrade:
>  * S3A allows users to configure custom signers; add support for this.
>  * Remove the SDK V1 bundle dependency.
>  * Update `getRegion()` logic to use retries (see the sketch after this list).
>  * Add progress listeners for `S3ABlockOutputStream`.
>  * Fix any failing tests.
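
A rough, generic sketch of what "use retries" could look like for the `getRegion()` item above; the `resolveRegion()` stand-in, attempt count, and backoff values are illustrative assumptions, not the actual S3A implementation.

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public final class RetrySketch {

  // Retry a call a fixed number of times with simple linear backoff,
  // rethrowing the last IOException if every attempt fails.
  static <T> T withRetries(Callable<T> call, int attempts, long sleepMillis)
      throws Exception {
    IOException last = null;
    for (int i = 1; i <= attempts; i++) {
      try {
        return call.call();
      } catch (IOException e) {
        last = e;
        Thread.sleep(sleepMillis * i);
      }
    }
    throw last;
  }

  // Stand-in for the real region lookup; always succeeds here.
  static String resolveRegion(String bucket) throws IOException {
    return "us-east-1";
  }

  public static void main(String[] args) throws Exception {
    // Hypothetical usage: resolve a bucket's region, retrying transient failures.
    String region = withRetries(() -> resolveRegion("example-bucket"), 3, 100);
    System.out.println(region);
  }
}
```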



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5421: HADOOP-18565. Completes outstanding items for the SDK V2 upgrade.

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5421:
URL: https://github.com/apache/hadoop/pull/5421#issuecomment-1439017361

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 24 new or modified test files.  |
    _ feature-HADOOP-18073-s3a-sdk-upgrade Compile Tests _ |
   | +0 :ok: |  mvndep  |  22m 40s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m 22s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  compile  |  23m 15s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 21s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 47s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 54s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   1m 19s | 
[/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/2/artifact/out/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html)
 |  hadoop-tools/hadoop-aws in feature-HADOOP-18073-s3a-sdk-upgrade has 1 
extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  24m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 56s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | -1 :x: |  javac  |  22m 30s | 
[/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/2/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 6 new + 2840 unchanged - 2 
fixed = 2846 total (was 2842)  |
   | +1 :green_heart: |  compile  |  20m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  javac  |  20m 26s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/2/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 6 new + 2640 unchanged - 2 fixed 
= 2646 total (was 2642)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 32s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/2/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 28 new + 43 unchanged - 3 fixed = 71 total (was 
46)  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | -1 :x: |  javadoc  |   0m 48s | 
[/patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/2/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  hadoop-aws in the patch failed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08.  |
   | +0 :ok: |  spotbugs  |   0m 39s |  |  hadoop-project has no data from 
spotbugs  |
   | -1 :x: |  spotbugs  |   1m 34s | 
[/new-spotbugs-hadoop-tools_hadoop-aws

[jira] [Commented] (HADOOP-18565) AWS SDK V2 - Complete outstanding items

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691770#comment-17691770
 ] 

ASF GitHub Bot commented on HADOOP-18565:
-

hadoop-yetus commented on PR #5421:
URL: https://github.com/apache/hadoop/pull/5421#issuecomment-1439017361

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 24 new or modified test files.  |
    _ feature-HADOOP-18073-s3a-sdk-upgrade Compile Tests _ |
   | +0 :ok: |  mvndep  |  22m 40s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m 22s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  compile  |  23m 15s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |  20m 21s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 47s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  mvnsite  |   1m 41s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  
feature-HADOOP-18073-s3a-sdk-upgrade passed with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +0 :ok: |  spotbugs  |   0m 54s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | -1 :x: |  spotbugs  |   1m 19s | 
[/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/2/artifact/out/branch-spotbugs-hadoop-tools_hadoop-aws-warnings.html)
 |  hadoop-tools/hadoop-aws in feature-HADOOP-18073-s3a-sdk-upgrade has 1 
extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  24m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 56s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | -1 :x: |  javac  |  22m 30s | 
[/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/2/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04.txt)
 |  root-jdkUbuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 generated 6 new + 2840 unchanged - 2 
fixed = 2846 total (was 2842)  |
   | +1 :green_heart: |  compile  |  20m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | -1 :x: |  javac  |  20m 26s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/2/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt)
 |  root-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08 with JDK Private 
Build-1.8.0_352-8u352-ga-1~20.04-b08 generated 6 new + 2640 unchanged - 2 fixed 
= 2646 total (was 2642)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 32s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/2/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 28 new + 43 unchanged - 3 fixed = 71 total (was 
46)  |
   | +1 :green_heart: |  mvnsite  |   1m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 13s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | -1 :x: |  javadoc  |   0m 48s | 
[/patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5421/2/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_352-8u352-ga-1~20.04-b08.txt

[jira] [Commented] (HADOOP-18637) S3A to support upload of files greater than 2 GB using DiskBlocks

2023-02-21 Thread Mukund Thakur (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691825#comment-17691825
 ] 

Mukund Thakur commented on HADOOP-18637:


As discussed offline, the following changes will be required (a rough sketch of the config gating follows this list):
 * Introduce a new config to disable multipart upload everywhere and enable just a large file upload.
 * Raise an error in the public S3AFS.createMultipartUploader based on the above config.
 * Raise an error in the staging committer based on the above config.
 * Raise an error in the magic committer based on the above config.
 * Raise an error in the write operations helper based on the above config.
 * Add hasCapability(isMultiPartAllowed, path) backed by the config.
 * If multipart upload is disabled we only upload via disk; add a check for this.
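
A rough sketch of the config gating described in the list above, using a hypothetical property name and helper class; none of these names come from the actual implementation.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper gathering the checks listed above in one place.
public final class MultipartUploadGate {

  // Assumed property name, for illustration only.
  public static final String MULTIPART_UPLOADS_ENABLED =
      "fs.s3a.multipart.uploads.enabled";

  private final boolean multipartEnabled;

  public MultipartUploadGate(Configuration conf) {
    this.multipartEnabled = conf.getBoolean(MULTIPART_UPLOADS_ENABLED, true);
  }

  // Called from the places that need multipart (createMultipartUploader,
  // staging/magic committers, write operations helper): reject if disabled.
  public void checkMultipartAllowed(String operation) throws IOException {
    if (!multipartEnabled) {
      throw new IOException("Multipart upload is disabled; rejecting " + operation);
    }
  }

  // Backs a hasCapability-style probe for multipart support.
  public boolean isMultipartAllowed() {
    return multipartEnabled;
  }
}
```

With the flag off, every write path would fall back to the single disk-block upload, matching the last bullet above.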

> S3A to support upload of files greater than 2 GB using DiskBlocks
> -
>
> Key: HADOOP-18637
> URL: https://issues.apache.org/jira/browse/HADOOP-18637
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Harshit Gupta
>Assignee: Harshit Gupta
>Priority: Major
>
> Use S3A DiskBlocks to support the upload of files greater than 2 GB. 
> Currently, the max upload size of a single block is ~2GB. 
> cc: [~mthakur] [~ste...@apache.org] [~mehakmeet] 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] slfan1989 commented on pull request #5288: YARN-11394. Fix hadoop-yarn-server-resourcemanager module Java Doc Errors.

2023-02-21 Thread via GitHub


slfan1989 commented on PR #5288:
URL: https://github.com/apache/hadoop/pull/5288#issuecomment-1439251206

   @ayushtkn @steveloughran Thank you for your help reviewing the code; I will 
cherry-pick this to branch-3.3.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #4834: HDFS-16753. WebHDFSHandler should reject non-compliant requests

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #4834:
URL: https://github.com/apache/hadoop/pull/4834#issuecomment-1439253717

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  43m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 206m 31s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4834/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 324m 17s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4834/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4834 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux e54a4d213c3a 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 38af11ab5961f60c327bfe47d42b2fb0d051a452 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4834/4/testReport/ |
   | Max. process+thread count | 3598 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4834/4/console |
   | versions | git=2.2

[GitHub] [hadoop] hadoop-yetus commented on pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#issuecomment-1439256459

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  53m  5s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/2/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 56s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  29m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 486 unchanged - 21 
fixed = 486 total (was 507)  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  29m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 239m 21s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 375m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithShortCircuitRead |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5418 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux db8f0b1a566a 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 25d04a8be372b1b7aabde516d0130bebac929c7a |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/2/testReport/ |
   | Max. process+thread count | 2256 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdf

[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691834#comment-17691834
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

hadoop-yetus commented on PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#issuecomment-1439256459

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  53m  5s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/2/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 56s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  29m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 34s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 27s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 486 unchanged - 21 
fixed = 486 total (was 507)  |
   | +1 :green_heart: |  mvnsite  |   1m 38s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  29m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 239m 21s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 375m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithShortCircuitRead |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5418 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux db8f0b1a566a 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 25d04a8be372b1b7aabde516d0130bebac929c7a |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build

[GitHub] [hadoop] hadoop-yetus commented on pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#issuecomment-1439258598

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  29m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 486 unchanged - 21 
fixed = 486 total (was 507)  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  29m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 239m  2s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 372m 53s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5418 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 924f99f65550 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 25d04a8be372b1b7aabde516d0130bebac929c7a |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/3/testReport/ |
   | Max. process+thread count | 2324 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 h

[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691835#comment-17691835
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

hadoop-yetus commented on PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#issuecomment-1439258598

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 8 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  49m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 39s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 14s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  29m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 486 unchanged - 21 
fixed = 486 total (was 507)  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  29m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 239m  2s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 372m 53s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5418 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 924f99f65550 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 25d04a8be372b1b7aabde516d0130bebac929c7a |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5418/3/testReport/ |
   | Max. process+thread count | 2324 (vs. ulimit of 5500) |
   | modules | C: had

[GitHub] [hadoop] hadoop-yetus commented on pull request #4725: HDFS-16688. Unresolved Hosts during startup are not synced by JournalNodes

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #4725:
URL: https://github.com/apache/hadoop/pull/4725#issuecomment-1439281473

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 24s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  33m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   1m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  28m 38s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 225m  8s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 42s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 356m 45s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4725/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/4725 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets xmllint |
   | uname | Linux a7155e5ee23d 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1a2b49c34062fb00339ee49af9c961a3686decca |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4725/7/testReport/ |
   | Max. process+thread count | 2257 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-4725/7/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at: 
us...@infra.apache.org

[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691868#comment-17691868
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

Apache9 commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113753403


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT
+hdfs.audit.log.maxfilesize=256MB
+hdfs.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAAUDIT.layout.ConversionPattern=%m%n
+log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
+log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
+log4j.appender.ASYNCAPPENDER=org.apache.log4j.AsyncAppender

Review Comment:
   Does AsyncAppender itself support logging to a file, or do we need to let it wrap 
another appender?



##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT

Review Comment:
   I cannot recall how to configure the log4j1 AsyncAppender, but I believe we need 
to use a different appender ref for the datanode and the namenode? At least they need 
to log to different log files?



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/MetricsLoggerTask.java:
##
@@ -115,8 +111,11 @@ private String trimLine(String valueStr) {
 .substring(0, maxLogLineLength) + "...");
   }
 
-  private static boolean hasAppenders(org.apache.log4j.Logger logger) {
-return logger.getAllAppenders().hasMoreElements();
+  // TODO : hadoop-logging module to hide log4j implementation details, this 
method
+  //  can directly call utility from hadoop-logging.
+  private static boolean hasAppenders(Logger logger) {

Review Comment:
   Do we still need this method, since we no longer need to set up the async appender 
programmatically?





> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate async appenders that we 
> add "dynamically in the code" to the log4j.properties file. Instead of using 
> core/hdfs site configs, log4j properties or system properties should be used 
> to determine if the given logger should use async appender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Apache9 commented on a diff in pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


Apache9 commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113753403


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT
+hdfs.audit.log.maxfilesize=256MB
+hdfs.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAAUDIT.layout.ConversionPattern=%m%n
+log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
+log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
+log4j.appender.ASYNCAPPENDER=org.apache.log4j.AsyncAppender

Review Comment:
   Does AsyncAppender itself support logging to a file, or do we need to let it wrap 
another appender?



##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT

Review Comment:
   I cannot recall how to configure the log4j1 AsyncAppender, but I believe we need 
to use a different appender ref for the datanode and the namenode? At least they need 
to log to different log files?



##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/MetricsLoggerTask.java:
##
@@ -115,8 +111,11 @@ private String trimLine(String valueStr) {
 .substring(0, maxLogLineLength) + "...");
   }
 
-  private static boolean hasAppenders(org.apache.log4j.Logger logger) {
-return logger.getAllAppenders().hasMoreElements();
+  // TODO : hadoop-logging module to hide log4j implementation details, this 
method
+  //  can directly call utility from hadoop-logging.
+  private static boolean hasAppenders(Logger logger) {

Review Comment:
   Do we still need this method, since we no longer need to set up the async appender 
programmatically?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] haiyang1987 commented on pull request #5415: HDFS-16926. Fix Typo of ImageServlet#Put()

2023-02-21 Thread via GitHub


haiyang1987 commented on PR #5415:
URL: https://github.com/apache/hadoop/pull/5415#issuecomment-1439347441

   @ZanderXu @tomscut please help me review this PR when you are available. 
Thanks.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] haiyang1987 commented on pull request #5414: HDFS-16899. Fix TestAddOverReplicatedStripedBlocks#testProcessOverReplicatedAndCorruptStripedBlock failed

2023-02-21 Thread via GitHub


haiyang1987 commented on PR #5414:
URL: https://github.com/apache/hadoop/pull/5414#issuecomment-1439347203

   @ZanderXu @tomscut please help me review this PR when you are available. 
Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on a diff in pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113758871


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/MetricsLoggerTask.java:
##
@@ -115,8 +111,11 @@ private String trimLine(String valueStr) {
 .substring(0, maxLogLineLength) + "...");
   }
 
-  private static boolean hasAppenders(org.apache.log4j.Logger logger) {
-return logger.getAllAppenders().hasMoreElements();
+  // TODO : hadoop-logging module to hide log4j implementation details, this 
method
+  //  can directly call utility from hadoop-logging.
+  private static boolean hasAppenders(Logger logger) {

Review Comment:
   I was thinking the same, but we have this check for the metrics logger:
   
   ```
   // Skip querying metrics if there are no known appenders.
   if (!metricsLog.isInfoEnabled() || !hasAppenders(metricsLog)
   || objectName == null) {
 return;
   }
   
   ```
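   
   (For context, a minimal sketch of what `hasAppenders` can do behind an slf4j 
`Logger` until a hadoop-logging utility exists, assuming log4j1 is the backing 
implementation; the helper class name here is made up for illustration.)
   
   ```
   import java.util.Enumeration;
   
   import org.apache.log4j.LogManager;
   import org.slf4j.Logger;
   
   // Illustrative only: resolve the underlying log4j1 logger by name and
   // check whether any appender is attached to it.
   final class Log4jAppenderCheck {
     private Log4jAppenderCheck() {
     }
   
     static boolean hasAppenders(Logger logger) {
       Enumeration<?> appenders =
           LogManager.getLogger(logger.getName()).getAllAppenders();
       return appenders != null && appenders.hasMoreElements();
     }
   }
   ```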



##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT

Review Comment:
   As far as AsyncAppender is concerned, we can use the same ref. It's the other 
appenders that make things different if we want to include them.



##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT
+hdfs.audit.log.maxfilesize=256MB
+hdfs.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAAUDIT.layout.ConversionPattern=%m%n
+log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
+log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
+log4j.appender.ASYNCAPPENDER=org.apache.log4j.AsyncAppender

Review Comment:
   It turns out that both log4j1 and log4j2 allow wrapping other appenders within an 
AsyncAppender. However, log4j1 supports this through XML only, whereas log4j2 
supports it through both XML and properties.
   
   log4j1 [AsyncAppender 
javadoc](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/AsyncAppender.html):

   > **Important note**: The AsyncAppender can only be script configured using 
the 
[DOMConfigurator](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/xml/DOMConfigurator.html).
   
   Hence I was thinking that, instead of creating a new custom appender that 
wraps another appender with AsyncAppender and makes this overall log4j2 
migration even more painful, we can wait until we replace log4j.properties with 
log4j2.properties and then configure AsyncAppender to wrap the RFA; I have 
already tested this out locally.
   
   Btw, this properties file is used for test purposes only.
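   
   For reference, a minimal log4j2.properties sketch of that end state (an Async 
appender wrapping a RollingFile appender for the namenode audit log), with appender 
names, property keys, and file locations as illustrative assumptions rather than the 
final Hadoop configuration:
   
   ```
   # Illustrative log4j2.properties fragment: RFAAUDIT is wrapped by ASYNCAUDIT,
   # and the FSNamesystem audit logger writes only through the async appender.
   appender.rfaaudit.type = RollingFile
   appender.rfaaudit.name = RFAAUDIT
   appender.rfaaudit.fileName = ${sys:hadoop.log.dir}/hdfs-audit.log
   appender.rfaaudit.filePattern = ${sys:hadoop.log.dir}/hdfs-audit.log.%i
   appender.rfaaudit.layout.type = PatternLayout
   appender.rfaaudit.layout.pattern = %m%n
   appender.rfaaudit.policies.type = Policies
   appender.rfaaudit.policies.size.type = SizeBasedTriggeringPolicy
   appender.rfaaudit.policies.size.size = 256MB
   appender.rfaaudit.strategy.type = DefaultRolloverStrategy
   appender.rfaaudit.strategy.max = 20
   
   appender.asyncaudit.type = Async
   appender.asyncaudit.name = ASYNCAUDIT
   appender.asyncaudit.appenderRef.rfaaudit.ref = RFAAUDIT
   
   logger.audit.name = org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit
   logger.audit.level = info
   logger.audit.additivity = false
   logger.audit.appenderRef.async.ref = ASYNCAUDIT
   ```
   
   With log4j2, the audit logger then references only ASYNCAUDIT, and the async 
queue drains to RFAAUDIT in the background.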



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691882#comment-17691882
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113758871


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/MetricsLoggerTask.java:
##
@@ -115,8 +111,11 @@ private String trimLine(String valueStr) {
 .substring(0, maxLogLineLength) + "...");
   }
 
-  private static boolean hasAppenders(org.apache.log4j.Logger logger) {
-return logger.getAllAppenders().hasMoreElements();
+  // TODO : hadoop-logging module to hide log4j implementation details, this 
method
+  //  can directly call utility from hadoop-logging.
+  private static boolean hasAppenders(Logger logger) {

Review Comment:
   I was thinking the same, but we have this check for the metrics logger:
   
   ```
   // Skip querying metrics if there are no known appenders.
   if (!metricsLog.isInfoEnabled() || !hasAppenders(metricsLog)
   || objectName == null) {
 return;
   }
   
   ```



##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT

Review Comment:
   As far as AsyncAppender is concerned, we can use the same ref. It's the other 
appenders that make things different if we want to include them.



##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT
+hdfs.audit.log.maxfilesize=256MB
+hdfs.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAAUDIT.layout.ConversionPattern=%m%n
+log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
+log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
+log4j.appender.ASYNCAPPENDER=org.apache.log4j.AsyncAppender

Review Comment:
   It turns out that both log4j1 and log4j2 allow wrapping other appenders within an 
AsyncAppender. However, log4j1 supports this through XML only, whereas log4j2 
supports it through both XML and properties.
   
   log4j1 [AsyncAppender 
javadoc](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/AsyncAppender.html):

   > **Important note**: The AsyncAppender can only be script configured using 
the 
[DOMConfigurator](https://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/xml/DOMConfigurator.html).
   
   Hence I was thinking that, instead of creating a new custom appender that 
wraps another appender with AsyncAppender and makes this overall log4j2 
migration even more painful, we can wait until we replace log4j.properties with 
log4j2.properties and then configure AsyncAppender to wrap the RFA; I have 
already tested this out locally.
   
   Btw, this properties file is used for test purposes only.





> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate async appenders that we 
> add "dynamically in the code" to the log4j.properties file. Instead of using 
> core/hdfs site configs, log4j properties or system properties should be used 
> to determine if the given logger should use async appender.

[GitHub] [hadoop] virajjasani commented on a diff in pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113770486


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT
+hdfs.audit.log.maxfilesize=256MB
+hdfs.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAAUDIT.layout.ConversionPattern=%m%n
+log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
+log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
+log4j.appender.ASYNCAPPENDER=org.apache.log4j.AsyncAppender

Review Comment:
   In fact, this is the reason why I have added some TODO comments with this 
patch, as we need to rework the async appender config as part of the log4j2 
properties anyway. This TODO will help us use an appender ref within AsyncAppender 
in log4j2.properties. Once that work is complete, this TODO also needs to be 
removed as part of it.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691888#comment-17691888
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113770486


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT
+hdfs.audit.log.maxfilesize=256MB
+hdfs.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAAUDIT.layout.ConversionPattern=%m%n
+log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
+log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
+log4j.appender.ASYNCAPPENDER=org.apache.log4j.AsyncAppender

Review Comment:
   In fact, this is the reason why I have added some TODO comments with this 
patch, as we need to rework the async appender config as part of the log4j2 
properties anyway. This TODO will help us use an appender ref within AsyncAppender 
in log4j2.properties. Once that work is complete, this TODO also needs to be 
removed as part of it.





> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate async appenders that we 
> add "dynamically in the code" to the log4j.properties file. Instead of using 
> core/hdfs site configs, log4j properties or system properties should be used 
> to determine if the given logger should use async appender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on pull request #5295: YARN-11404. Add junit5 dependency to hadoop-mapreduce-client-app to fix few unit test failure

2023-02-21 Thread via GitHub


ayushtkn commented on PR #5295:
URL: https://github.com/apache/hadoop/pull/5295#issuecomment-1439366770

   This seems to have broken the mapreduce tests.
   
https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1143/testReport/
   
   Reverting it fixed things locally for me, and I don't think there is any 
follow-up ticket as Steve asked for either. I have reverted this.
   
   Tests passed locally
   ```
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   [INFO] Running 
org.apache.hadoop.mapreduce.v2.TestSpeculativeExecutionWithMRApp
   [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
49.528 s - in org.apache.hadoop.mapreduce.v2.TestSpeculativeExecutionWithMRApp
   [INFO] 
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0
   
   [INFO] ---
   [INFO]  T E S T S
   [INFO] ---
   [INFO] Running org.apache.hadoop.mapreduce.v2.hs.TestJobHistoryParsing
   [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
10.986 s - in org.apache.hadoop.mapreduce.v2.hs.TestJobHistoryParsing
   [INFO] Running org.apache.hadoop.mapreduce.v2.hs.TestJobHistoryEvents
   [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.179 
s - in org.apache.hadoop.mapreduce.v2.hs.TestJobHistoryEvents
   [INFO] 
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0
   ```
   
   PS. The PR was closed, not merged; I'm not sure how or why, but it was initially 
confusing to me.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on a diff in pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113759661


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT

Review Comment:
   As far as AsyncAppender is concerned, we can use the same ref. It's the other 
appenders that make things different if we want to include them.
   
   For this example, `hdfs.audit.logger` is basically used only for 
FSNamesystem audit logs.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691891#comment-17691891
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113759661


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT

Review Comment:
   As far as AsyncAppender is concerned, we can use the same ref. It's the other 
appenders that make things different if we want to include them.
   
   For this example, `hdfs.audit.logger` is basically used only for 
FSNamesystem audit logs.





> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate async appenders that we 
> add "dynamically in the code" to the log4j.properties file. Instead of using 
> core/hdfs site configs, log4j properties or system properties should be used 
> to determine if the given logger should use async appender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Apache9 commented on a diff in pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


Apache9 commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113776679


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/MetricsLoggerTask.java:
##
@@ -115,8 +111,11 @@ private String trimLine(String valueStr) {
 .substring(0, maxLogLineLength) + "...");
   }
 
-  private static boolean hasAppenders(org.apache.log4j.Logger logger) {
-return logger.getAllAppenders().hasMoreElements();
+  // TODO : hadoop-logging module to hide log4j implementation details, this 
method
+  //  can directly call utility from hadoop-logging.
+  private static boolean hasAppenders(Logger logger) {

Review Comment:
   OK.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691893#comment-17691893
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

Apache9 commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113776679


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/MetricsLoggerTask.java:
##
@@ -115,8 +111,11 @@ private String trimLine(String valueStr) {
 .substring(0, maxLogLineLength) + "...");
   }
 
-  private static boolean hasAppenders(org.apache.log4j.Logger logger) {
-return logger.getAllAppenders().hasMoreElements();
+  // TODO : hadoop-logging module to hide log4j implementation details, this 
method
+  //  can directly call utility from hadoop-logging.
+  private static boolean hasAppenders(Logger logger) {

Review Comment:
   OK.





> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate async appenders that we 
> add "dynamically in the code" to the log4j.properties file. Instead of using 
> core/hdfs site configs, log4j properties or system properties should be used 
> to determine if the given logger should use async appender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691894#comment-17691894
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113779366


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT

Review Comment:
   Oh sorry, please ignore the above comment. `hdfs.audit.logger` is only meant to 
be used for the namenode, if that was the question (datanode vs namenode log files).





> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate async appenders that we 
> add "dynamically in the code" to the log4j.properties file. Instead of using 
> core/hdfs site configs, log4j properties or system properties should be used 
> to determine if the given logger should use async appender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on a diff in pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113779366


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT

Review Comment:
   Oh sorry, please ignore the above comment. `hdfs.audit.logger` is only meant to 
be used for the namenode, if that was the question (datanode vs namenode log files).



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on a diff in pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113759661


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT

Review Comment:
   ~~ As far as AsyncAppender is concerned, we can use same ref. It's the other 
appenders that make things different if we want to include them.
   
   For this example, `hdfs.audit.logger` is basically used only for 
FSNamesystem audit logs. ~~



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691895#comment-17691895
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113759661


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT

Review Comment:
   ~~ As far as AsyncAppender is concerned, we can use same ref. It's the other 
appenders that make things different if we want to include them.
   
   For this example, `hdfs.audit.logger` is basically used only for 
FSNamesystem audit logs. ~~





> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate async appenders that we 
> add "dynamically in the code" to the log4j.properties file. Instead of using 
> core/hdfs site configs, log4j properties or system properties should be used 
> to determine if the given logger should use async appender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on pull request #5311: MAPREDUCE-7431. ShuffleHandler refactor and fix after Netty4 upgrade.

2023-02-21 Thread via GitHub


ayushtkn commented on PR #5311:
URL: https://github.com/apache/hadoop/pull/5311#issuecomment-1439380291

   The test is failing now:
   
https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1143/testReport/junit/org.apache.hadoop.mapred/TestShuffleHandler/testMapFileAccess/
   
   Though it passes locally for me.
   Looks like something specific to Jenkins:
   ```
   Caused by: java.io.IOException: Owner 'jenkins' for path 
/tmp/test-shuffle-channel-handler1169959588626711767/job_1_0001/testUser/attempt_1_0001_m_01_0/index
 did not match expected owner 'testUser'
   ```
   
   Is anyone planning to fix this, or should we revert it?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on a diff in pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113789406


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT
+hdfs.audit.log.maxfilesize=256MB
+hdfs.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAAUDIT.layout.ConversionPattern=%m%n
+log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
+log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
+log4j.appender.ASYNCAPPENDER=org.apache.log4j.AsyncAppender

Review Comment:
   We can introduce a custom async appender that wraps the RFA:
   
   ```
   public class AsyncRFAAppender extends AsyncAppender {
   
     private long maxFileSize = 10 * 1024 * 1024;
     private int maxBackupIndex = 1;
     private long nextRollover = 0;
     protected String fileName = null;
   
     private boolean addRfaToAsync = false;
   
     public void append(final LoggingEvent event) {
       super.append(event);
     }
   
     public void setBlocking(final boolean value) {
       addRfaToAsync = true;
       RFA rfa = new RFA(values); // maxFileSize, maxBackupIndex etc
       this.addAppender(rfa);
     }
   }
   
   ```
   
   But this would be temporary only, as log4j2 needs rework anyway.
   WDYT? Shall we introduce a new custom appender as a temporary measure?
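   
   For illustration only, a slightly more complete sketch of such a wrapper using 
plain log4j 1.x APIs is shown below. The class name, property names (fileName, 
maxFileSize, maxBackupIndex, conversionPattern) and their defaults are assumptions 
for the example, not the actual patch; the idea is simply that the properties 
configurator injects the rolling-file settings via setters and `activateOptions()` 
attaches the RollingFileAppender to the async wrapper.
   
   ```
   import org.apache.log4j.AsyncAppender;
   import org.apache.log4j.PatternLayout;
   import org.apache.log4j.RollingFileAppender;
   
   // Hypothetical wrapper: an AsyncAppender that builds and attaches a
   // RollingFileAppender from properties-injected settings.
   public class AsyncRollingFileAppenderSketch extends AsyncAppender {
   
     // Injected by the log4j properties configurator through the setters below.
     private String fileName;
     private String maxFileSize = "256MB";
     private int maxBackupIndex = 20;
     private String conversionPattern = "%m%n";
   
     @Override
     public void activateOptions() {
       // Build the underlying rolling file appender once configuration is
       // complete and register it with this async wrapper, so events are
       // queued here and written to the file by the async dispatcher thread.
       RollingFileAppender rfa = new RollingFileAppender();
       rfa.setFile(fileName);
       rfa.setMaxFileSize(maxFileSize);
       rfa.setMaxBackupIndex(maxBackupIndex);
       rfa.setLayout(new PatternLayout(conversionPattern));
       rfa.activateOptions();
       addAppender(rfa);
       super.activateOptions();
     }
   
     public void setFileName(String fileName) {
       this.fileName = fileName;
     }
   
     public void setMaxFileSize(String maxFileSize) {
       this.maxFileSize = maxFileSize;
     }
   
     public void setMaxBackupIndex(int maxBackupIndex) {
       this.maxBackupIndex = maxBackupIndex;
     }
   
     public void setConversionPattern(String conversionPattern) {
       this.conversionPattern = conversionPattern;
     }
   }
   ```
   
   It could then be wired from log4j.properties, e.g. 
`log4j.appender.ASYNCAUDIT=org.example.AsyncRollingFileAppenderSketch` plus the 
setter-backed properties (`...ASYNCAUDIT.fileName`, `...ASYNCAUDIT.maxFileSize`, and 
so on); again, these names are hypothetical.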



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691901#comment-17691901
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113789406


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT
+hdfs.audit.log.maxfilesize=256MB
+hdfs.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAAUDIT.layout.ConversionPattern=%m%n
+log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
+log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
+log4j.appender.ASYNCAPPENDER=org.apache.log4j.AsyncAppender

Review Comment:
   We can introduce a custom async appender that wraps the RFA:
   
   ```
   public class AsyncRFAAppender extends AsyncAppender {
   
     private long maxFileSize = 10 * 1024 * 1024;
     private int maxBackupIndex = 1;
     private long nextRollover = 0;
     protected String fileName = null;
   
     private boolean addRfaToAsync = false;
   
     public void append(final LoggingEvent event) {
       super.append(event);
     }
   
     public void setBlocking(final boolean value) {
       addRfaToAsync = true;
       RFA rfa = new RFA(values); // maxFileSize, maxBackupIndex etc
       this.addAppender(rfa);
     }
   }
   
   ```
   
   But this would be temporary only, as log4j2 needs rework anyway.
   WDYT? Shall we introduce a new custom appender as a temporary measure?





> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate async appenders that we 
> add "dynamically in the code" to the log4j.properties file. Instead of using 
> core/hdfs site configs, log4j properties or system properties should be used 
> to determine if the given logger should use async appender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hfutatzhanghb commented on pull request #5276: HDFS-16882. RBF: Add cache hit rate metric in MountTableResolver#getDestinationForPath

2023-02-21 Thread via GitHub


hfutatzhanghb commented on PR #5276:
URL: https://github.com/apache/hadoop/pull/5276#issuecomment-1439392335

   > 
   
   @tomscut OK, I will do this soon. Thanks, brother Tao!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5302: YARN-11221. [Federation] Add replaceLabelsOnNodes, replaceLabelsOnNode REST APIs for Router.

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5302:
URL: https://github.com/apache/hadoop/pull/5302#issuecomment-1439402375

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  7s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   3m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 16s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 14s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  23m 33s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   3m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   3m 15s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 23s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  98m 38s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 32s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 228m  6s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5302/20/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5302 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 6c50efcad7f1 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 78c8180f7c3ff3c8472d0205061665fde0a64566 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5302/20/testReport/ |
   | Max. process+thread count | 934 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router 
U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-serv

[GitHub] [hadoop] hadoop-yetus commented on pull request #5394: YARN-5604. [Federation] Add versioning for FederationStateStore.

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5394:
URL: https://github.com/apache/hadoop/pull/5394#issuecomment-1439414674

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 23s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 51s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   3m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   3m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 15s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 51s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 28s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  24m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 44s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   3m 44s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   3m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   3m 18s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  4s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 14s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   3m  5s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  | 116m 14s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 251m 46s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5394/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5394 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 31f0c107de14 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 12130aa3c9b34b97d9540ce348862e6fd7dbdc2d |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5394/4/testReport/ |
   | Max. process+thread count | 968 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5394/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yet

[GitHub] [hadoop] hadoop-yetus commented on pull request #5335: YARN-11426. Improve YARN NodeLabel Memory Display.

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5335:
URL: https://github.com/apache/hadoop/pull/5335#issuecomment-1439421406

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 37s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  30m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   8m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 20s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 23s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   9m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   8m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 36s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 11s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   4m 26s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 16s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 14s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  98m 50s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 261m 50s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5335/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5335 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint spotbugs checkstyle |
   | uname | Linux 3fda2652e755 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / e4e13558f59ba37b75301fef1ce2f9a01fd68e1f |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5335/9/testReport/ |
   | Max. process+thread count | 1006 (vs. ulimit of 5500) |
   | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: hadoop-yarn-project/hadoop-yarn |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5335/9/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | P

[GitHub] [hadoop] virajjasani commented on a diff in pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113830663


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/AsyncRFAAppender.java:
##
@@ -0,0 +1,139 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.util;
+
+import java.io.IOException;
+
+import org.apache.log4j.AsyncAppender;
+import org.apache.log4j.PatternLayout;
+import org.apache.log4j.RollingFileAppender;
+import org.apache.log4j.spi.LoggingEvent;
+
+/**
+ * Until we migrate to log4j2, use this appender for namenode audit logger as 
well as
+ * datanode and namenode metric loggers with log4j1 properties.
+ * This appender will take parameters necessary to supply RollingFileAppender 
to AsyncAppender.
+ * While migrating to log4j2, we can directly specify appender ref to Async 
appender.
+ */
+public class AsyncRFAAppender extends AsyncAppender {

Review Comment:
   Even though this patch adds a new custom appender, it is only temporary.
   
   It serves two purposes:
   
   1. Once we migrate from log4j to log4j2 properties, we can use the log4j2 
Async appender directly and give it an appender ref like any other appender 
(i.e. wrap the RFA appender in the Async appender). This custom appender 
therefore never needs to be migrated to log4j2; we can simply remove it.
   2. Until we migrate to log4j2, we still have a way to wrap the RFA in an 
Async appender without touching the code: we can refer to this custom appender 
directly in log4j.properties (as done in this patch).
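   
   The patch's actual AsyncRFAAppender body is not fully quoted in this thread, 
so purely as a hedged illustration of the pattern being discussed, here is a 
minimal log4j 1.x sketch that wraps a RollingFileAppender in an AsyncAppender 
programmatically (the "dynamically in the code" style the custom appender is 
meant to replace). The logger name matches the HDFS audit logger; the file 
path, sizes and appender names are illustrative assumptions, not the values 
the patch uses.
   
   ```java
   import java.io.IOException;

   import org.apache.log4j.AsyncAppender;
   import org.apache.log4j.Logger;
   import org.apache.log4j.PatternLayout;
   import org.apache.log4j.RollingFileAppender;

   public class AsyncAuditLoggingSketch {
     public static void main(String[] args) throws IOException {
       // The rolling file appender that actually writes the audit log entries.
       RollingFileAppender rfa = new RollingFileAppender(
           new PatternLayout("%m%n"), "/tmp/hdfs-audit.log", true);
       rfa.setName("RFAAUDIT");
       rfa.setMaxFileSize("256MB");
       rfa.setMaxBackupIndex(20);
       rfa.activateOptions();

       // The async wrapper: events are queued and written by a background thread.
       AsyncAppender async = new AsyncAppender();
       async.setBlocking(false);
       async.addAppender(rfa);

       // Attach the wrapped appender to the audit logger only.
       Logger audit = Logger.getLogger(
           "org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit");
       audit.setAdditivity(false);
       audit.addAppender(async);

       audit.info("sample audit event");
     }
   }
   ```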
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691923#comment-17691923
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113830663


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/AsyncRFAAppender.java:
##
@@ -0,0 +1,139 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.util;
+
+import java.io.IOException;
+
+import org.apache.log4j.AsyncAppender;
+import org.apache.log4j.PatternLayout;
+import org.apache.log4j.RollingFileAppender;
+import org.apache.log4j.spi.LoggingEvent;
+
+/**
+ * Until we migrate to log4j2, use this appender for namenode audit logger as 
well as
+ * datanode and namenode metric loggers with log4j1 properties.
+ * This appender will take parameters necessary to supply RollingFileAppender 
to AsyncAppender.
+ * While migrating to log4j2, we can directly specify appender ref to Async 
appender.
+ */
+public class AsyncRFAAppender extends AsyncAppender {

Review Comment:
   Even though this patch adds a new custom appender, it is only temporary.
   
   It serves two purposes:
   
   1. Once we migrate from log4j to log4j2 properties, we can use the log4j2 
Async appender directly and give it an appender ref like any other appender 
(i.e. wrap the RFA appender in the Async appender). This custom appender 
therefore never needs to be migrated to log4j2; we can simply remove it.
   2. Until we migrate to log4j2, we still have a way to wrap the RFA in an 
Async appender without touching the code: we can refer to this custom appender 
directly in log4j.properties (as done in this patch).
   





> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate the async appenders that 
> we add "dynamically in the code" to the log4j.properties file. Instead of 
> using core/hdfs site configs, log4j properties or system properties should 
> be used to determine whether a given logger should use an async appender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on a diff in pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113830663


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/AsyncRFAAppender.java:
##
@@ -0,0 +1,139 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.util;
+
+import java.io.IOException;
+
+import org.apache.log4j.AsyncAppender;
+import org.apache.log4j.PatternLayout;
+import org.apache.log4j.RollingFileAppender;
+import org.apache.log4j.spi.LoggingEvent;
+
+/**
+ * Until we migrate to log4j2, use this appender for namenode audit logger as 
well as
+ * datanode and namenode metric loggers with log4j1 properties.
+ * This appender will take parameters necessary to supply RollingFileAppender 
to AsyncAppender.
+ * While migrating to log4j2, we can directly specify appender ref to Async 
appender.
+ */
+public class AsyncRFAAppender extends AsyncAppender {

Review Comment:
   Even though this patch adds a new custom appender, it is only temporary.
   
   It serves two purposes:
   
   1. Once we migrate from log4j to log4j2 properties, we can use the log4j2 
Async appender directly and give it an appender ref like any other appender 
(i.e. wrap the RFA appender in the Async appender). This custom appender 
therefore never needs to be migrated to log4j2; we can simply remove it.
   2. Until we migrate to log4j2, we still have a way to wrap the RFA in an 
Async appender without touching the code: we can refer to this custom appender 
directly in log4j.properties (as done in this patch).
   
   It is temporary but necessary until we move to log4j2, so that we do not 
lose the ability to wrap the RFA in an Async appender.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691924#comment-17691924
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113830663


##
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/util/AsyncRFAAppender.java:
##
@@ -0,0 +1,139 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.hdfs.util;
+
+import java.io.IOException;
+
+import org.apache.log4j.AsyncAppender;
+import org.apache.log4j.PatternLayout;
+import org.apache.log4j.RollingFileAppender;
+import org.apache.log4j.spi.LoggingEvent;
+
+/**
+ * Until we migrate to log4j2, use this appender for namenode audit logger as 
well as
+ * datanode and namenode metric loggers with log4j1 properties.
+ * This appender will take parameters necessary to supply RollingFileAppender 
to AsyncAppender.
+ * While migrating to log4j2, we can directly specify appender ref to Async 
appender.
+ */
+public class AsyncRFAAppender extends AsyncAppender {

Review Comment:
   Even though this patch adds a new custom appender, it is only temporary.
   
   It serves two purposes:
   
   1. Once we migrate from log4j to log4j2 properties, we can use the log4j2 
Async appender directly and give it an appender ref like any other appender 
(i.e. wrap the RFA appender in the Async appender). This custom appender 
therefore never needs to be migrated to log4j2; we can simply remove it.
   2. Until we migrate to log4j2, we still have a way to wrap the RFA in an 
Async appender without touching the code: we can refer to this custom appender 
directly in log4j.properties (as done in this patch).
   
   It is temporary but necessary until we move to log4j2, so that we do not 
lose the ability to wrap the RFA in an Async appender.





> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate the async appenders that 
> we add "dynamically in the code" to the log4j.properties file. Instead of 
> using core/hdfs site configs, log4j properties or system properties should 
> be used to determine whether a given logger should use an async appender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5312: YARN-11375. [Federation] Support refreshAdminAcls、refreshServiceAcls API's for Federation.

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5312:
URL: https://github.com/apache/hadoop/pull/5312#issuecomment-1439436708

   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 49s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +0 :ok: |  buf  |   0m  0s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 12s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  31m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   9m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   8m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m  6s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 46s |  |  trunk passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   3m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   7m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  24m 19s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   9m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  cc  |   9m  2s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   9m  2s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   8m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  cc  |   8m 29s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   8m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m 39s |  |  
hadoop-yarn-project/hadoop-yarn: The patch generated 0 new + 7 unchanged - 9 
fixed = 7 total (was 16)  |
   | +1 :green_heart: |  mvnsite  |   3m 46s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   3m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   7m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 12s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 43s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  99m 47s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 44s |  |  hadoop-yarn-server-router in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 286m 30s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5312/13/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5312 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets cc buflint 
bufcompat |
   | uname | Linux 47de842257f8 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 2959e4c4f03c5daa79aee78af28044edb4461368 |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib

[GitHub] [hadoop] virajjasani commented on a diff in pull request #5418: HADOOP-18631. Migrate Async appenders to log4j properties

2023-02-21 Thread via GitHub


virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113770486


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT
+hdfs.audit.log.maxfilesize=256MB
+hdfs.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAAUDIT.layout.ConversionPattern=%m%n
+log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
+log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
+log4j.appender.ASYNCAPPENDER=org.apache.log4j.AsyncAppender

Review Comment:
   In fact, this is why I added the TODO comments in this patch: the async 
appender config needs to be reworked anyway as part of the move to log4j2 
properties. The TODO points at using an appender ref inside the AsyncAppender 
in log4j2.properties; once that work is complete, the TODO should be removed 
as part of it.
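   
   For the log4j2 side this TODO refers to, the key point is that the standard 
log4j2 Async appender takes an appender ref, so no custom wrapper class is 
needed. As a hedged sketch only (shown with the programmatic 
ConfigurationBuilder API rather than the log4j2.properties file the project 
will eventually adopt, which is not part of this thread), with all names, 
paths and sizes being illustrative assumptions:
   
   ```java
   import org.apache.logging.log4j.Level;
   import org.apache.logging.log4j.core.config.Configurator;
   import org.apache.logging.log4j.core.config.builder.api.AppenderComponentBuilder;
   import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilder;
   import org.apache.logging.log4j.core.config.builder.api.ConfigurationBuilderFactory;
   import org.apache.logging.log4j.core.config.builder.impl.BuiltConfiguration;

   public class Log4j2AsyncAuditSketch {
     public static void main(String[] args) {
       ConfigurationBuilder<BuiltConfiguration> builder =
           ConfigurationBuilderFactory.newConfigurationBuilder();

       // Rolling file appender for the audit log.
       AppenderComponentBuilder rfa = builder.newAppender("RFAAUDIT", "RollingFile")
           .addAttribute("fileName", "/tmp/hdfs-audit.log")
           .addAttribute("filePattern", "/tmp/hdfs-audit.log.%i")
           .add(builder.newLayout("PatternLayout").addAttribute("pattern", "%m%n"))
           .addComponent(builder.newComponent("Policies")
               .addComponent(builder.newComponent("SizeBasedTriggeringPolicy")
                   .addAttribute("size", "256MB")));
       builder.add(rfa);

       // Async appender that simply wraps RFAAUDIT through an appender ref.
       AppenderComponentBuilder async = builder.newAppender("ASYNCAUDIT", "Async")
           .addComponent(builder.newAppenderRef("RFAAUDIT"));
       builder.add(async);

       // Route the audit logger through the async wrapper only.
       builder.add(builder
           .newLogger("org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit", Level.INFO)
           .add(builder.newAppenderRef("ASYNCAUDIT"))
           .addAttribute("additivity", false));
       builder.add(builder.newRootLogger(Level.INFO));

       Configurator.initialize(builder.build());
     }
   }
   ```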



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18631) Migrate Async appenders to log4j properties

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691926#comment-17691926
 ] 

ASF GitHub Bot commented on HADOOP-18631:
-

virajjasani commented on code in PR #5418:
URL: https://github.com/apache/hadoop/pull/5418#discussion_r1113770486


##
hadoop-hdfs-project/hadoop-hdfs/src/test/resources/log4j.properties:
##
@@ -49,4 +56,24 @@ log4j.appender.DNMETRICSRFA.MaxBackupIndex=1
 log4j.appender.DNMETRICSRFA.MaxFileSize=64MB
 
 # Supress KMS error log
-log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
\ No newline at end of file
+log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
+
+#
+# hdfs audit logging
+#
+
+# TODO : log4j2 properties to provide example for using Async appender with 
other appenders
+hdfs.audit.logger=INFO,ASYNCAPPENDER,RFAAUDIT
+hdfs.audit.log.maxfilesize=256MB
+hdfs.audit.log.maxbackupindex=20
+log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
+log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
+log4j.appender.RFAAUDIT=org.apache.log4j.RollingFileAppender
+log4j.appender.RFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
+log4j.appender.RFAAUDIT.layout=org.apache.log4j.PatternLayout
+log4j.appender.RFAAUDIT.layout.ConversionPattern=%m%n
+log4j.appender.RFAAUDIT.MaxFileSize=${hdfs.audit.log.maxfilesize}
+log4j.appender.RFAAUDIT.MaxBackupIndex=${hdfs.audit.log.maxbackupindex}
+log4j.appender.ASYNCAPPENDER=org.apache.log4j.AsyncAppender

Review Comment:
   In fact, this is why I added the TODO comments in this patch: the async 
appender config needs to be reworked anyway as part of the move to log4j2 
properties. The TODO points at using an appender ref inside the AsyncAppender 
in log4j2.properties; once that work is complete, the TODO should be removed 
as part of it.





> Migrate Async appenders to log4j properties
> ---
>
> Key: HADOOP-18631
> URL: https://issues.apache.org/jira/browse/HADOOP-18631
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>
> Before we can upgrade to log4j2, we need to migrate the async appenders that 
> we add "dynamically in the code" to the log4j.properties file. Instead of 
> using core/hdfs site configs, log4j properties or system properties should 
> be used to determine whether a given logger should use an async appender.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet opened a new pull request, #5422: HADOOP-18633. fix test AbstractContractDistCpTest#testDistCpUpdateCheckFileSkip

2023-02-21 Thread via GitHub


mehakmeet opened a new pull request, #5422:
URL: https://github.com/apache/hadoop/pull/5422

   ### Description of PR
   Fixing the test `AbstractContractDistCpTest#testDistCpUpdateCheckFileSkip`
   
   ### How was this patch tested?
   Tested on s3a, abfs and local fs.
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-18633) fix test AbstractContractDistCpTest#testDistCpUpdateCheckFileSkip

2023-02-21 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-18633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17691945#comment-17691945
 ] 

ASF GitHub Bot commented on HADOOP-18633:
-

mehakmeet opened a new pull request, #5422:
URL: https://github.com/apache/hadoop/pull/5422

   ### Description of PR
   Fixing the test `AbstractContractDistCpTest#testDistCpUpdateCheckFileSkip`
   
   ### How was this patch tested?
   Tested on s3a, abfs and local fs.
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   




> fix test AbstractContractDistCpTest#testDistCpUpdateCheckFileSkip 
> --
>
> Key: HADOOP-18633
> URL: https://issues.apache.org/jira/browse/HADOOP-18633
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>
> In the newly introduced test testDistCpUpdateCheckFileSkip, on the first 
> pass of "distcp -update" the target file should not be present, so that the 
> copy takes place and creates the target file. 
> Currently, we create both the source and the target file with the same block 
> size from the start, which can lead to flakiness: a race condition can leave 
> the modification time of the target file greater than or equal to that of 
> the source, so the file is not copied at all. This shows up more often in 
> TestLocalContractDistCp because no remote calls are needed to create the 
> target.
> {code:java}
> java.lang.AssertionError: Mismatch in COPY counter value expected:<1> but 
> was:<0>
>   at org.junit.Assert.fail(Assert.java:89)
>   at org.junit.Assert.failNotEquals(Assert.java:835)
>   at org.junit.Assert.assertEquals(Assert.java:647)
>   at 
> org.apache.hadoop.tools.contract.AbstractContractDistCpTest.verifySkipAndCopyCounter(AbstractContractDistCpTest.java:1000)
>   at 
> org.apache.hadoop.tools.contract.AbstractContractDistCpTest.testDistCpUpdateCheckFileSkip(AbstractContractDistCpTest.java:919)
>  {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hfutatzhanghb opened a new pull request, #5423: HDFS-16882. RBF: Add cache hit rate metric in MountTableResolver#getD…

2023-02-21 Thread via GitHub


hfutatzhanghb opened a new pull request, #5423:
URL: https://github.com/apache/hadoop/pull/5423

   Please see https://github.com/apache/hadoop/pull/5276.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hfutatzhanghb commented on pull request #5423: HDFS-16882. RBF: Add cache hit rate metric in MountTableResolver#getDestinationForPath

2023-02-21 Thread via GitHub


hfutatzhanghb commented on PR #5423:
URL: https://github.com/apache/hadoop/pull/5423#issuecomment-1439505723

   Hi @tomscut, I have backported the changes to branch-3.3. Please take a 
look. Thanks a lot.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #5337: MAPREDUCE-7386. Maven parallel builds (skipping tests) fail

2023-02-21 Thread via GitHub


hadoop-yetus commented on PR #5337:
URL: https://github.com/apache/hadoop/pull/5337#issuecomment-1439513930

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 17s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  0s |  |  detect-secrets was not available.  
|
   | +0 :ok: |  xmllint  |   0m  0s |  |  xmllint was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m  0s |  |  Maven dependency ordering for branch  |
   | -1 :x: |  mvninstall  |  27m 15s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5337/2/artifact/out/branch-mvninstall-root.txt)
 |  root in branch-3.3 failed.  |
   | +1 :green_heart: |  compile  |  18m 15s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   8m  7s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   4m 24s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  99m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  10m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |  17m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   7m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   4m 27s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  38m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 22s |  |  hadoop-project in the patch 
passed.  |
   | -1 :x: |  unit  | 217m 34s | 
[/patch-unit-hadoop-yarn-project.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5337/2/artifact/out/patch-unit-hadoop-yarn-project.txt)
 |  hadoop-yarn-project in the patch passed.  |
   | +1 :green_heart: |  unit  | 156m 58s |  |  hadoop-mapreduce-project in the 
patch passed.  |
   | +1 :green_heart: |  unit  |   0m 50s |  |  hadoop-dist in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 15s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 546m 22s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.server.nodemanager.amrmproxy.TestFederationInterceptor |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5337/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5337 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell detsecrets xmllint |
   | uname | Linux be470f3535b4 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 
18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / f548c36c44b2bdd27d2084001216487da970db1d |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~18.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5337/2/testReport/ |
   | Max. process+thread count | 1641 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-yarn-project hadoop-mapreduce-project 
hadoop-dist U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5337/2/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: commo

[GitHub] [hadoop] tomicooler commented on pull request #5311: MAPREDUCE-7431. ShuffleHandler refactor and fix after Netty4 upgrade.

2023-02-21 Thread via GitHub


tomicooler commented on PR #5311:
URL: https://github.com/apache/hadoop/pull/5311#issuecomment-1439525148

   > the test is failing now: 
https://ci-hadoop.apache.org/view/Hadoop/job/hadoop-qbt-trunk-java8-linux-x86_64/1143/testReport/junit/org.apache.hadoop.mapred/TestShuffleHandler/testMapFileAccess/
   > 
   > Though it passes locally for me. Looks like something specific to jenkins:
   > 
   > ```
   > Caused by: java.io.IOException: Owner 'jenkins' for path 
/tmp/test-shuffle-channel-handler1169959588626711767/job_1_0001/testUser/attempt_1_0001_m_01_0/index
 did not match expected owner 'testUser'
   > ```
   > 
   > anyone planning to fix or should we revert this?
   
   [MAPREDUCE-7434](https://issues.apache.org/jira/browse/MAPREDUCE-7434)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


