[GitHub] [hadoop] hadoop-yetus commented on pull request #3014: HDFS-16026. Restore cross platform mkstemp

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #3014:
URL: https://github.com/apache/hadoop/pull/3014#issuecomment-842850043


   (!) A patch to the testing environment has been detected. 
   Re-executing against the patched versions to perform further tests. 
   The console is at 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/15/console in 
case of problems.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17705) S3A to add Config to set AWS region

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17705?focusedWorklogId=598432&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598432
 ]

ASF GitHub Bot logged work on HADOOP-17705:
---

Author: ASF GitHub Bot
Created on: 18/May/21 05:19
Start Date: 18/May/21 05:19
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on pull request #3020:
URL: https://github.com/apache/hadoop/pull/3020#issuecomment-842847895


   LGTM, will wait for yetus to become green. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598432)
Time Spent: 50m  (was: 40m)

> S3A to add Config to set AWS region
> ---
>
> Key: HADOOP-17705
> URL: https://issues.apache.org/jira/browse/HADOOP-17705
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, the AWS region is constructed from the endpoint URL, under the 
> assumption that the 2nd component after the "." delimiter is the region. This 
> doesn't work for private links, so the region falls back to the default 
> us-east-1, causing authorization issues w.r.t. the private link.
> Proposed: an AWS region config which, when set, bypasses the construction of 
> the region from the endpoint URL.
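A minimal sketch of what the proposed override could look like from a client 
(assumption: the key name "fs.s3a.endpoint.region" and its semantics are taken 
from this proposal and may change before the patch is merged):

{code:java}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class S3ARegionExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Explicit region: bypasses parsing the region out of the endpoint URL,
    // which is what breaks for private link endpoints.
    conf.set("fs.s3a.endpoint.region", "us-west-2");
    FileSystem fs = FileSystem.get(URI.create("s3a://my-bucket/"), conf);
    System.out.println("filesystem: " + fs.getUri());
  }
}
{code}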



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mukund-thakur commented on pull request #3020: HADOOP-17705. S3A to add Config to set AWS region

2021-05-17 Thread GitBox


mukund-thakur commented on pull request #3020:
URL: https://github.com/apache/hadoop/pull/3020#issuecomment-842847895


   LGTM, will wait for yetus to become green. 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17699) Remove hardcoded SunX509 usage from SSLFactory

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17699?focusedWorklogId=598415&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598415
 ]

ASF GitHub Bot logged work on HADOOP-17699:
---

Author: ASF GitHub Bot
Created on: 18/May/21 04:12
Start Date: 18/May/21 04:12
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3016:
URL: https://github.com/apache/hadoop/pull/3016#discussion_r634023560



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
##
@@ -99,7 +101,13 @@
   public static final String SSL_SERVER_EXCLUDE_CIPHER_LIST =
   "ssl.server.exclude.cipher.list";
 
-  public static final String SSLCERTIFICATE = IBM_JAVA?"ibmX509":"SunX509";
+  public static final String KEY_MANAGER_SSLCERTIFICATE =

Review comment:
   We removed a public static final variable here. Searching Apache projects on 
GitHub, none of them use Hadoop's SSLFactory.SSLCERTIFICATE, so we're probably 
fine here.

##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestSSLFactory.java
##
@@ -367,6 +369,20 @@ public void invalidHostnameVerifier() throws Exception {
 }
   }
 
+  @Test
+  public void testDifferentAlgorithm() throws Exception {
+    Configuration conf = createConfiguration(false, true);
+    String currAlg = getProperty("ssl.KeyManagerFactory.algorithm");

Review comment:
   The property is used by JDK API KeyManagerFactory#getDefaultAlgorithm()
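
For context, a minimal sketch of how that lookup behaves (the "SunX509" 
fallback matches OpenJDK; other JDK vendors may differ):

{code:java}
import java.security.Security;

import javax.net.ssl.KeyManagerFactory;

public class DefaultAlgorithmDemo {
  public static void main(String[] args) throws Exception {
    // getDefaultAlgorithm() reads the security property
    // "ssl.KeyManagerFactory.algorithm" and falls back to "SunX509"
    // (on OpenJDK) when the property is unset.
    System.out.println("property: "
        + Security.getProperty("ssl.KeyManagerFactory.algorithm"));
    String alg = KeyManagerFactory.getDefaultAlgorithm();
    System.out.println("default algorithm: " + alg);
    // A factory built from the default works with whichever JSSE provider
    // is installed (e.g. a FIPS provider), unlike a hardcoded "SunX509".
    KeyManagerFactory kmf = KeyManagerFactory.getInstance(alg);
    System.out.println("provider: " + kmf.getProvider());
  }
}
{code}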




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598415)
Time Spent: 1h 20m  (was: 1h 10m)

> Remove hardcoded SunX509 usage from SSLFactory
> --
>
> Key: HADOOP-17699
> URL: https://issues.apache.org/jira/browse/HADOOP-17699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> In SSLFactory.SSLCERTIFICATE, used by FileBasedKeyStoresFactory and 
> ReloadingX509TrustManager, there is a hardcoded reference to "SunX509" which 
> is used to get a KeyManager/TrustManager. This KeyManager type might not be 
> available if using the other JSSE providers, e.g.,  in FIPS deployment.
>  
> {code:java}
> WARN org.apache.hadoop.hdfs.web.URLConnectionFactory: Cannot load customized 
> ssl related configuration. Fall
>  back to system-generic settings.
>  java.security.NoSuchAlgorithmException: SunX509 KeyManagerFactory not 
> available
>  at sun.security.jca.GetInstance.getInstance(GetInstance.java:159)
>  at javax.net.ssl.KeyManagerFactory.getInstance(KeyManagerFactory.java:137)
>  at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:186)
>  at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:187)
>  at 
> org.apache.hadoop.hdfs.web.SSLConnectionConfigurator.(SSLConnectionConfigurator.java:50)
>  at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:100)
>  at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:79)
> {code}
> This ticket is opened to use the DefaultAlgorithm defined by Java system 
> property: 
> ssl.KeyManagerFactory.algorithm and ssl.TrustManagerFactory.algorithm.
>  
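A minimal sketch of the direction the ticket describes (assumption: the actual 
patch may structure this differently inside FileBasedKeyStoresFactory):

{code:java}
import java.security.NoSuchAlgorithmException;

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.TrustManagerFactory;

public final class DefaultAlgorithmFactories {
  private DefaultAlgorithmFactories() {
  }

  // Replace the hardcoded SunX509 constant with the JDK defaults, which
  // honor the ssl.KeyManagerFactory.algorithm and
  // ssl.TrustManagerFactory.algorithm security properties.
  public static KeyManagerFactory newKeyManagerFactory()
      throws NoSuchAlgorithmException {
    return KeyManagerFactory.getInstance(
        KeyManagerFactory.getDefaultAlgorithm());
  }

  public static TrustManagerFactory newTrustManagerFactory()
      throws NoSuchAlgorithmException {
    return TrustManagerFactory.getInstance(
        TrustManagerFactory.getDefaultAlgorithm());
  }
}
{code}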



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on a change in pull request #3016: HADOOP-17699. Remove hardcoded SunX509 usage from SSLFactory.

2021-05-17 Thread GitBox


jojochuang commented on a change in pull request #3016:
URL: https://github.com/apache/hadoop/pull/3016#discussion_r634023560



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
##
@@ -99,7 +101,13 @@
   public static final String SSL_SERVER_EXCLUDE_CIPHER_LIST =
   "ssl.server.exclude.cipher.list";
 
-  public static final String SSLCERTIFICATE = IBM_JAVA?"ibmX509":"SunX509";
+  public static final String KEY_MANAGER_SSLCERTIFICATE =

Review comment:
   We removed a public static final variable here. Searching Apache projects on 
GitHub, none of them use Hadoop's SSLFactory.SSLCERTIFICATE, so we're probably 
fine here.

##
File path: 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestSSLFactory.java
##
@@ -367,6 +369,20 @@ public void invalidHostnameVerifier() throws Exception {
 }
   }
 
+  @Test
+  public void testDifferentAlgorithm() throws Exception {
+    Configuration conf = createConfiguration(false, true);
+    String currAlg = getProperty("ssl.KeyManagerFactory.algorithm");

Review comment:
   The property is used by JDK API KeyManagerFactory#getDefaultAlgorithm()




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on a change in pull request #2985: HADOOP-17115. Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and hadoop-tools

2021-05-17 Thread GitBox


virajjasani commented on a change in pull request #2985:
URL: https://github.com/apache/hadoop/pull/2985#discussion_r634021445



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Sets.java
##
@@ -0,0 +1,329 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.util;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.EnumSet;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedHashSet;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Static utility methods pertaining to {@link Set} instances.
+ * This class is Hadoop's internal use alternative to Guava's Sets
+ * utility class.
+ */
+@InterfaceAudience.Private
+public final class Sets {
+
+  private Sets() {
+    // empty
+  }
+
+  /**
+   * Creates a mutable, initially empty {@code HashSet} instance.
+   *
+   * Note: if mutability is not required, use ImmutableSet#of()
+   * instead. If {@code E} is an {@link Enum} type, use {@link EnumSet#noneOf}
+   * instead. Otherwise, strongly consider using a {@code LinkedHashSet}
+   * instead, at the cost of increased memory footprint, to get
+   * deterministic iteration behavior.
+   */
+  public static <E> HashSet<E> newHashSet() {
+    return new HashSet<E>();
+  }
+
+  /**
+   * Creates a mutable, empty {@code TreeSet} instance sorted by the
+   * natural sort ordering of its elements.
+   *
+   * Note: if mutability is not required, use ImmutableSortedSet#of()
+   * instead.
+   *
+   * @return a new, empty {@code TreeSet}
+   */
+  public static <E> TreeSet<E> newTreeSet() {
+    return new TreeSet<E>();
+  }

Review comment:
   Sure thing, once this lands, I will create these sub-tasks:
   1. Replace Guava Sets by Hadoop's own Sets for each module: HDFS, Yarn, 
MapReduce
   2. Replace Sets#newHashSet and Sets#newTreeSet by directly using the 
respective constructors (with label: beginners) for each module: HDFS, Hadoop 
Common, Hadoop Tools, Yarn, MapReduce
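
As a concrete illustration of sub-task 2 above, the mechanical replacement 
would look roughly like this (sketch only, assuming org.apache.hadoop.util.Sets 
lands as proposed in this PR):

{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

import org.apache.hadoop.util.Sets;

public class SetsMigrationExample {
  public static void main(String[] args) {
    // Before: the interim factory helpers.
    Set<String> before = Sets.newHashSet();
    Set<String> beforeSorted = Sets.newTreeSet();
    // After (sub-task 2): plain constructors, since the diamond operator
    // makes the factory methods redundant on modern Java.
    Set<String> after = new HashSet<>();
    Set<String> afterSorted = new TreeSet<>();
    before.add("x"); beforeSorted.add("x");
    after.add("x"); afterSorted.add("x");
    System.out.println(before + " " + beforeSorted + " "
        + after + " " + afterSorted);
  }
}
{code}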




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17703) checkcompatibility.py errors out when specifying annotations

2021-05-17 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-17703.
--
Fix Version/s: 3.2.3
   3.1.5
   3.4.0
   3.3.1
   Resolution: Fixed

> checkcompatibility.py errors out when specifying annotations
> 
>
> Key: HADOOP-17703
> URL: https://issues.apache.org/jira/browse/HADOOP-17703
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hadoop/blob/trunk/dev-support/bin/checkcompatibility.py#L178]
> {code:java}
>  with file(annotations_path, "w") as f: {code}
> is not valid Python 3 code: the {{file()}} builtin was removed in Python 3; 
> {{open()}} is the replacement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17703) checkcompatibility.py errors out when specifying annotations

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17703?focusedWorklogId=598409&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598409
 ]

ASF GitHub Bot logged work on HADOOP-17703:
---

Author: ASF GitHub Bot
Created on: 18/May/21 03:22
Start Date: 18/May/21 03:22
Worklog Time Spent: 10m 
  Work Description: jojochuang merged pull request #3017:
URL: https://github.com/apache/hadoop/pull/3017


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598409)
Time Spent: 1h 10m  (was: 1h)

> checkcompatibility.py errors out when specifying annotations
> 
>
> Key: HADOOP-17703
> URL: https://issues.apache.org/jira/browse/HADOOP-17703
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hadoop/blob/trunk/dev-support/bin/checkcompatibility.py#L178]
> {code:java}
>  with file(annotations_path, "w") as f: {code}
> is not valid Python 3 code: the {{file()}} builtin was removed in Python 3; 
> {{open()}} is the replacement.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang merged pull request #3017: HADOOP-17703. checkcompatibility.py errors out when specifying annotations.

2021-05-17 Thread GitBox


jojochuang merged pull request #3017:
URL: https://github.com/apache/hadoop/pull/3017


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17666) Update LICENSE for 3.3.1

2021-05-17 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-17666:
-
Labels: pull-request-available release-blocker  (was: 
pull-request-available)

> Update LICENSE for 3.3.1
> 
>
> Key: HADOOP-17666
> URL: https://issues.apache.org/jira/browse/HADOOP-17666
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available, release-blocker
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Before release, do another round of check for the LICENSE file to make sure 
> the dependency versions are updated correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17666) Update LICENSE for 3.3.1

2021-05-17 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-17666:


Target Version/s: 3.3.1
Assignee: Wei-Chiu Chuang
Priority: Blocker  (was: Major)

> Update LICENSE for 3.3.1
> 
>
> Key: HADOOP-17666
> URL: https://issues.apache.org/jira/browse/HADOOP-17666
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Blocker
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Before release, do another round of check for the LICENSE file to make sure 
> the dependency versions are updated correctly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17707) Remove jaeger document from site index

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17707:

Labels: pull-request-available  (was: )

> Remove jaeger document from site index
> --
>
> Key: HADOOP-17707
> URL: https://issues.apache.org/jira/browse/HADOOP-17707
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Takanobu Asanuma
>Assignee: Takanobu Asanuma
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17707) Remove jaeger document from site index

2021-05-17 Thread Takanobu Asanuma (Jira)
Takanobu Asanuma created HADOOP-17707:
-

 Summary: Remove jaeger document from site index
 Key: HADOOP-17707
 URL: https://issues.apache.org/jira/browse/HADOOP-17707
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Takanobu Asanuma
Assignee: Takanobu Asanuma






--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomscut opened a new pull request #3021: HDFS-15814. Make some parameters configurable for DataNodeDiskMetrics…

2021-05-17 Thread GitBox


tomscut opened a new pull request #3021:
URL: https://github.com/apache/hadoop/pull/3021


   JIRA: [HDFS-15814](https://issues.apache.org/jira/browse/HDFS-15814)
   
   This change was committed to trunk (3.4.0). The commit does not apply cleanly 
to branch-3.3, so I created a new PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ferhui commented on pull request #3009: HDFS-16024: RBF: Rename data to the Trash should be based on src loca…

2021-05-17 Thread GitBox


ferhui commented on pull request #3009:
URL: https://github.com/apache/hadoop/pull/3009#issuecomment-842789126


   @zhuxiangyi the case I mentioned is common, not special. If you delete a 
directory, the baseTrashPath (even if the TrashRoot exists) would be created the 
first time. Anyway, of the 2 ways you mentioned above, I think the 1st is better 
than the 2nd, because if users can create the baseTrashPath (using the real 
namespace?), they can also delete it directly.
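
For readers outside the thread, a rough illustration of the terms being 
discussed (illustrative only, not this PR's code; the layout assumes the 
default trash policy and a hypothetical user "alice"):

{code:java}
import org.apache.hadoop.fs.Path;

public class TrashPathExample {
  public static void main(String[] args) {
    // Deleting /data/logs/app moves it under the user's trash root, so the
    // parent directory inside the trash (the "baseTrashPath") has to be
    // created on the first delete even when the TrashRoot already exists.
    Path trashRoot = new Path("/user/alice/.Trash/Current");
    Path src = new Path("/data/logs/app");
    Path baseTrashPath = Path.mergePaths(trashRoot, src.getParent());
    Path target = Path.mergePaths(trashRoot, src);
    System.out.println("baseTrashPath: " + baseTrashPath);
    // prints /user/alice/.Trash/Current/data/logs
    System.out.println("target: " + target);
    // prints /user/alice/.Trash/Current/data/logs/app
  }
}
{code}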


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhuxiangyi commented on pull request #3009: HDFS-16024: RBF: Rename data to the Trash should be based on src loca…

2021-05-17 Thread GitBox


zhuxiangyi commented on pull request #3009:
URL: https://github.com/apache/hadoop/pull/3009#issuecomment-842779150


   @ferhui Sorry for the confusion, but isn't the problem we discussed above the 
one this PR should solve? For the special situation you mentioned, where the 
TrashRoot does not exist, I made two suggestions. @ferhui @goiri What do you 
think of doing this?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17704) HADOOP-16916 changed interface SASTokenProvider fields, breaking compatibility between 3.3.0 and 3.3.1

2021-05-17 Thread Thomas Marqardt (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17346535#comment-17346535
 ] 

Thomas Marqardt commented on HADOOP-17704:
--

It's unfortunate that an error was made and HADOOP-16730 was committed in 3.3.0 
instead of 3.3.1. 

The SASTokenProvider interface and implementation were a collaboration between 
Microsoft and Cloudera, and CDP has a dependency on the latest version of the 
SASTokenProvider interface (not the initial one).  The interface is annotated 
with @InterfaceStability.Unstable, and other than CDP I'm not aware of it being 
used.  The Apache Ranger source code does not use this interface, as far as I 
can tell, probably the source used by CDP has not yet been shared with the 
community.  Also, prior to HADOOP-16916 the implementation had a few issues, so 
it is extremely unlikely that anyone took a dependency.   You pointed out the 
interface change in this JIRA, but the underlying implementation was also 
changed in HADOOP-16916.  Attempting to fix the breaking change would be quite 
ugly, resulting in two underlying code paths, two interfaces (SASTokenProvider 
and SASTokenProvider2), and two sets of tests.  Since CDP needs the latest and 
I'm not aware of anyone else using this, I think the risk of breaking users is 
very, very low and we should not consider this a blocker for 3.3.1, but instead 
leave it as-is and resolve this JIRA.  

> HADOOP-16916 changed interface SASTokenProvider fields, breaking 
> compatibility between 3.3.0 and 3.3.1 
> ---
>
> Key: HADOOP-17704
> URL: https://issues.apache.org/jira/browse/HADOOP-17704
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Wei-Chiu Chuang
>Priority: Blocker
>
> I understand HADOOP-16730/HADOOP-16916 were made specifically for Ranger, but 
> I am not sure how Ranger consumes this feature. The interface SASTokenProvider 
> has a number of member fields whose names were changed in HADOOP-16916, 
> breaking compatibility between 3.3.0 and 3.3.1.
> As a matter of fact, the feature HADOOP-16730 itself was merged in 3.3.0, not 
> 3.3.1; I just corrected it today.
> I'm raising this jira and marking it as a blocker for 3.3.1. But if this isn't 
> a big deal then we can downgrade it, because this feature was not officially 
> in the 3.3.0 release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17669) Port HADOOP-17079, HADOOP-17505 to branch-3.3

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17669?focusedWorklogId=598392&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598392
 ]

ASF GitHub Bot logged work on HADOOP-17669:
---

Author: ASF GitHub Bot
Created on: 18/May/21 01:58
Start Date: 18/May/21 01:58
Worklog Time Spent: 10m 
  Work Description: jojochuang merged pull request #2959:
URL: https://github.com/apache/hadoop/pull/2959


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598392)
Time Spent: 1h  (was: 50m)

> Port HADOOP-17079, HADOOP-17505  to branch-3.3
> --
>
> Key: HADOOP-17669
> URL: https://issues.apache.org/jira/browse/HADOOP-17669
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang merged pull request #2959: HADOOP-17669. Backport HADOOP-17079, HADOOP-17505 to branch-3.3

2021-05-17 Thread GitBox


jojochuang merged pull request #2959:
URL: https://github.com/apache/hadoop/pull/2959


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17669) Port HADOOP-17079, HADOOP-17505 to branch-3.3

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17669?focusedWorklogId=598391&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598391
 ]

ASF GitHub Bot logged work on HADOOP-17669:
---

Author: ASF GitHub Bot
Created on: 18/May/21 01:57
Start Date: 18/May/21 01:57
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #2959:
URL: https://github.com/apache/hadoop/pull/2959#issuecomment-842764639


   None of the failed tests repro. I'll merge this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598391)
Time Spent: 50m  (was: 40m)

> Port HADOOP-17079, HADOOP-17505  to branch-3.3
> --
>
> Key: HADOOP-17669
> URL: https://issues.apache.org/jira/browse/HADOOP-17669
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on pull request #2959: HADOOP-17669. Backport HADOOP-17079, HADOOP-17505 to branch-3.3

2021-05-17 Thread GitBox


jojochuang commented on pull request #2959:
URL: https://github.com/apache/hadoop/pull/2959#issuecomment-842764639


   None of the failed tests repro. I'll merge this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ferhui commented on pull request #3009: HDFS-16024: RBF: Rename data to the Trash should be based on src loca…

2021-05-17 Thread GitBox


ferhui commented on pull request #3009:
URL: https://github.com/apache/hadoop/pull/3009#issuecomment-842747337


   @zhuxiangyi So I'm a bit confused: what is the problem that this PR 
resolves?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17706) Problem in installation of Hadoop 3.2 with docker - libc-bin bug

2021-05-17 Thread Yuval Rochman (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuval Rochman updated HADOOP-17706:
---
Affects Version/s: 3.2.2
 Priority: Blocker  (was: Major)

> Problem in installation of Hadoop 3.2 with docker - libc-bin bug 
> --
>
> Key: HADOOP-17706
> URL: https://issues.apache.org/jira/browse/HADOOP-17706
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.2.2
>Reporter: Yuval Rochman
>Priority: Blocker
>
> Hi, 
> I got the following bug while installing Hadoop 3.2.2 with docker:
> Processing triggers for libc-bin (2.23-0ubuntu11.2) ...
> WARN engine npm@7.13.0: wanted: \{"node":">=10"} (current: 
> \{"node":"4.2.6","npm":"3.5.2"})
> WARN engine npm@7.13.0: wanted: \{"node":">=10"} (current: 
> \{"node":"4.2.6","npm":"3.5.2"})
> /usr/local/lib
> `-- (empty)
> npm ERR! Linux 4.15.0-29-generic
> npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "install" "npm@latest" "-g"
> npm ERR! node v4.2.6
> npm ERR! npm v3.5.2
> npm ERR! path /usr/local/lib/node_modules/.staging/@npmcli/ci-detect-c7bf9552
> npm ERR! code ENOENT
> npm ERR! errno -2
> npm ERR! syscall rename
> npm ERR! enoent ENOENT: no such file or directory, rename 
> '/usr/local/lib/node_modules/.staging/@npmcli/ci-detect-c7bf9552' -> 
> '/usr/local/lib/node_modules/npm/node_modules/@npmcli/ci-detect'
> npm ERR! enoent ENOENT: no such file or directory, rename 
> '/usr/local/lib/node_modules/.staging/@npmcli/ci-detect-c7bf9552' -> 
> '/usr/local/lib/node_modules/npm/node_modules/@npmcli/ci-detect'
> npm ERR! enoent This is most likely not a problem with npm itself
> npm ERR! enoent and is related to npm not being able to find a file.
> npm ERR! enoent
> npm ERR! Please include the following file with any support request:
> npm ERR! /root/npm-debug.log
> npm ERR! code 1
>  
>  
> What can I do?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17701) HDFS rebalance commands

2021-05-17 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-17701.
--
Resolution: Invalid

Please use the [u...@hadoop.apache.org|mailto:u...@hadoop.apache.org] mailing 
list for usage questions.

> HDFS rebalance commands
> ---
>
> Key: HADOOP-17701
> URL: https://issues.apache.org/jira/browse/HADOOP-17701
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hadoop-thirdparty
>Reporter: satya kiran
>Priority: Minor
>
> Team,
> We have a 3-node cluster and one of the nodes is 100% disk utilized. When I 
> try to run rebalance commands from the CLI (hdfs balancer -source 
> 192.168.x.x), and even from Ambari, it is not releasing space. Could you 
> please help me with this?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17706) Problem in installation of Hadoop 3.2 with docker - libc-bin bug

2021-05-17 Thread Yuval Rochman (Jira)
Yuval Rochman created HADOOP-17706:
--

 Summary: Problem in installation of Hadoop 3.2 with docker - 
libc-bin bug 
 Key: HADOOP-17706
 URL: https://issues.apache.org/jira/browse/HADOOP-17706
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Yuval Rochman


Hi, 

I got the following bug while installing Hadoop 3.2.2 with docker:

Processing triggers for libc-bin (2.23-0ubuntu11.2) ...
WARN engine npm@7.13.0: wanted: \{"node":">=10"} (current: 
\{"node":"4.2.6","npm":"3.5.2"})
WARN engine npm@7.13.0: wanted: \{"node":">=10"} (current: 
\{"node":"4.2.6","npm":"3.5.2"})
/usr/local/lib
`-- (empty)

npm ERR! Linux 4.15.0-29-generic
npm ERR! argv "/usr/bin/nodejs" "/usr/bin/npm" "install" "npm@latest" "-g"
npm ERR! node v4.2.6
npm ERR! npm v3.5.2
npm ERR! path /usr/local/lib/node_modules/.staging/@npmcli/ci-detect-c7bf9552
npm ERR! code ENOENT
npm ERR! errno -2
npm ERR! syscall rename

npm ERR! enoent ENOENT: no such file or directory, rename 
'/usr/local/lib/node_modules/.staging/@npmcli/ci-detect-c7bf9552' -> 
'/usr/local/lib/node_modules/npm/node_modules/@npmcli/ci-detect'
npm ERR! enoent ENOENT: no such file or directory, rename 
'/usr/local/lib/node_modules/.staging/@npmcli/ci-detect-c7bf9552' -> 
'/usr/local/lib/node_modules/npm/node_modules/@npmcli/ci-detect'
npm ERR! enoent This is most likely not a problem with npm itself
npm ERR! enoent and is related to npm not being able to find a file.
npm ERR! enoent

npm ERR! Please include the following file with any support request:
npm ERR! /root/npm-debug.log
npm ERR! code 1

 

 

What can I do?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17699) Remove hardcoded SunX509 usage from SSLFactory

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17699?focusedWorklogId=598344&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598344
 ]

ASF GitHub Bot logged work on HADOOP-17699:
---

Author: ASF GitHub Bot
Created on: 17/May/21 23:45
Start Date: 17/May/21 23:45
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao commented on pull request #3016:
URL: https://github.com/apache/hadoop/pull/3016#issuecomment-842716012


   cc: @jojochuang 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598344)
Time Spent: 1h 10m  (was: 1h)

> Remove hardcoded SunX509 usage from SSLFactory
> --
>
> Key: HADOOP-17699
> URL: https://issues.apache.org/jira/browse/HADOOP-17699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> In SSLFactory.SSLCERTIFICATE, used by FileBasedKeyStoresFactory and 
> ReloadingX509TrustManager, there is a hardcoded reference to "SunX509" which 
> is used to get a KeyManager/TrustManager. This KeyManager type might not be 
> available if using the other JSSE providers, e.g.,  in FIPS deployment.
>  
> {code:java}
> WARN org.apache.hadoop.hdfs.web.URLConnectionFactory: Cannot load customized 
> ssl related configuration. Fall
>  back to system-generic settings.
>  java.security.NoSuchAlgorithmException: SunX509 KeyManagerFactory not 
> available
>  at sun.security.jca.GetInstance.getInstance(GetInstance.java:159)
>  at javax.net.ssl.KeyManagerFactory.getInstance(KeyManagerFactory.java:137)
>  at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:186)
>  at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:187)
>  at 
> org.apache.hadoop.hdfs.web.SSLConnectionConfigurator.(SSLConnectionConfigurator.java:50)
>  at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:100)
>  at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:79)
> {code}
> This ticket is opened to use the DefaultAlgorithm defined by Java system 
> property: 
> ssl.KeyManagerFactory.algorithm and ssl.TrustManagerFactory.algorithm.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoyuyao commented on pull request #3016: HADOOP-17699. Remove hardcoded SunX509 usage from SSLFactory.

2021-05-17 Thread GitBox


xiaoyuyao commented on pull request #3016:
URL: https://github.com/apache/hadoop/pull/3016#issuecomment-842716012


   cc: @jojochuang 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17699) Remove hardcoded SunX509 usage from SSLFactory

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17699?focusedWorklogId=598271&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598271
 ]

ASF GitHub Bot logged work on HADOOP-17699:
---

Author: ASF GitHub Bot
Created on: 17/May/21 21:24
Start Date: 17/May/21 21:24
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3016:
URL: https://github.com/apache/hadoop/pull/3016#issuecomment-842651044


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 67 unchanged - 
1 fixed = 67 total (was 68)  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  1s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 187m  7s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3016/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3016 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux e7c0d138c617 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c7a9b9760f48a4860d9f158f582ad55340e66830 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3016/3/testReport/ |
   | Max. process+thread count | 1252 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3016/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[GitHub] [hadoop] hadoop-yetus commented on pull request #3016: HADOOP-17699. Remove hardcoded SunX509 usage from SSLFactory.

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #3016:
URL: https://github.com/apache/hadoop/pull/3016#issuecomment-842651044


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m  9s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 59s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 48s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 58s |  |  
hadoop-common-project/hadoop-common: The patch generated 0 new + 67 unchanged - 
1 fixed = 67 total (was 68)  |
   | +1 :green_heart: |  mvnsite  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 58s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 35s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  1s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 187m  7s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3016/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3016 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux e7c0d138c617 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / c7a9b9760f48a4860d9f158f582ad55340e66830 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3016/3/testReport/ |
   | Max. process+thread count | 1252 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3016/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Work logged] (HADOOP-17115) Replace Guava Sets usage by Hadoop's own Sets

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17115?focusedWorklogId=598270&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598270
 ]

ASF GitHub Bot logged work on HADOOP-17115:
---

Author: ASF GitHub Bot
Created on: 17/May/21 21:14
Start Date: 17/May/21 21:14
Worklog Time Spent: 10m 
  Work Description: busbey commented on a change in pull request #2985:
URL: https://github.com/apache/hadoop/pull/2985#discussion_r633869080



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Sets.java
##
@@ -0,0 +1,376 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.util;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.EnumSet;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Static utility methods pertaining to {@link Set} instances.
+ * This class is Hadoop's internal use alternative to Guava's Sets
+ * utility class.
+ * Javadocs for majority of APIs in this class are taken from Guava's Sets
+ * class.
+ */
+@InterfaceAudience.Private
+public final class Sets {

Review comment:
   please indicate a specific Guava release said javadocs are from.

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Sets.java
##
@@ -0,0 +1,376 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.util;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.EnumSet;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Static utility methods pertaining to {@link Set} instances.
+ * This class is Hadoop's internal use alternative to Guava's Sets
+ * utility class.
+ * Javadocs for majority of APIs in this class are taken from Guava's Sets
+ * class.
+ */
+@InterfaceAudience.Private
+public final class Sets {
+
+  private static final int MAX_POWER_OF_TWO = 1 << (Integer.SIZE - 2);
+
+  private Sets() {
+    // empty
+  }
+
+  /**
+   * Creates a mutable, initially empty {@code HashSet} instance.
+   *
+   * Note: if mutability is not required, use ImmutableSet#of()
+   * instead. If {@code E} is an {@link Enum} type, use {@link EnumSet#noneOf}
+   * instead. Otherwise, strongly consider using a {@code LinkedHashSet}
+   * instead, at the cost of increased memory footprint, to get
+   * deterministic iteration behavior.
+   */
+  public static <E> HashSet<E> newHashSet() {
+    return new HashSet<E>();
+  }
+
+  /**
+   * Creates a mutable, empty {@code TreeSet} instance sorted by the
+   * natural sort ordering of its elements.
+   *
+   * Note: if mutability is not required, use ImmutableSortedSet#of()
+   * instead.
+   *
+   * @return a new, empty {@code TreeSet}
+   */
+  public static <E> TreeSet<E> newTreeSet() {
+    return new TreeSet<E>();
+  }
+
+  /**
+   * Creates a mutable {@code 

[GitHub] [hadoop] busbey commented on a change in pull request #2985: HADOOP-17115. Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and hadoop-tools

2021-05-17 Thread GitBox


busbey commented on a change in pull request #2985:
URL: https://github.com/apache/hadoop/pull/2985#discussion_r633869080



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Sets.java
##
@@ -0,0 +1,376 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.util;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.EnumSet;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Static utility methods pertaining to {@link Set} instances.
+ * This class is Hadoop's internal use alternative to Guava's Sets
+ * utility class.
+ * Javadocs for majority of APIs in this class are taken from Guava's Sets
+ * class.
+ */
+@InterfaceAudience.Private
+public final class Sets {

Review comment:
   please indicate a specific Guava release said javadocs are from.

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Sets.java
##
@@ -0,0 +1,376 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.util;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.EnumSet;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.Map;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Static utility methods pertaining to {@link Set} instances.
+ * This class is Hadoop's internal use alternative to Guava's Sets
+ * utility class.
+ * Javadocs for majority of APIs in this class are taken from Guava's Sets
+ * class.
+ */
+@InterfaceAudience.Private
+public final class Sets {
+
+  private static final int MAX_POWER_OF_TWO = 1 << (Integer.SIZE - 2);
+
+  private Sets() {
+    // empty
+  }
+
+  /**
+   * Creates a mutable, initially empty {@code HashSet} instance.
+   *
+   * Note: if mutability is not required, use ImmutableSet#of()
+   * instead. If {@code E} is an {@link Enum} type, use {@link EnumSet#noneOf}
+   * instead. Otherwise, strongly consider using a {@code LinkedHashSet}
+   * instead, at the cost of increased memory footprint, to get
+   * deterministic iteration behavior.
+   */
+  public static <E> HashSet<E> newHashSet() {
+    return new HashSet<E>();
+  }
+
+  /**
+   * Creates a mutable, empty {@code TreeSet} instance sorted by the
+   * natural sort ordering of its elements.
+   *
+   * Note: if mutability is not required, use ImmutableSortedSet#of()
+   * instead.
+   *
+   * @return a new, empty {@code TreeSet}
+   */
+  public static <E extends Comparable> TreeSet<E> newTreeSet() {
+    return new TreeSet<E>();
+  }
+
+  /**
+   * Creates a mutable {@code HashSet} instance initially containing
+   * the given elements.
+   *
+   * Note: if elements are non-null and won't be added or removed
+   * after this point, use ImmutableSet#of() or ImmutableSet#copyOf(Object[])
+   * instead. If {@code E} is an {@link Enum} type, use
+   * {@link EnumSet#of(Enum, Enum[])} instead. Otherwise, strongly consider
+   * using a {@code LinkedHashSet} instead, at the cost of increased memory
+   * 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3014: HDFS-16026. Restore cross platform mkstemp

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #3014:
URL: https://github.com/apache/hadoop/pull/3014#issuecomment-842594230


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  docker  | 203m 16s |  |  Docker failed to build 
yetus/hadoop:8f850b46a0.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/3014 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3014/14/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17680) Allow ProtobufRpcEngine to be extensible

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17680?focusedWorklogId=598160&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598160
 ]

ASF GitHub Bot logged work on HADOOP-17680:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:39
Start Date: 17/May/21 18:39
Worklog Time Spent: 10m 
  Work Description: jojochuang merged pull request #2999:
URL: https://github.com/apache/hadoop/pull/2999


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598160)
Time Spent: 2h 20m  (was: 2h 10m)

> Allow ProtobufRpcEngine to be extensible
> 
>
> Key: HADOOP-17680
> URL: https://issues.apache.org/jira/browse/HADOOP-17680
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Hector Sandoval Chaverri
>Assignee: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 2.10.2, 3.2.3
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The ProtobufRpcEngine class doesn't allow for new RpcEngine implementations 
> to extend some of its inner classes (e.g. Invoker and 
> Server.ProtoBufRpcInvoker). Also, some of its methods are long enough such 
> that overriding them would result in a lot of code duplication (e.g. 
> Invoker#invoke and Server.ProtoBufRpcInvoker#call).
> When implementing a new RpcEngine, it would be helpful to reuse most of the 
> code already in ProtobufRpcEngine. This would allow new fields to be added to 
> the RPC header or message with minimal code changes.
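
A self-contained sketch of the extension pattern this asks for, using
hypothetical names (BaseRpcEngine, TracingInvoker) rather than the real
ProtobufRpcEngine API:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Base engine whose inner invoker is protected and non-final, so a
// subclass can reuse its logic instead of duplicating it.
class BaseRpcEngine {
  protected static class Invoker {
    protected Map<String, String> headerFields(String method) {
      Map<String, String> h = new HashMap<>();
      h.put("method", method);
      return h;
    }
  }
}

class ExtendedRpcEngine extends BaseRpcEngine {
  // Adds one field to the RPC header with no copied code.
  protected static class TracingInvoker extends Invoker {
    @Override
    protected Map<String, String> headerFields(String method) {
      Map<String, String> h = super.headerFields(method);
      h.put("trace-id", UUID.randomUUID().toString());
      return h;
    }
  }
}
```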



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang merged pull request #2999: HDFS-15912. Allow ProtobufRpcEngine to be extensible

2021-05-17 Thread GitBox


jojochuang merged pull request #2999:
URL: https://github.com/apache/hadoop/pull/2999


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ferhui commented on pull request #3009: HDFS-16024: RBF: Rename data to the Trash should be based on src loca…

2021-05-17 Thread GitBox


ferhui commented on pull request #3009:
URL: https://github.com/apache/hadoop/pull/3009#issuecomment-841962259






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=598142&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598142
 ]

ASF GitHub Bot logged work on HADOOP-17511:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:37
Start Date: 17/May/21 18:37
Worklog Time Spent: 10m 
  Work Description: bogthe commented on a change in pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#discussion_r633141325



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/store/HttpReferrerAuditHeader.java
##
@@ -0,0 +1,500 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.store;
+
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.nio.charset.StandardCharsets;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Set;
+import java.util.StringJoiner;
+import java.util.function.Supplier;
+import java.util.stream.Collectors;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.fs.store.audit.CommonAuditContext;
+import org.apache.http.NameValuePair;
+import org.apache.http.client.utils.URLEncodedUtils;
+
+import static java.util.Objects.requireNonNull;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_ID;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_OP;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_PATH;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_PATH2;
+import static 
org.apache.hadoop.fs.store.audit.AuditConstants.REFERRER_ORIGIN_HOST;
+
+/**
+ * Contains all the logic for generating an HTTP "Referer"
+ * entry; includes escaping query params.
+ * Tests for this are in
+ * {@code org.apache.hadoop.fs.s3a.audit.TestHttpReferrerAuditHeader}
+ * so as to verify that header generation in the S3A auditors, and
+ * S3 log parsing, all work.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+public final class HttpReferrerAuditHeader {
+
+  /**
+   * Format of path to build: {@value}.
+   * The parameters passed in are (context ID, span ID, op).
+   */
+  public static final String REFERRER_PATH_FORMAT = "/%3$s/%2$s/";
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(HttpReferrerAuditHeader.class);
+
+  /**
+   * Log for warning of problems creating headers; it logs a given
+   * problem at most once per process instance.
+   * This is to avoid logs being flooded with errors.
+   */
+  private static final LogExactlyOnce WARN_OF_URL_CREATION =
+  new LogExactlyOnce(LOG);
+
+  /** Context ID. */
+  private final String contextId;
+
+  /** operation name. */
+  private final String operationName;
+
+  /** Span ID. */
+  private final String spanId;
+
+  /** optional first path. */
+  private final String path1;
+
+  /** optional second path. */
+  private final String path2;
+
+  /**
+   * The header as created in the constructor; used in toString().
+   * A new header is built on demand in {@link #buildHttpReferrer()}
+   * so that evaluated attributes are dynamically evaluated
+   * in the correct thread/place.
+   */
+  private final String initialHeader;
+
+  /**
+   * Map of simple attributes.
+   */
+  private final Map<String, String> attributes;
+
+  /**
+   * Parameters dynamically evaluated on the thread just before
+   * the request is made.
+   */
+  private final Map<String, Supplier<String>> evaluated;
+
+  /**
+   * Elements to filter from the final header.
+   */
+  private final Set<String> filter;
+
+  /**
+   * Instantiate.
+   *
+   * Context and operationId are expected to be well formed
+   * numeric/hex strings, at least adequate to be
+   * used as individual path elements in a URL.
+   */
+  private HttpReferrerAuditHeader(
+  final Builder builder) {
+this.contextId = requireNonNull(builder.contextId);
+  
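
The `evaluated` map above is the interesting part: values are Suppliers
resolved only when the header is built, so they capture the state of the
thread issuing the request. A minimal standalone sketch of that idea (not
the actual Hadoop code; names and the URL are made up):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;
import java.util.stream.Collectors;

public class ReferrerSketch {
  public static void main(String[] args) {
    // Deferred attributes, as in HttpReferrerAuditHeader#evaluated.
    Map<String, Supplier<String>> evaluated = new HashMap<>();
    evaluated.put("t", () -> Thread.currentThread().getName());
    evaluated.put("ts", () -> Long.toString(System.currentTimeMillis()));

    // Built on demand, as buildHttpReferrer() does per request.
    String query = evaluated.entrySet().stream()
        .map(e -> e.getKey() + "=" + e.getValue().get())
        .collect(Collectors.joining("&"));
    System.out.println("https://audit.example/op?" + query);
  }
}
```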

[jira] [Work logged] (HADOOP-17700) ExitUtil#halt info log with incorrect placeholders

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17700?focusedWorklogId=598134&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598134
 ]

ASF GitHub Bot logged work on HADOOP-17700:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:36
Start Date: 17/May/21 18:36
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on pull request #3015:
URL: https://github.com/apache/hadoop/pull/3015#issuecomment-842023031






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598134)
Time Spent: 1h 40m  (was: 1.5h)

> ExitUtil#halt info log with incorrect placeholders
> --
>
> Key: HADOOP-17700
> URL: https://issues.apache.org/jira/browse/HADOOP-17700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> ExitUtil#halt with a non-zero exit status code emits an info log with an
> incorrect number of placeholders. The HaltException should be logged along
> with the message.
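
The bug class is easy to reproduce with SLF4J; a simplified illustration
(not the actual ExitUtil source):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HaltLogSketch {
  private static final Logger LOG = LoggerFactory.getLogger(HaltLogSketch.class);

  static void halt(int status, Exception ee) {
    // Broken: two placeholders, one argument; the exception is never logged.
    LOG.info("Halt with status {} message {}", status);
    // Fixed: placeholders match the arguments, and the throwable is passed
    // last so SLF4J also prints its stack trace.
    LOG.info("Halt with status {} message {}", status, ee.getMessage(), ee);
  }
}
```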



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on pull request #3015: HADOOP-17700. ExitUtil#halt info log should log HaltException

2021-05-17 Thread GitBox


virajjasani commented on pull request #3015:
URL: https://github.com/apache/hadoop/pull/3015#issuecomment-842023031






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bogthe commented on a change in pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

2021-05-17 Thread GitBox


bogthe commented on a change in pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#discussion_r633141325



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/store/HttpReferrerAuditHeader.java
##
@@ -0,0 +1,500 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.store;
+
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.nio.charset.StandardCharsets;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Set;
+import java.util.StringJoiner;
+import java.util.function.Supplier;
+import java.util.stream.Collectors;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.commons.lang3.StringUtils;
+import org.apache.hadoop.classification.InterfaceAudience;
+import org.apache.hadoop.classification.InterfaceStability;
+import org.apache.hadoop.fs.store.audit.CommonAuditContext;
+import org.apache.http.NameValuePair;
+import org.apache.http.client.utils.URLEncodedUtils;
+
+import static java.util.Objects.requireNonNull;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_ID;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_OP;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_PATH;
+import static org.apache.hadoop.fs.store.audit.AuditConstants.PARAM_PATH2;
+import static 
org.apache.hadoop.fs.store.audit.AuditConstants.REFERRER_ORIGIN_HOST;
+
+/**
+ * Contains all the logic for generating an HTTP "Referer"
+ * entry; includes escaping query params.
+ * Tests for this are in
+ * {@code org.apache.hadoop.fs.s3a.audit.TestHttpReferrerAuditHeader}
+ * so as to verify that header generation in the S3A auditors, and
+ * S3 log parsing, all work.
+ */
+@InterfaceAudience.Private
+@InterfaceStability.Unstable
+public final class HttpReferrerAuditHeader {
+
+  /**
+   * Format of path to build: {@value}.
+   * The parameters passed in are (context ID, span ID, op).
+   */
+  public static final String REFERRER_PATH_FORMAT = "/%3$s/%2$s/";
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(HttpReferrerAuditHeader.class);
+
+  /**
+   * Log for warning of problems creating headers; it logs a given
+   * problem at most once per process instance.
+   * This is to avoid logs being flooded with errors.
+   */
+  private static final LogExactlyOnce WARN_OF_URL_CREATION =
+  new LogExactlyOnce(LOG);
+
+  /** Context ID. */
+  private final String contextId;
+
+  /** operation name. */
+  private final String operationName;
+
+  /** Span ID. */
+  private final String spanId;
+
+  /** optional first path. */
+  private final String path1;
+
+  /** optional second path. */
+  private final String path2;
+
+  /**
+   * The header as created in the constructor; used in toString().
+   * A new header is built on demand in {@link #buildHttpReferrer()}
+   * so that evaluated attributes are dynamically evaluated
+   * in the correct thread/place.
+   */
+  private final String initialHeader;
+
+  /**
+   * Map of simple attributes.
+   */
+  private final Map<String, String> attributes;
+
+  /**
+   * Parameters dynamically evaluated on the thread just before
+   * the request is made.
+   */
+  private final Map<String, Supplier<String>> evaluated;
+
+  /**
+   * Elements to filter from the final header.
+   */
+  private final Set<String> filter;
+
+  /**
+   * Instantiate.
+   *
+   * Context and operationId are expected to be well formed
+   * numeric/hex strings, at least adequate to be
+   * used as individual path elements in a URL.
+   */
+  private HttpReferrerAuditHeader(
+  final Builder builder) {
+this.contextId = requireNonNull(builder.contextId);
+this.evaluated = builder.evaluated;
+this.filter = builder.filter;
+this.operationName = requireNonNull(builder.operationName);
+this.path1 = builder.path1;
+this.path2 = builder.path2;
+this.spanId = requireNonNull(builder.spanId);
+
+// copy the parameters from the builder and extend
+attributes = builder.attributes;
+
+addAttribute(PARAM_OP, operationName);
+addAttribute(PARAM_PATH, 

[GitHub] [hadoop] anoopsjohn commented on pull request #2795: HADOOP-17596. ABFS: Change default Readahead Queue Depth from num(processors) to const

2021-05-17 Thread GitBox


anoopsjohn commented on pull request #2795:
URL: https://github.com/apache/hadoop/pull/2795#issuecomment-842063424






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17596) ABFS: Change default Readahead Queue Depth from num(processors) to const

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17596?focusedWorklogId=598127&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598127
 ]

ASF GitHub Bot logged work on HADOOP-17596:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:35
Start Date: 17/May/21 18:35
Worklog Time Spent: 10m 
  Work Description: anoopsjohn commented on pull request #2795:
URL: https://github.com/apache/hadoop/pull/2795#issuecomment-842063424






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598127)
Time Spent: 2h 20m  (was: 2h 10m)

> ABFS: Change default Readahead Queue Depth from num(processors) to const
> 
>
> Key: HADOOP-17596
> URL: https://issues.apache.org/jira/browse/HADOOP-17596
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Sumangala Patki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The default value of readahead queue depth is currently set to the number of 
> available processors. However, this can result in one inputstream instance 
> consuming more processor time. To ensure equal thread allocation during read 
> for all inputstreams created in a session, we change the default readahead 
> queue depth to a constant (2).
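
For jobs that want to pin this explicitly rather than rely on the default,
something like the following should work -- assuming the key is
"fs.azure.readaheadqueue.depth" (quoted from memory; verify against
AbfsConfiguration before relying on it):

```java
import org.apache.hadoop.conf.Configuration;

public class ReadaheadDepthSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Applies per input stream opened against ABFS with this conf.
    conf.setInt("fs.azure.readaheadqueue.depth", 2);
    System.out.println(conf.getInt("fs.azure.readaheadqueue.depth", -1));
  }
}
```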



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] zhuxiangyi commented on pull request #3009: HDFS-16024: RBF: Rename data to the Trash should be based on src loca…

2021-05-17 Thread GitBox


zhuxiangyi commented on pull request #3009:
URL: https://github.com/apache/hadoop/pull/3009#issuecomment-841979980






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17609) Make SM4 support optional for OpenSSL native code

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17609?focusedWorklogId=598104&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598104
 ]

ASF GitHub Bot logged work on HADOOP-17609:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:33
Start Date: 17/May/21 18:33
Worklog Time Spent: 10m 
  Work Description: iwasakims edited a comment on pull request #3019:
URL: https://github.com/apache/hadoop/pull/3019#issuecomment-842245830


   I manually tested this by `hadoop checknative` after `-Pnative -Pdist` build 
on CentOS 8.
   ```
   $ mvn clean install -DskipTests -DskipShade -Pnative -Pdist
   $ hadoop-dist/target/hadoop-3.4.0-SNAPSHOT/bin/hadoop checknative
   2021-05-17 11:19:03,102 INFO bzip2.Bzip2Factory: Successfully loaded & 
initialized native-bzip2 library system-native
   2021-05-17 11:19:03,106 INFO zlib.ZlibFactory: Successfully loaded & 
initialized native-zlib library
   2021-05-17 11:19:03,182 INFO nativeio.NativeIO: The native code was built 
without PMDK support.
   Native library checking:
   hadoop:  true /home/centos/srcs/hadoop/hadoop-dist/target/hadoop-3.4.0-SNAPSHOT/lib/native/libhadoop.so.1.0.0
   zlib:    true /lib64/libz.so.1
   zstd  :  true /lib64/libzstd.so.1
   bzip2:   true /lib64/libbz2.so.1
   openssl: true /lib64/libcrypto.so
   ISA-L:   true /lib64/libisal.so.2
   PMDK:    false The native code was built without PMDK support.
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598104)
Time Spent: 3h 40m  (was: 3.5h)

> Make SM4 support optional for OpenSSL native code
> -
>
> Key: HADOOP-17609
> URL: https://issues.apache.org/jira/browse/HADOOP-17609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.4.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098
> because SM4 is not enabled in that openssl package. We should not force
> users to install OpenSSL from source even if they do not use the SM4 feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3017: HADOOP-17703. checkcompatibility.py errors out when specifying annotations.

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #3017:
URL: https://github.com/apache/hadoop/pull/3017#issuecomment-842212446


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 15s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 34s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  17m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  pylint  |   0m 12s |  |  The patch generated 0 new + 
174 unchanged - 1 fixed = 174 total (was 175)  |
   | +1 :green_heart: |  shadedclient  |  18m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  52m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3017/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3017 |
   | Optional Tests | dupname asflicense codespell pylint |
   | uname | Linux b08b54f82054 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 43d30a7d62fc187a358d927d7cde792dfda2b2e0 |
   | Max. process+thread count | 592 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3017/1/console |
   | versions | git=2.25.1 maven=3.6.3 pylint=2.6.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] vinaysbadami commented on pull request #2795: HADOOP-17596. ABFS: Change default Readahead Queue Depth from num(processors) to const

2021-05-17 Thread GitBox


vinaysbadami commented on pull request #2795:
URL: https://github.com/apache/hadoop/pull/2795#issuecomment-842000750


   @anoopsjohn  ==> wrt your comment on "will this hamper spark jobs":
   1. This is a speculative readahead, so whenever the guess is wrong it is wasted IO and IOPS.
   2. With Parquet etc. the reads tend to be random, hence a smaller readahead depth is preferable.
   3. Based on debugging various customer perf issues, we rarely saw benefit from a depth > 2.
   4. This config is per input stream and not global across streams.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims edited a comment on pull request #3019: HADOOP-17609. Make SM4 support optional for OpenSSL native code.

2021-05-17 Thread GitBox


iwasakims edited a comment on pull request #3019:
URL: https://github.com/apache/hadoop/pull/3019#issuecomment-842245830


   I manually tested this by `hadoop checknative` after `-Pnative -Pdist` build 
on CentOS 8.
   ```
   $ mvn clean install -DskipTests -DskipShade -Pnative -Pdist
   $ hadoop-dist/target/hadoop-3.4.0-SNAPSHOT/bin/hadoop checknative
   2021-05-17 11:19:03,102 INFO bzip2.Bzip2Factory: Successfully loaded & 
initialized native-bzip2 library system-native
   2021-05-17 11:19:03,106 INFO zlib.ZlibFactory: Successfully loaded & 
initialized native-zlib library
   2021-05-17 11:19:03,182 INFO nativeio.NativeIO: The native code was built 
without PMDK support.
   Native library checking:
   hadoop:  true /home/centos/srcs/hadoop/hadoop-dist/target/hadoop-3.4.0-SNAPSHOT/lib/native/libhadoop.so.1.0.0
   zlib:    true /lib64/libz.so.1
   zstd  :  true /lib64/libzstd.so.1
   bzip2:   true /lib64/libbz2.so.1
   openssl: true /lib64/libcrypto.so
   ISA-L:   true /lib64/libisal.so.2
   PMDK:    false The native code was built without PMDK support.
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17596) ABFS: Change default Readahead Queue Depth from num(processors) to const

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17596?focusedWorklogId=598094&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598094
 ]

ASF GitHub Bot logged work on HADOOP-17596:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:31
Start Date: 17/May/21 18:31
Worklog Time Spent: 10m 
  Work Description: vinaysbadami commented on pull request #2795:
URL: https://github.com/apache/hadoop/pull/2795#issuecomment-842000750


   @anoopsjohn  ==> wrt your comment on "will this hamper spark jobs":
   1. This is a speculative readahead, so whenever the guess is wrong it is wasted IO and IOPS.
   2. With Parquet etc. the reads tend to be random, hence a smaller readahead depth is preferable.
   3. Based on debugging various customer perf issues, we rarely saw benefit from a depth > 2.
   4. This config is per input stream and not global across streams.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598094)
Time Spent: 2h 10m  (was: 2h)

> ABFS: Change default Readahead Queue Depth from num(processors) to const
> 
>
> Key: HADOOP-17596
> URL: https://issues.apache.org/jira/browse/HADOOP-17596
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Sumangala Patki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> The default value of readahead queue depth is currently set to the number of 
> available processors. However, this can result in one inputstream instance 
> consuming more processor time. To ensure equal thread allocation during read 
> for all inputstreams created in a session, we change the default readahead 
> queue depth to a constant (2).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17703) checkcompatibility.py errors out when specifying annotations

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17703?focusedWorklogId=598095&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598095
 ]

ASF GitHub Bot logged work on HADOOP-17703:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:31
Start Date: 17/May/21 18:31
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3017:
URL: https://github.com/apache/hadoop/pull/3017#issuecomment-842212446


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 15s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 34s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  shadedclient  |  17m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 24s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  pylint  |   0m 12s |  |  The patch generated 0 new + 
174 unchanged - 1 fixed = 174 total (was 175)  |
   | +1 :green_heart: |  shadedclient  |  18m 27s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  52m 54s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3017/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3017 |
   | Optional Tests | dupname asflicense codespell pylint |
   | uname | Linux b08b54f82054 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 43d30a7d62fc187a358d927d7cde792dfda2b2e0 |
   | Max. process+thread count | 592 (vs. ulimit of 5500) |
   | modules | C:  U:  |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3017/1/console |
   | versions | git=2.25.1 maven=3.6.3 pylint=2.6.0 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598095)
Time Spent: 1h  (was: 50m)

> checkcompatibility.py errors out when specifying annotations
> 
>
> Key: HADOOP-17703
> URL: https://issues.apache.org/jira/browse/HADOOP-17703
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hadoop/blob/trunk/dev-support/bin/checkcompatibility.py#L178]
> {code:java}
>  with file(annotations_path, "w") as f: {code}
> is not valid Python code (the {{file()}} builtin was removed in Python 3).
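
The Python 3 fix is the drop-in replacement builtin:

```python
with open(annotations_path, "w") as f:  # file() was removed in Python 3
    f.write(contents)                   # 'contents' is a stand-in name
```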



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3018: HDFS-16027. Replace abstract methods with default methods in JournalNodeMXBean

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #3018:
URL: https://github.com/apache/hadoop/pull/3018#issuecomment-842389544


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 53s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ branch-3.3 Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 44s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   0m 46s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  20m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 39s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3018/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 1 unchanged - 
0 fixed = 2 total (was 1)  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 208m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3018/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 297m  3s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3018/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3018 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 5819f7462518 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / ed82894f7c42f92a1b2c6c73f9a94d2d7ea59781 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~18.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3018/1/testReport/ |
   | Max. process+thread count | 2057 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3018/1/console |
   | versions | git=2.17.1 maven=3.6.0 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2747: HDFS-15877. BlockReconstructionWork should resetTargets() before BlockManager#validateReconstructionWork return false

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #2747:
URL: https://github.com/apache/hadoop/pull/2747#issuecomment-842490494


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 37s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 39s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 51s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 38s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 230m 32s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 44s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 327m  7s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2747/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2747 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 01da3e889f77 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 5f18053ba0061442c19be76a08b71190a1cf8225 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2747/3/testReport/ |
   | Max. process+thread count | 3783 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2747/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: 

[GitHub] [hadoop] jojochuang closed pull request #2905: HDFS-15912. Allow ProtobufRpcEngine to be extensible

2021-05-17 Thread GitBox


jojochuang closed pull request #2905:
URL: https://github.com/apache/hadoop/pull/2905


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dongjoon-hyun commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

2021-05-17 Thread GitBox


dongjoon-hyun commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-841888674


   Also, cc @sunchao 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17680) Allow ProtobufRpcEngine to be extensible

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17680?focusedWorklogId=598059&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598059
 ]

ASF GitHub Bot logged work on HADOOP-17680:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:28
Start Date: 17/May/21 18:28
Worklog Time Spent: 10m 
  Work Description: jojochuang closed pull request #2905:
URL: https://github.com/apache/hadoop/pull/2905


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598059)
Time Spent: 2h 10m  (was: 2h)

> Allow ProtobufRpcEngine to be extensible
> 
>
> Key: HADOOP-17680
> URL: https://issues.apache.org/jira/browse/HADOOP-17680
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Hector Sandoval Chaverri
>Assignee: Hector Sandoval Chaverri
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 2.10.2, 3.2.3
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> The ProtobufRpcEngine class doesn't allow for new RpcEngine implementations 
> to extend some of its inner classes (e.g. Invoker and 
> Server.ProtoBufRpcInvoker). Also, some of its methods are long enough such 
> that overriding them would result in a lot of code duplication (e.g. 
> Invoker#invoke and Server.ProtoBufRpcInvoker#call).
> When implementing a new RpcEngine, it would be helpful to reuse most of the 
> code already in ProtobufRpcEngine. This would allow new fields to be added to 
> the RPC header or message with minimal code changes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #881: YARN-2774. support secure clusters in shared cache manager

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #881:
URL: https://github.com/apache/hadoop/pull/881#issuecomment-842289293


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  15m 59s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 18s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  18m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 50s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m 10s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   4m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   4m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   8m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  6s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  4s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  20m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  18m  5s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 43s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-881/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 1 new + 238 unchanged - 1 fixed = 239 total (was 
239)  |
   | +1 :green_heart: |  mvnsite  |   5m  5s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   4m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   5m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   9m 41s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 16s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m  9s |  |  hadoop-yarn-api in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   4m 59s |  |  hadoop-yarn-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m  0s |  |  hadoop-yarn-server-common in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   0m 57s |  |  
hadoop-yarn-server-sharedcachemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 53s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 233m 41s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-881/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/881 |
   | Optional Tests | dupname asflicense mvnsite codespell markdownlint compile 
javac javadoc mvninstall unit shadedclient spotbugs checkstyle xml |
   | uname | Linux 14346a7c601a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 8b67a904d429eaf6c7f72a31d6e8f56cc7de961f |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Work logged] (HADOOP-17703) checkcompatibility.py errors out when specifying annotations

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17703?focusedWorklogId=598054&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598054
 ]

ASF GitHub Bot logged work on HADOOP-17703:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:28
Start Date: 17/May/21 18:28
Worklog Time Spent: 10m 
  Work Description: jojochuang opened a new pull request #3017:
URL: https://github.com/apache/hadoop/pull/3017


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598054)
Time Spent: 50m  (was: 40m)

> checkcompatibility.py errors out when specifying annotations
> 
>
> Key: HADOOP-17703
> URL: https://issues.apache.org/jira/browse/HADOOP-17703
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hadoop/blob/trunk/dev-support/bin/checkcompatibility.py#L178]
> {code:java}
>  with file(annotations_path, "w") as f: {code}
> is not valid Python code.
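> A minimal sketch of the Python 3 fix; the annotations list and path below are 
> hypothetical stand-ins for the script's actual values:
> {code:java}
> # file() was removed in Python 3; open() works on both Python 2 and 3
> annotations = ["InterfaceAudience.Public", "InterfaceStability.Stable"]
> annotations_path = "/tmp/annotations.txt"
> with open(annotations_path, "w") as f:
>     f.write("\n".join(annotations))
> {code}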



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17609) Make SM4 support optional for OpenSSL native code

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17609?focusedWorklogId=598047&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598047
 ]

ASF GitHub Bot logged work on HADOOP-17609:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:27
Start Date: 17/May/21 18:27
Worklog Time Spent: 10m 
  Work Description: iwasakims commented on pull request #3019:
URL: https://github.com/apache/hadoop/pull/3019#issuecomment-842245830






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598047)
Time Spent: 3.5h  (was: 3h 20m)

> Make SM4 support optional for OpenSSL native code
> -
>
> Key: HADOOP-17609
> URL: https://issues.apache.org/jira/browse/HADOOP-17609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.4.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098 
> because SM4 is not enabled in the openssl package. We should not force 
> users to install OpenSSL from source code even if they do not use the SM4 feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=598045&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598045
 ]

ASF GitHub Bot logged work on HADOOP-17511:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:27
Start Date: 17/May/21 18:27
Worklog Time Spent: 10m 
  Work Description: dongjoon-hyun commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-841888674


   Also, cc @sunchao 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598045)
Time Spent: 18.5h  (was: 18h 20m)

> Add an Audit plugin point for S3A auditing/context
> --
>
> Key: HADOOP-17511
> URL: https://issues.apache.org/jira/browse/HADOOP-17511
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 18.5h
>  Remaining Estimate: 0h
>
> Add a way for auditing tools to correlate S3 object calls with Hadoop FS API 
> calls.
> Initially just to log/forward to an auditing service.
> Later: let us attach them as parameters in S3 requests, such as opentrace 
> headers or (my initial idea: the HTTP referrer header, where it will get into 
> the log)
> Challenges
> * ensuring the audit span is created for every public entry point. That will 
> have to include those used in s3guard tools, some de facto public APIs
> * and not re-entered for active spans. S3A code must not call back into the 
> FS API points
> * Propagation across worker threads



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims commented on pull request #3019: HADOOP-17609. Make SM4 support optional for OpenSSL native code.

2021-05-17 Thread GitBox


iwasakims commented on pull request #3019:
URL: https://github.com/apache/hadoop/pull/3019#issuecomment-842245830






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17669) Port HADOOP-17079, HADOOP-17505 to branch-3.3

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17669?focusedWorklogId=598035&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598035
 ]

ASF GitHub Bot logged work on HADOOP-17669:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:26
Start Date: 17/May/21 18:26
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2959:
URL: https://github.com/apache/hadoop/pull/2959#issuecomment-842438940


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 15 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 51s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 41s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  17m 23s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 44s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   7m 48s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   7m 24s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |  12m 36s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  16m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 37s |  |  the patch passed  |
   | -1 :x: |  javac  |  16m 37s | 
[/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2959/2/artifact/out/results-compile-javac-root.txt)
 |  root generated 84 new + 1952 unchanged - 3 fixed = 2036 total (was 1955)  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 43s |  |  root: The patch generated 
0 new + 904 unchanged - 7 fixed = 904 total (was 911)  |
   | +1 :green_heart: |  mvnsite  |   7m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   7m 29s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |  13m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 54s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 188m 38s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2959/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   6m  2s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  15m 52s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  23m 17s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | -1 :x: |  unit  |  90m 31s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2959/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |   4m 29s |  |  hadoop-mapreduce-client-hs in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  3s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 518m 37s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   |   | 
hadoop.yarn.server.resourcemanager.placement.TestUserGroupMappingPlacementRule |
   |   | hadoop.yarn.server.resourcemanager.TestRMHATimelineCollectors |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2959/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2959 |
   | Optional 

[GitHub] [hadoop] aajisaka merged pull request #2608: YARN-10555. missing access check before getAppAttempts

2021-05-17 Thread GitBox


aajisaka merged pull request #2608:
URL: https://github.com/apache/hadoop/pull/2608


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tasanuma closed pull request #2854: HDFS-15945. DataNodes with zero capacity and zero blocks should be decommissioned immediately.

2021-05-17 Thread GitBox


tasanuma closed pull request #2854:
URL: https://github.com/apache/hadoop/pull/2854


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2959: HADOOP-17669. Port HADOOP-17079, HADOOP-17505 to branch-3.3

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #2959:
URL: https://github.com/apache/hadoop/pull/2959#issuecomment-842438940


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 46s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 15 new or modified test files.  |
    _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 51s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 41s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  17m 23s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |   2m 44s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |   7m 48s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  javadoc  |   7m 24s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  spotbugs  |  12m 36s |  |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  16m 22s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 28s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   5m  9s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  16m 37s |  |  the patch passed  |
   | -1 :x: |  javac  |  16m 37s | 
[/results-compile-javac-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2959/2/artifact/out/results-compile-javac-root.txt)
 |  root generated 84 new + 1952 unchanged - 3 fixed = 2036 total (was 1955)  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   2m 43s |  |  root: The patch generated 
0 new + 904 unchanged - 7 fixed = 904 total (was 911)  |
   | +1 :green_heart: |  mvnsite  |   7m 45s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   7m 29s |  |  the patch passed  |
   | +1 :green_heart: |  spotbugs  |  13m 53s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 26s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 54s |  |  hadoop-common in the patch 
passed.  |
   | -1 :x: |  unit  | 188m 38s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2959/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |   6m  2s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  15m 52s |  |  hadoop-hdfs-rbf in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  23m 17s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | -1 :x: |  unit  |  90m 31s | 
[/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2959/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  unit  |   4m 29s |  |  hadoop-mapreduce-client-hs in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  3s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 518m 37s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.datanode.TestBPOfferService |
   |   | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
   |   | hadoop.hdfs.TestStripedFileAppend |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   |   | 
hadoop.yarn.server.resourcemanager.placement.TestUserGroupMappingPlacementRule |
   |   | hadoop.yarn.server.resourcemanager.TestRMHATimelineCollectors |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2959/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2959 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 835870b24c46 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | branch-3.3 / c65022c258f200483e228f958b3f3ed4c64513af |
   | Default Java | Private 

[GitHub] [hadoop] haiyang1987 edited a comment on pull request #2747: HDFS-15877. BlockReconstructionWork should resetTargets() before BlockManager#validateReconstructionWork return false

2021-05-17 Thread GitBox


haiyang1987 edited a comment on pull request #2747:
URL: https://github.com/apache/hadoop/pull/2747#issuecomment-842136488






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on pull request #2854: HDFS-15945. DataNodes with zero capacity and zero blocks should be decommissioned immediately.

2021-05-17 Thread GitBox


jojochuang commented on pull request #2854:
URL: https://github.com/apache/hadoop/pull/2854#issuecomment-841983198


   Great! Glad to find out.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on pull request #2998: HDFS-16016. BPServiceActor to provide new thread to handle IBR

2021-05-17 Thread GitBox


virajjasani commented on pull request #2998:
URL: https://github.com/apache/hadoop/pull/2998#issuecomment-841848450


   Surprisingly I am not able to repro `TestDecommissioningStatus` and 
`TestDecommissioningStatusWithBackoffMonitor` locally.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17705) S3A to add Config to set AWS region

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17705?focusedWorklogId=597986&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597986
 ]

ASF GitHub Bot logged work on HADOOP-17705:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:20
Start Date: 17/May/21 18:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3020:
URL: https://github.com/apache/hadoop/pull/3020#issuecomment-842465032


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  spotbugs  |   1m 10s | 
[/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3020/1/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html)
 |  hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 58s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  76m 58s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Write to static field 
org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.conf from instance method 
org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(URI, 
S3ClientFactory$S3ClientCreationParameters)  At 
DefaultS3ClientFactory.java:from instance method 
org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(URI, 
S3ClientFactory$S3ClientCreationParameters)  At 
DefaultS3ClientFactory.java:[line 75] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3020/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3020 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 09fd38d5a509 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 98d3d6c12946df6094fa450bc05dd4c28ef2675c |
   | 

[jira] [Work logged] (HADOOP-17705) S3A to add Config to set AWS region

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17705?focusedWorklogId=598002&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-598002
 ]

ASF GitHub Bot logged work on HADOOP-17705:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:22
Start Date: 17/May/21 18:22
Worklog Time Spent: 10m 
  Work Description: mehakmeet opened a new pull request #3020:
URL: https://github.com/apache/hadoop/pull/3020


   Tested by: mvn clean verify -Dparallel-tests -Dscale
   Region: ap-south-1
   
   Test:
   ```
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 537, Failures: 0, Errors: 0, Skipped: 5
   ```
   ```
   [INFO] Results:
   [INFO]
   [WARNING] Tests run: 1430, Failures: 0, Errors: 0, Skipped: 462
   ```
   ```
   [INFO]
   [ERROR] Tests run: 151, Failures: 2, Errors: 1, Skipped: 28
   ```
   Timeout and intermittent failures. 
   
   CC: @steveloughran @mukund-thakur 
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 598002)
Time Spent: 40m  (was: 0.5h)

> S3A to add Config to set AWS region
> ---
>
> Key: HADOOP-17705
> URL: https://issues.apache.org/jira/browse/HADOOP-17705
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Mehakmeet Singh
>Assignee: Mehakmeet Singh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, the AWS region is constructed from the endpoint URL, by assuming 
> that the 2nd component after the "." delimiter is the region. This doesn't 
> work for private links, so the region falls back to the default of us-east-1, 
> causing authorization issues w.r.t. the private link.
> Proposed: an AWS region config which, when set, bypasses the construction of 
> the region from the endpoint URL. 
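> A minimal sketch of the proposed usage; the property name below is an 
> assumption based on this PR's discussion and may differ in the final patch:
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> 
> Configuration conf = new Configuration();
> // assumed key for the new setting; overrides region parsing from fs.s3a.endpoint
> conf.set("fs.s3a.endpoint.region", "ap-south-1");
> {code}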



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17699) Remove hardcoded SunX509 usage from SSLFactory

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17699?focusedWorklogId=597927&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597927
 ]

ASF GitHub Bot logged work on HADOOP-17699:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:12
Start Date: 17/May/21 18:12
Worklog Time Spent: 10m 
  Work Description: xiaoyuyao opened a new pull request #3016:
URL: https://github.com/apache/hadoop/pull/3016


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 597927)
Time Spent: 50m  (was: 40m)

> Remove hardcoded SunX509 usage from SSLFactory
> --
>
> Key: HADOOP-17699
> URL: https://issues.apache.org/jira/browse/HADOOP-17699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> In SSLFactory.SSLCERTIFICATE, used by FileBasedKeyStoresFactory and 
> ReloadingX509TrustManager, there is a hardcoded reference to "SunX509" which 
> is used to get a KeyManager/TrustManager. This KeyManager type might not be 
> available if using the other JSSE providers, e.g.,  in FIPS deployment.
>  
> {code:java}
> WARN org.apache.hadoop.hdfs.web.URLConnectionFactory: Cannot load customized 
> ssl related configuration. Fall
>  back to system-generic settings.
>  java.security.NoSuchAlgorithmException: SunX509 KeyManagerFactory not 
> available
>  at sun.security.jca.GetInstance.getInstance(GetInstance.java:159)
>  at javax.net.ssl.KeyManagerFactory.getInstance(KeyManagerFactory.java:137)
>  at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:186)
>  at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:187)
>  at 
> org.apache.hadoop.hdfs.web.SSLConnectionConfigurator.(SSLConnectionConfigurator.java:50)
>  at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:100)
>  at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:79)
> {code}
> This ticket is opened to use the default algorithm defined by the Java system 
> properties 
> ssl.KeyManagerFactory.algorithm and ssl.TrustManagerFactory.algorithm.
>  
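> A minimal sketch of the intended change, using only the standard JSSE API; 
> KeyManagerFactory.getDefaultAlgorithm() honors the ssl.KeyManagerFactory.algorithm 
> system property mentioned above:
> {code:java}
> import javax.net.ssl.KeyManagerFactory;
> 
> // provider-neutral default instead of a hardcoded "SunX509"
> String algorithm = KeyManagerFactory.getDefaultAlgorithm();
> KeyManagerFactory kmf = KeyManagerFactory.getInstance(algorithm);
> {code}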



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=597978&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597978
 ]

ASF GitHub Bot logged work on HADOOP-17511:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:19
Start Date: 17/May/21 18:19
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-842340255


   + gist showing some log output during a terasort test
   https://gist.github.com/steveloughran/8e0aadb51c63f1c3538deda19ee952ae
   
   some of the events (e.g 
183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a4235ef8 ) have job ID 
in the referrer header "ji=job_1620911577786_0006". This is only set during the 
FS operations the S3A committer performs during task and job, as they're the 
only ones we know are explicitly related to a job. If we were confident that 
whichever thread called `Committer.setupTask()` was the only thread making 
FileSystem API calls for that task then we could set it at the task level.
   
   The `org.apache.hadoop.fs.audit.CommonAuditContext` class provides global and 
thread local context maps to let apps attach such attributes; the new 
ManifestCommitter will be setting them so that once ABFS picks up the same 
auditing, the context info will come down.
   
   Modified versions of Hive, Spark etc. could use this API to set any of their 
   context info when a specific thread is scheduled to work for a given query; 
   trying to guess in the Hadoop committer isn't the right place.
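   A minimal sketch of how an engine-side thread might attach context, assuming 
   the `CommonAuditContext` API described above (a per-thread, map-like context); 
   the attribute name and value are hypothetical:
   
   ```java
   import org.apache.hadoop.fs.audit.CommonAuditContext;
   
   // hypothetical attribute; attaches to all FS API calls made on this thread
   CommonAuditContext.currentAuditContext().put("query.id", "hive-query-0042");
   ```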
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 597978)
Time Spent: 18h 20m  (was: 18h 10m)

> Add an Audit plugin point for S3A auditing/context
> --
>
> Key: HADOOP-17511
> URL: https://issues.apache.org/jira/browse/HADOOP-17511
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 18h 20m
>  Remaining Estimate: 0h
>
> Add a way for auditing tools to correlate S3 object calls with Hadoop FS API 
> calls.
> Initially just to log/forward to an auditing service.
> Later: let us attach them as parameters in S3 requests, such as opentrace 
> headers or (my initial idea: the HTTP referrer header, where it will get into 
> the log)
> Challenges
> * ensuring the audit span is created for every public entry point. That will 
> have to include those used in s3guard tools, some de facto public APIs
> * and not re-entered for active spans. S3A code must not call back into the 
> FS API points
> * Propagation across worker threads



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2496: YARN-10502. Add backlogs metric for CapacityScheduler

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #2496:
URL: https://github.com/apache/hadoop/pull/2496#issuecomment-842339473


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 59s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  37m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 51s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 56s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 52s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 50s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 40s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2496/1/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager:
 The patch generated 1 new + 99 unchanged - 1 fixed = 100 total (was 100)  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 43s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 100m 13s |  |  
hadoop-yarn-server-resourcemanager in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 186m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2496/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2496 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 11abe4a94c7a 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 33ff69434295c3a383e406c494f1dab9a51f0efb |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2496/1/testReport/ |
   | Max. process+thread count | 822 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 |
   | Console output | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3020: HADOOP-17705. S3A to add Config to set AWS region

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #3020:
URL: https://github.com/apache/hadoop/pull/3020#issuecomment-842465032


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 35s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | -1 :x: |  spotbugs  |   1m 10s | 
[/new-spotbugs-hadoop-tools_hadoop-aws.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3020/1/artifact/out/new-spotbugs-hadoop-tools_hadoop-aws.html)
 |  hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  14m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 58s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  76m 58s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-tools/hadoop-aws |
   |  |  Write to static field 
org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.conf from instance method 
org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(URI, 
S3ClientFactory$S3ClientCreationParameters)  At 
DefaultS3ClientFactory.java:from instance method 
org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(URI, 
S3ClientFactory$S3ClientCreationParameters)  At 
DefaultS3ClientFactory.java:[line 75] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3020/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3020 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 09fd38d5a509 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 98d3d6c12946df6094fa450bc05dd4c28ef2675c |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3020/1/testReport/ |
   | Max. process+thread count | 620 (vs. ulimit of 5500) |
   | 

[GitHub] [hadoop] steveloughran commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

2021-05-17 Thread GitBox


steveloughran commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-842340255


   + gist showing some log output during a terasort test
   https://gist.github.com/steveloughran/8e0aadb51c63f1c3538deda19ee952ae
   
   some of the events (e.g 
183c9826b45486e485693808f38e2c4071004bf5dfd4c3ab210f0a21a4235ef8 ) have job ID 
in the referrer header "ji=job_1620911577786_0006". This is only set during the 
FS operations the S3A committer performs during task and job, as they're the 
only ones we know are explicitly related to a job. If we were confident that 
whichever thread called `Committer.setupTask()` was the only thread making 
FileSystem API calls for that task then we could set it at the task level.
   
   The `org.apache.hadoop.fs.audit.CommonAuditContext` class provides global and 
thread local context maps to let apps attach such attributes; the new 
ManifestCommitter will be setting them so that once ABFS picks up the same 
auditing, the context info will come down.
   
   Modified versions of Hive, Spark etc. could use this API to set any of their 
   context info when a specific thread is scheduled to work for a given query; 
   trying to guess in the Hadoop committer isn't the right place.
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17609) Make SM4 support optional for OpenSSL native code

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17609?focusedWorklogId=597956=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597956
 ]

ASF GitHub Bot logged work on HADOOP-17609:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:16
Start Date: 17/May/21 18:16
Worklog Time Spent: 10m 
  Work Description: iwasakims closed pull request #2847:
URL: https://github.com/apache/hadoop/pull/2847


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 597956)
Time Spent: 3h 20m  (was: 3h 10m)

> Make SM4 support optional for OpenSSL native code
> -
>
> Key: HADOOP-17609
> URL: https://issues.apache.org/jira/browse/HADOOP-17609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.4.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098 
> because SM4 is not enabled in the openssl package. We should not force 
> users to install OpenSSL from source code even if they do not use the SM4 feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims closed pull request #2847: HADOOP-17609. Make SM4 support optional for OpenSSL native code.

2021-05-17 Thread GitBox


iwasakims closed pull request #2847:
URL: https://github.com/apache/hadoop/pull/2847


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ferhui commented on pull request #2747: HDFS-15877. BlockReconstructionWork should resetTargets() before BlockManager#validateReconstructionWork return false

2021-05-17 Thread GitBox


ferhui commented on pull request #2747:
URL: https://github.com/apache/hadoop/pull/2747#issuecomment-841928897






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2966: HDFS-16004.startLogSegment and journal in BackupNode lack Permission …

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #2966:
URL: https://github.com/apache/hadoop/pull/2966#issuecomment-842256549


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 13s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 53s |  |  trunk passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  4s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 54s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2966/4/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 37 unchanged - 
0 fixed = 38 total (was 37)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m  6s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 230m 22s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2966/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 317m 38s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.qjournal.TestSecureNNWithQJM |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2966/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2966 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 210fdd43a96b 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 219aabeb13577db1dcd5d8ea796f9fa6bcbc0165 |
   | Default Java | Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2966/4/testReport/ |
   | Max. process+thread count | 2985 (vs. ulimit of 5500) |
   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3005: HDFS-13522. RBF: Support observer node from Router-Based Federation

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #3005:
URL: https://github.com/apache/hadoop/pull/3005#issuecomment-842320011


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 12 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  16m 25s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  20m 44s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |  17m 56s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   3m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   5m  6s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 56s |  |  trunk passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   5m 17s |  |  trunk passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   9m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 57s |  |  branch has no errors when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m  7s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |  20m  7s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  0s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  javac  |  18m  0s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   3m 50s | [/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/6/artifact/out/results-checkstyle-root.txt) |  root: The patch generated 10 new + 905 unchanged - 1 fixed = 915 total (was 906)  |
   | +1 :green_heart: |  mvnsite  |   5m  7s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  javadoc  |   3m 49s |  |  the patch passed with JDK Ubuntu-11.0.10+9-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   5m 16s |  |  the patch passed with JDK Private Build-1.8.0_282-8u282-b08-0ubuntu1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |  10m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m  4s |  |  patch has no errors when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 11s |  |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   2m 36s |  |  hadoop-hdfs-client in the patch passed.  |
   | -1 :x: |  unit  | 380m 45s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/6/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  25m 38s |  |  hadoop-hdfs-rbf in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  7s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 635m 37s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestViewDistributedFileSystemWithMountLinks |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3005/6/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3005 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell xml |
  

[jira] [Work logged] (HADOOP-17511) Add an Audit plugin point for S3A auditing/context

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17511?focusedWorklogId=597944&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597944
 ]

ASF GitHub Bot logged work on HADOOP-17511:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:14
Start Date: 17/May/21 18:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-842351843


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 19s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/24/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 597944)
Time Spent: 18h 10m  (was: 18h)

> Add an Audit plugin point for S3A auditing/context
> --
>
> Key: HADOOP-17511
> URL: https://issues.apache.org/jira/browse/HADOOP-17511
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 18h 10m
>  Remaining Estimate: 0h
>
> Add a way for auditing tools to correlate S3 object calls with Hadoop FS API 
> calls.
> Initially just to log/forward to an auditing service.
> Later: let us attach them as parameters in S3 requests, such as OpenTracing 
> headers or (my initial idea: the HTTP referrer header, where it will get into 
> the log)
> Challenges
> * ensuring the audit span is created for every public entry point. That will 
> have to include those used in S3Guard tools and some de facto public APIs
> * and not re-entered for active spans. S3A code must not call back into the 
> FS API entry points
> * Propagation across worker threads
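> As a sketch of the plugin point (the names here are illustrative 
> assumptions, not the final API): a thread-local span that public FS entry 
> points open and the S3 request layer reads, guarded against re-entry:
> {code:java}
> // Hypothetical sketch only; not the actual HADOOP-17511 interface.
> public final class AuditSpanSketch implements AutoCloseable {
>   private static final ThreadLocal<AuditSpanSketch> ACTIVE = new ThreadLocal<>();
>   private final String operation;
> 
>   private AuditSpanSketch(String operation) {
>     this.operation = operation;
>   }
> 
>   /** Open a span for a public FS API call; re-entrant calls keep the outer span. */
>   public static AuditSpanSketch enter(String operation) {
>     AuditSpanSketch current = ACTIVE.get();
>     if (current != null) {
>       return current;
>     }
>     AuditSpanSketch span = new AuditSpanSketch(operation);
>     ACTIVE.set(span);
>     return span;
>   }
> 
>   /** Read by the S3 request layer, e.g. to build a referrer header. */
>   public static String currentOperation() {
>     AuditSpanSketch span = ACTIVE.get();
>     return span == null ? "unaudited" : span.operation;
>   }
> 
>   @Override
>   public void close() {
>     if (ACTIVE.get() == this) { // only the outermost caller clears the span
>       ACTIVE.remove();
>     }
>   }
> }
> {code}
> Note a plain ThreadLocal does not solve the worker-thread challenge above; a 
> real implementation still has to hand the span across thread boundaries 
> explicitly.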



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17115) Replace Guava Sets usage by Hadoop's own Sets

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17115?focusedWorklogId=597868&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597868
 ]

ASF GitHub Bot logged work on HADOOP-17115:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:05
Start Date: 17/May/21 18:05
Worklog Time Spent: 10m 
  Work Description: virajjasani commented on a change in pull request #2985:
URL: https://github.com/apache/hadoop/pull/2985#discussion_r633057797



##########
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Sets.java
##########
@@ -0,0 +1,329 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.util;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.EnumSet;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedHashSet;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Static utility methods pertaining to {@link Set} instances.
+ * This class is Hadoop's internal use alternative to Guava's Sets
+ * utility class.
+ */
+@InterfaceAudience.Private
+public final class Sets {
+
+  private Sets() {
+// empty
+  }
+
+  /**
+   * Creates a mutable, initially empty {@code HashSet} instance.
+   *
+   * Note: if mutability is not required, use ImmutableSet#of()
+   * instead. If {@code E} is an {@link Enum} type, use {@link EnumSet#noneOf}
+   * instead. Otherwise, strongly consider using a {@code LinkedHashSet}
+   * instead, at the cost of increased memory footprint, to get
+   * deterministic iteration behavior.
+   */
+  public static <E> HashSet<E> newHashSet() {
+    return new HashSet<E>();
+  }
+
+  /**
+   * Creates a mutable, empty {@code TreeSet} instance sorted by the
+   * natural sort ordering of its elements.
+   *
+   * Note: if mutability is not required, use ImmutableSortedSet#of()
+   * instead.
+   *
+   * @return a new, empty {@code TreeSet}
+   */
+  public static <E extends Comparable> TreeSet<E> newTreeSet() {
+    return new TreeSet<E>();
+  }

Review comment:
   I think yes, it makes sense, but other than hadoop-common and hadoop-tools, 
many modules depend on these methods quite heavily. Perhaps making them use 
"new HashSet<>()" / "new TreeSet<>()" directly can be taken up as a follow-up 
task? As part of removing the Guava dependency, if we just update the imports 
from Guava to the internal Sets class, that would be quite clean. WDYT?
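   
   For illustration, the mechanical part of that swap would look roughly like 
this (a sketch of the migration pattern, not a line from the actual patch):
   
   ```java
   // Before (Guava, possibly via the shaded thirdparty artifact):
   // import org.apache.hadoop.thirdparty.com.google.common.collect.Sets;
   // After (Hadoop's own utility added by HADOOP-17115):
   import org.apache.hadoop.util.Sets;
   
   import java.util.Set;
   
   public class SetsMigrationExample {
     public static void main(String[] args) {
       // Call sites stay untouched; only the import changes.
       Set<String> keys = Sets.newHashSet("k1", "k2", "k3");
       System.out.println(keys.contains("k2")); // true
     }
   }
   ```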




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 597868)
Time Spent: 7h  (was: 6h 50m)

> Replace Guava Sets usage by Hadoop's own Sets
> -
>
> Key: HADOOP-17115
> URL: https://issues.apache.org/jira/browse/HADOOP-17115
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Unjustified usage of Guava API to initialize a {{HashSet}}. This should be 
> replaced by Java APIs.
> {code:java}
> Targets
> Occurrences of 'Sets.newHashSet' in project
> Found Occurrences  (223 usages found)
> org.apache.hadoop.crypto.key  (2 usages found)
> TestValueQueue.java  (2 usages found)
> testWarmUp()  (2 usages found)
> 106 Assert.assertEquals(Sets.newHashSet("k1", "k2", "k3"),
> 107 Sets.newHashSet(fillInfos[0].key,
> org.apache.hadoop.crypto.key.kms  (6 usages found)
> TestLoadBalancingKMSClientProvider.java  (6 usages found)
> testCreation()  (6 usages found)
> 86 
> 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2807: HADOOP-17511. Add audit/telemetry logging to S3A connector

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #2807:
URL: https://github.com/apache/hadoop/pull/2807#issuecomment-842351843


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m  0s |  |  Docker mode activated.  |
   | -1 :x: |  patch  |   0m 19s |  |  https://github.com/apache/hadoop/pull/2807 does not apply to trunk. Rebase required? Wrong Branch? See https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute for help.  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | GITHUB PR | https://github.com/apache/hadoop/pull/2807 |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2807/24/console |
   | versions | git=2.17.1 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on pull request #3017: HADOOP-17703. checkcompatibility.py errors out when specifying annotations.

2021-05-17 Thread GitBox


jojochuang commented on pull request #3017:
URL: https://github.com/apache/hadoop/pull/3017#issuecomment-842178053


   @aajisaka could you help with this trivial patch?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17609) Make SM4 support optional for OpenSSL native code

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17609?focusedWorklogId=597924&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597924
 ]

ASF GitHub Bot logged work on HADOOP-17609:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:12
Start Date: 17/May/21 18:12
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3019:
URL: https://github.com/apache/hadoop/pull/3019#issuecomment-842356693






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 597924)
Time Spent: 3h 10m  (was: 3h)

> Make SM4 support optional for OpenSSL native code
> -
>
> Key: HADOOP-17609
> URL: https://issues.apache.org/jira/browse/HADOOP-17609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.4.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098 
> because SM4 is not enabled in the openssl package. We should not force users 
> to install OpenSSL from source even if they do not use the SM4 feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3019: HADOOP-17609. Make SM4 support optional for OpenSSL native code.

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #3019:
URL: https://github.com/apache/hadoop/pull/3019#issuecomment-842356693






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17699) Remove hardcoded SunX509 usage from SSLFactory

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17699?focusedWorklogId=597913&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597913
 ]

ASF GitHub Bot logged work on HADOOP-17699:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:11
Start Date: 17/May/21 18:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3016:
URL: https://github.com/apache/hadoop/pull/3016#issuecomment-841871270






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 597913)
Time Spent: 40m  (was: 0.5h)

> Remove hardcoded SunX509 usage from SSLFactory
> --
>
> Key: HADOOP-17699
> URL: https://issues.apache.org/jira/browse/HADOOP-17699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> In SSLFactory.SSLCERTIFICATE, used by FileBasedKeyStoresFactory and 
> ReloadingX509TrustManager, there is a hardcoded reference to "SunX509" which 
> is used to get a KeyManager/TrustManager. This KeyManager type might not be 
> available when using other JSSE providers, e.g. in FIPS deployments.
>  
> {code:java}
> WARN org.apache.hadoop.hdfs.web.URLConnectionFactory: Cannot load customized 
> ssl related configuration. Fall
>  back to system-generic settings.
>  java.security.NoSuchAlgorithmException: SunX509 KeyManagerFactory not 
> available
>  at sun.security.jca.GetInstance.getInstance(GetInstance.java:159)
>  at javax.net.ssl.KeyManagerFactory.getInstance(KeyManagerFactory.java:137)
>  at 
> org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:186)
>  at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:187)
>  at 
> org.apache.hadoop.hdfs.web.SSLConnectionConfigurator.(SSLConnectionConfigurator.java:50)
>  at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.getSSLConnectionConfiguration(URLConnectionFactory.java:100)
>  at 
> org.apache.hadoop.hdfs.web.URLConnectionFactory.newDefaultURLConnectionFactory(URLConnectionFactory.java:79)
> {code}
> This ticket is opened to use the default algorithms defined by the Java 
> security properties 
> ssl.KeyManagerFactory.algorithm and ssl.TrustManagerFactory.algorithm.
>  
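> A minimal sketch of that direction, using the standard JSSE lookup (this is 
> the generic pattern, not the exact patch):
> {code:java}
> import javax.net.ssl.KeyManagerFactory;
> import javax.net.ssl.TrustManagerFactory;
> 
> public class DefaultAlgorithmExample {
>   public static void main(String[] args) throws Exception {
>     // getDefaultAlgorithm() reads the ssl.KeyManagerFactory.algorithm /
>     // ssl.TrustManagerFactory.algorithm security properties, so a FIPS
>     // provider can supply its own value instead of a hardcoded "SunX509".
>     KeyManagerFactory kmf =
>         KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
>     TrustManagerFactory tmf =
>         TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
>     System.out.println("KeyManagerFactory: " + kmf.getAlgorithm());
>     System.out.println("TrustManagerFactory: " + tmf.getAlgorithm());
>   }
> }
> {code}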



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17700) ExitUtil#halt info log with incorrect placeholders

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17700?focusedWorklogId=597932&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597932
 ]

ASF GitHub Bot logged work on HADOOP-17700:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:13
Start Date: 17/May/21 18:13
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3015:
URL: https://github.com/apache/hadoop/pull/3015#issuecomment-842281065


   +1
   One aspect of the exit code design was that returning != 0 doesn't mean 
printing a stack trace at info. So the exit() calls must only log the message 
at info, with the full stack at debug for the curious.
   
   halt() is a different story. Looking at the Hadoop code, there's nowhere 
halt() is called where it isn't some kind of emergency, often where exit() 
itself raised an exception. Which means the stack is important.
   
   Given it's logging the stack at info, what about cutting the DEBUG logging?
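   
   To spell out the logging split being suggested (an illustrative sketch, not 
the merged change):
   
   ```java
   import org.slf4j.Logger;
   import org.slf4j.LoggerFactory;
   
   public class ExitLoggingSketch {
     private static final Logger LOG =
         LoggerFactory.getLogger(ExitLoggingSketch.class);
   
     /** exit(): a non-zero status is routine, so keep the stack out of INFO. */
     static void logExit(int status, Exception e) {
       LOG.info("Exiting with status {}: {}", status, e.toString());
       LOG.debug("Exiting with status {}", status, e); // full stack for the curious
     }
   
     /** halt(): always an emergency, so the stack goes straight to INFO. */
     static void logHalt(int status, Exception e) {
       LOG.info("Halt with status {}: {}", status, e.toString(), e);
     }
   }
   ```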


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 597932)
Time Spent: 1.5h  (was: 1h 20m)

> ExitUtil#halt info log with incorrect placeholders
> --
>
> Key: HADOOP-17700
> URL: https://issues.apache.org/jira/browse/HADOOP-17700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> ExitUtil#halt with non-zero exit status code provides info log with incorrect 
> no of placeholders. We should log HaltException with the log.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17703) checkcompatibility.py errors out when specifying annotations

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17703?focusedWorklogId=597931&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597931
 ]

ASF GitHub Bot logged work on HADOOP-17703:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:13
Start Date: 17/May/21 18:13
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #3017:
URL: https://github.com/apache/hadoop/pull/3017#issuecomment-842178053


   @aajisaka could you help with this trivial patch?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 597931)
Time Spent: 40m  (was: 0.5h)

> checkcompatibility.py errors out when specifying annotations
> 
>
> Key: HADOOP-17703
> URL: https://issues.apache.org/jira/browse/HADOOP-17703
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.3.1, 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> [https://github.com/apache/hadoop/blob/trunk/dev-support/bin/checkcompatibility.py#L178]
> {code:java}
>  with file(annotations_path, "w") as f: {code}
> is not valid Python 3 code; the file() builtin was removed in Python 3, so 
> open() should be used instead.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3015: HADOOP-17700. ExitUtil#halt info log should log HaltException

2021-05-17 Thread GitBox


steveloughran commented on pull request #3015:
URL: https://github.com/apache/hadoop/pull/3015#issuecomment-842281065


   +1
   One aspect of the exit code design was that returning != 0 doesn't mean 
printing a stack trace at info. So the exit() calls must only log the message 
at info, with the full stack at debug for the curious.
   
   halt() is a different story. Looking at the Hadoop code, there's nowhere 
halt() is called where it isn't some kind of emergency, often where exit() 
itself raised an exception. Which means the stack is important.
   
   Given it's logging the stack at info, what about cutting the DEBUG logging?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ferhui edited a comment on pull request #3009: HDFS-16024: RBF: Rename data to the Trash should be based on src loca…

2021-05-17 Thread GitBox


ferhui edited a comment on pull request #3009:
URL: https://github.com/apache/hadoop/pull/3009#issuecomment-842146030


   @zhuxiangyi Thanks for the comments!
   > With this solution, as long as there is a mount point for Trash and the 
Trash directory exists, Router can move data to Trash.
   
   I want to know more details.
   In your example, 
   
   > /user/userA Ns1 -> /user/userA
   > /home/userA Ns2 -> /home/userA
   
   If we want to rm /home/userA/somefile, TrashPolicyDefault will try to mkdir 
/user/userA/.Trash/Current/home/userA (called baseTrashPath); the baseTrashPath 
will be created on Ns1. If the baseTrashPath does not exist on Ns2, the 
following rename will fail, is that right? When does the baseTrashPath get 
created on Ns2?
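   
   For readers following along, this is roughly how TrashPolicyDefault derives 
that path (a paraphrased sketch of the path arithmetic, not an exact excerpt):
   
   ```java
   import org.apache.hadoop.fs.Path;
   
   public class TrashPathSketch {
     public static void main(String[] args) {
       // rm /home/userA/somefile, with the trash root under the user's home:
       Path trashRoot = new Path("/user/userA/.Trash");
       Path toDelete = new Path("/home/userA/somefile");
   
       // "Current" snapshot dir plus the full parent path of the deleted file:
       Path baseTrashPath = Path.mergePaths(
           new Path(trashRoot, "Current"),
           toDelete.getParent()); // -> /user/userA/.Trash/Current/home/userA
       Path trashPath = new Path(baseTrashPath, toDelete.getName());
   
       System.out.println(baseTrashPath); // mkdir target (resolves via the /user mount, Ns1)
       System.out.println(trashPath);     // rename target for the deleted file
     }
   }
   ```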


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17609) Make SM4 support optional for OpenSSL native code

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17609?focusedWorklogId=597917&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597917
 ]

ASF GitHub Bot logged work on HADOOP-17609:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:11
Start Date: 17/May/21 18:11
Worklog Time Spent: 10m 
  Work Description: iwasakims opened a new pull request #3019:
URL: https://github.com/apache/hadoop/pull/3019


   https://issues.apache.org/jira/browse/HADOOP-17609
   
   This replaces #2847.
   
   After HDFS-15098, OpensslCipher does not work with an OpenSSL >= 1.1.1 build 
that lacks SM4 support. RHEL/CentOS 8 ships such an openssl package. 
OpensslCipher in such an environment should still be usable if users do not 
need the SM4 feature.
   
   ```
   $ rpm -q openssl-devel
   openssl-devel-1.1.1g-12.el8_3.x86_64
   
   $ bin/hadoop checknative 2>/dev/null
   Native library checking:
   hadoop:  true 
/home/centos/dist/hadoop-3.4.0-SNAPSHOT/lib/native/libhadoop.so.1.0.0
   zlib:true /lib64/libz.so.1
   zstd  :  true /lib64/libzstd.so.1
   bzip2:   true /lib64/libbz2.so.1
   openssl: false Cannot find AES-CTR/SM4-CTR support, is your version of 
Openssl new enough?
   ISA-L:   true /lib64/libisal.so.2
   PMDK:false The native code was built without PMDK support.
   ```
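   
   Programmatically, the openssl line of that check can be approximated like 
this (a sketch; it assumes the Hadoop jars and native library are on the 
classpath/library path):
   
   ```java
   import org.apache.hadoop.crypto.OpensslCipher;
   import org.apache.hadoop.util.NativeCodeLoader;
   
   public class ChecknativeSketch {
     public static void main(String[] args) {
       // Mirrors `hadoop checknative`: a non-null failure reason means
       // libhadoop could not bind a usable OpenSSL cipher.
       System.out.println("native hadoop: " + NativeCodeLoader.isNativeCodeLoaded());
       String reason = OpensslCipher.getLoadingFailureReason();
       System.out.println("openssl: " + (reason == null ? "true" : "false " + reason));
     }
   }
   ```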
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 597917)
Time Spent: 3h  (was: 2h 50m)

> Make SM4 support optional for OpenSSL native code
> -
>
> Key: HADOOP-17609
> URL: https://issues.apache.org/jira/browse/HADOOP-17609
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Affects Versions: 3.4.0
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> openssl-devel-1.1.1g provided by CentOS 8 does not work after HDFS-15098 
> because SM4 is not enabled in the openssl package. We should not force users 
> to install OpenSSL from source even if they do not use the SM4 feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3016: HADOOP-17699. Remove hardcoded SunX509 usage from SSLFactory.

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #3016:
URL: https://github.com/apache/hadoop/pull/3016#issuecomment-841871270






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani edited a comment on pull request #2998: HDFS-16016. BPServiceActor to provide new thread to handle IBR

2021-05-17 Thread GitBox


virajjasani edited a comment on pull request #2998:
URL: https://github.com/apache/hadoop/pull/2998#issuecomment-841848450


   Surprisingly I am not able to repro `TestDecommissioningStatus` and 
`TestDecommissioningStatusWithBackoffMonitor` test failures locally.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tasanuma commented on pull request #2854: HDFS-15945. DataNodes with zero capacity and zero blocks should be decommissioned immediately.

2021-05-17 Thread GitBox


tasanuma commented on pull request #2854:
URL: https://github.com/apache/hadoop/pull/2854#issuecomment-841912534


   As I said in the last comment, this is not a problem anymore after 
HDFS-15963. I'm closing this PR.
   Thanks for your kind reviews, @virajjasani and @jojochuang.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17700) ExitUtil#halt info log with incorrect placeholders

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17700?focusedWorklogId=597872&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597872
 ]

ASF GitHub Bot logged work on HADOOP-17700:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:06
Start Date: 17/May/21 18:06
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request #3015:
URL: https://github.com/apache/hadoop/pull/3015


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 597872)
Time Spent: 1h 20m  (was: 1h 10m)

> ExitUtil#halt info log with incorrect placeholders
> --
>
> Key: HADOOP-17700
> URL: https://issues.apache.org/jira/browse/HADOOP-17700
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> ExitUtil#halt with non-zero exit status code provides info log with incorrect 
> no of placeholders. We should log HaltException with the log.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-13887) Encrypt S3A data client-side with AWS SDK (S3-CSE)

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13887?focusedWorklogId=597879&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597879
 ]

ASF GitHub Bot logged work on HADOOP-13887:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:07
Start Date: 17/May/21 18:07
Worklog Time Spent: 10m 
  Work Description: mehakmeet commented on pull request #2706:
URL: https://github.com/apache/hadoop/pull/2706#issuecomment-841783066






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 597879)
Time Spent: 3h  (was: 2h 50m)

> Encrypt S3A data client-side with AWS SDK (S3-CSE)
> --
>
> Key: HADOOP-13887
> URL: https://issues.apache.org/jira/browse/HADOOP-13887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Jeeyoung Kim
>Assignee: Igor Mazur
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HADOOP-13887-002.patch, HADOOP-13887-007.patch, 
> HADOOP-13887-branch-2-003.patch, HADOOP-13897-branch-2-004.patch, 
> HADOOP-13897-branch-2-005.patch, HADOOP-13897-branch-2-006.patch, 
> HADOOP-13897-branch-2-008.patch, HADOOP-13897-branch-2-009.patch, 
> HADOOP-13897-branch-2-010.patch, HADOOP-13897-branch-2-012.patch, 
> HADOOP-13897-branch-2-014.patch, HADOOP-13897-trunk-011.patch, 
> HADOOP-13897-trunk-013.patch, HADOOP-14171-001.patch, S3-CSE Proposal.pdf
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Expose the client-side encryption option documented in Amazon S3 
> documentation  - 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
> Currently this is not exposed in Hadoop, but it is exposed as an option in the 
> AWS Java SDK, which Hadoop currently includes. It should be trivial to propagate 
> this as a parameter passed to the S3 client used in S3AFileSystem.java.
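> For reference, the SDK-side option being referred to looks roughly like this 
> (SDK v1; an illustrative sketch with a hypothetical in-memory key pair, not 
> the S3A wiring):
> {code:java}
> import com.amazonaws.services.s3.AmazonS3;
> import com.amazonaws.services.s3.AmazonS3EncryptionClientBuilder;
> import com.amazonaws.services.s3.model.EncryptionMaterials;
> import com.amazonaws.services.s3.model.StaticEncryptionMaterialsProvider;
> 
> import java.security.KeyPair;
> import java.security.KeyPairGenerator;
> 
> public class CseClientSketch {
>   public static void main(String[] args) throws Exception {
>     // Hypothetical client-side master key; real deployments would use KMS
>     // or externally managed encryption materials instead.
>     KeyPair keyPair = KeyPairGenerator.getInstance("RSA").generateKeyPair();
> 
>     AmazonS3 s3 = AmazonS3EncryptionClientBuilder.standard()
>         .withEncryptionMaterials(
>             new StaticEncryptionMaterialsProvider(new EncryptionMaterials(keyPair)))
>         .build();
>     System.out.println("CSE-enabled client: " + s3);
>   }
> }
> {code}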



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] haiyang1987 commented on pull request #2747: HDFS-15877. BlockReconstructionWork should resetTargets() before BlockManager#validateReconstructionWork return false

2021-05-17 Thread GitBox


haiyang1987 commented on pull request #2747:
URL: https://github.com/apache/hadoop/pull/2747#issuecomment-841792013






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mehakmeet commented on pull request #2706: HADOOP-13887. Support S3 client side encryption (S3-CSE) using AWS-SDK

2021-05-17 Thread GitBox


mehakmeet commented on pull request #2706:
URL: https://github.com/apache/hadoop/pull/2706#issuecomment-841783066






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-13887) Encrypt S3A data client-side with AWS SDK (S3-CSE)

2021-05-17 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13887?focusedWorklogId=597866&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-597866
 ]

ASF GitHub Bot logged work on HADOOP-13887:
---

Author: ASF GitHub Bot
Created on: 17/May/21 18:05
Start Date: 17/May/21 18:05
Worklog Time Spent: 10m 
  Work Description: bogthe commented on pull request #2706:
URL: https://github.com/apache/hadoop/pull/2706#issuecomment-841812391


   > Had merge conflicts so had to force push.
   > Tests:
   > 
   > ```
   > [ERROR] Tests run: 1430, Failures: 1, Errors: 34, Skipped: 538
   > ```
   > 
   > Scale:
   > 
   > ```
   > [ERROR] Tests run: 151, Failures: 3, Errors: 21, Skipped: 29
   > ```
   > 
   > Most errors are MultiPart upload related:
   > 
   > ```
   > com.amazonaws.SdkClientException: Invalid part size: part sizes for 
encrypted multipart uploads must be multiples of the cipher block size (16) 
with the exception of the last part.
   > ```
   > 
   > Simply adding 16 (the padding length) to the multipart upload block size won't work. The part sizes need to be a multiple of 16, so CSE has that restriction. Also, one more thing to note here is that it assumes the last part to be an exception, which makes me believe that multipart upload in CSE has to be sequential (or can we upload the starting parts in parallel and then upload the last part?). So, potentially, another constraint while uploading could have performance impacts here, apart from the HEAD calls being required while downloading/listing.
   > @steveloughran
   
   Hi @mehakmeet, regarding multipart uploads: the last part is always an 
exception with regular multipart uploads too! You can do parallel uploads and 
even upload the last part first and it would still work (for regular 
multipart). My assumption is that for multipart uploads with CSE enabled the 
same functionality holds (except for the cipher block size restriction, but 
the minimum part size for regular multipart is 5MB = 5 * 1024 * 1024, which is 
still a multiple of 16 :D ).
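   
   As a sanity check on that arithmetic (an illustrative snippet, not code 
from the PR):
   
   ```java
   public class CsePartSizeCheck {
     private static final int CIPHER_BLOCK_SIZE = 16;            // AES block size
     private static final long MIN_PART_SIZE = 5L * 1024 * 1024; // S3 minimum, 5 MB
   
     /** Every part except the last must be a multiple of the cipher block size. */
     static boolean validCsePartSize(long partSize, boolean lastPart) {
       return lastPart || partSize % CIPHER_BLOCK_SIZE == 0;
     }
   
     public static void main(String[] args) {
       System.out.println(MIN_PART_SIZE % CIPHER_BLOCK_SIZE);          // 0, so 5 MB parts are fine
       System.out.println(validCsePartSize(MIN_PART_SIZE, false));     // true
       System.out.println(validCsePartSize(MIN_PART_SIZE + 1, false)); // false
     }
   }
   ```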


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 597866)
Time Spent: 2h 50m  (was: 2h 40m)

> Encrypt S3A data client-side with AWS SDK (S3-CSE)
> --
>
> Key: HADOOP-13887
> URL: https://issues.apache.org/jira/browse/HADOOP-13887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Jeeyoung Kim
>Assignee: Igor Mazur
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HADOOP-13887-002.patch, HADOOP-13887-007.patch, 
> HADOOP-13887-branch-2-003.patch, HADOOP-13897-branch-2-004.patch, 
> HADOOP-13897-branch-2-005.patch, HADOOP-13897-branch-2-006.patch, 
> HADOOP-13897-branch-2-008.patch, HADOOP-13897-branch-2-009.patch, 
> HADOOP-13897-branch-2-010.patch, HADOOP-13897-branch-2-012.patch, 
> HADOOP-13897-branch-2-014.patch, HADOOP-13897-trunk-011.patch, 
> HADOOP-13897-trunk-013.patch, HADOOP-14171-001.patch, S3-CSE Proposal.pdf
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Expose the client-side encryption option documented in Amazon S3 
> documentation  - 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
> Currently this is not exposed in Hadoop, but it is exposed as an option in the 
> AWS Java SDK, which Hadoop currently includes. It should be trivial to propagate 
> this as a parameter passed to the S3 client used in S3AFileSystem.java.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on a change in pull request #2985: HADOOP-17115. Replace Guava Sets usage by Hadoop's own Sets in hadoop-common and hadoop-tools

2021-05-17 Thread GitBox


virajjasani commented on a change in pull request #2985:
URL: https://github.com/apache/hadoop/pull/2985#discussion_r633057797



##########
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Sets.java
##########
@@ -0,0 +1,329 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.util;
+
+import org.apache.hadoop.classification.InterfaceAudience;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.EnumSet;
+import java.util.HashSet;
+import java.util.Iterator;
+import java.util.LinkedHashSet;
+import java.util.Set;
+import java.util.SortedSet;
+import java.util.TreeSet;
+import java.util.concurrent.ConcurrentHashMap;
+
+/**
+ * Static utility methods pertaining to {@link Set} instances.
+ * This class is Hadoop's internal use alternative to Guava's Sets
+ * utility class.
+ */
+@InterfaceAudience.Private
+public final class Sets {
+
+  private Sets() {
+// empty
+  }
+
+  /**
+   * Creates a mutable, initially empty {@code HashSet} instance.
+   *
+   * Note: if mutability is not required, use ImmutableSet#of()
+   * instead. If {@code E} is an {@link Enum} type, use {@link EnumSet#noneOf}
+   * instead. Otherwise, strongly consider using a {@code LinkedHashSet}
+   * instead, at the cost of increased memory footprint, to get
+   * deterministic iteration behavior.
+   */
+  public static <E> HashSet<E> newHashSet() {
+    return new HashSet<E>();
+  }
+
+  /**
+   * Creates a mutable, empty {@code TreeSet} instance sorted by the
+   * natural sort ordering of its elements.
+   *
+   * Note: if mutability is not required, use ImmutableSortedSet#of()
+   * instead.
+   *
+   * @return a new, empty {@code TreeSet}
+   */
+  public static <E extends Comparable> TreeSet<E> newTreeSet() {
+    return new TreeSet<E>();
+  }

Review comment:
   I think yes, it makes sense, but other than hadoop-common and hadoop-tools, 
many modules depend on these methods quite heavily. Perhaps making them use 
"new HashSet<>()" / "new TreeSet<>()" directly can be taken up as a follow-up 
task? As part of removing the Guava dependency, if we just update the imports 
from Guava to the internal Sets class, that would be quite clean. WDYT?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] bogthe commented on pull request #2706: HADOOP-13887. Support S3 client side encryption (S3-CSE) using AWS-SDK

2021-05-17 Thread GitBox


bogthe commented on pull request #2706:
URL: https://github.com/apache/hadoop/pull/2706#issuecomment-841812391


   > Had merge conflicts so had to force push.
   > Tests:
   > 
   > ```
   > [ERROR] Tests run: 1430, Failures: 1, Errors: 34, Skipped: 538
   > ```
   > 
   > Scale:
   > 
   > ```
   > [ERROR] Tests run: 151, Failures: 3, Errors: 21, Skipped: 29
   > ```
   > 
   > Most errors are MultiPart upload related:
   > 
   > ```
   > com.amazonaws.SdkClientException: Invalid part size: part sizes for 
encrypted multipart uploads must be multiples of the cipher block size (16) 
with the exception of the last part.
   > ```
   > 
   > Simply adding 16 (the padding length) to the multipart upload block size won't work. The part sizes need to be a multiple of 16, so CSE has that restriction. Also, one more thing to note here is that it assumes the last part to be an exception, which makes me believe that multipart upload in CSE has to be sequential (or can we upload the starting parts in parallel and then upload the last part?). So, potentially, another constraint while uploading could have performance impacts here, apart from the HEAD calls being required while downloading/listing.
   > @steveloughran
   
   Hi @mehakmeet, regarding multipart uploads: the last part is always an 
exception with regular multipart uploads too! You can do parallel uploads and 
even upload the last part first and it would still work (for regular 
multipart). My assumption is that for multipart uploads with CSE enabled the 
same functionality holds (except for the cipher block size restriction, but 
the minimum part size for regular multipart is 5MB = 5 * 1024 * 1024, which is 
still a multiple of 16 :D ).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3014: HDFS-16026. Restore cross platform mkstemp

2021-05-17 Thread GitBox


hadoop-yetus commented on pull request #3014:
URL: https://github.com/apache/hadoop/pull/3014#issuecomment-841786169






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


