[GitHub] [hadoop] hadoop-yetus commented on pull request #2094: HADOOP-16862. [JDK11] Support JavaDoc.

2020-06-23 Thread GitBox


hadoop-yetus commented on pull request #2094:
URL: https://github.com/apache/hadoop/pull/2094#issuecomment-648606050


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|--------:|:--------|
   | +0 :ok: |  reexec  |   1m 36s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  27m 23s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 16s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  46m 35s |  branch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 12s |  the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 12s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 11s |  the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 11s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML file.  |
   | +1 :green_heart: |  shadedclient  |  18m 19s |  patch has no errors when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 14s |  the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javadoc  |   0m 13s |  the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 13s |  hadoop-project in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  The patch does not generate ASF License warnings.  |
   |  |   |  72m  1s |   |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2094/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2094 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
   | uname | Linux c455acc53df8 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 84110d850e2 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-2094/1/testReport/ |
   | Max. process+thread count | 295 (vs. ulimit of 5500) |
   | modules | C: hadoop-project U: hadoop-project |
   | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-2094/1/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] Hexiaoqiao commented on a change in pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.

2020-06-23 Thread GitBox


Hexiaoqiao commented on a change in pull request #2085:
URL: https://github.com/apache/hadoop/pull/2085#discussion_r444638974



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/JniBasedUnixGroupsMapping.java
##
@@ -19,9 +19,9 @@
 package org.apache.hadoop.security;
 
 import java.io.IOException;
-import java.util.Arrays;
-import java.util.List;
+import java.util.*;

Review comment:
   suggest single class import.
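For context, the single-class-import style the reviewer is asking for (instead of the `java.util.*` star import in the diff) looks like the following minimal sketch. The class and helper names here are illustrative, not the actual `JniBasedUnixGroupsMapping` code; the point is only that each used type gets its own import line.

```java
// Explicit single-class imports instead of the wildcard `import java.util.*;`
// (the exact list depends on which java.util types the file actually uses).
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class SingleClassImports {
    // Deduplicate a group list into an insertion-ordered set,
    // using only the explicitly imported types.
    static Set<String> toGroupSet(List<String> groups) {
        return new LinkedHashSet<>(groups);
    }

    public static void main(String[] args) {
        System.out.println(toGroupSet(Arrays.asList("wheel", "staff", "wheel")));
    }
}
```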

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/Groups.java
##
@@ -345,28 +373,28 @@ public long read() {
  * implementation, otherwise is arranges for the cache to be updated later
  */
 @Override
-public ListenableFuture<List<String>> reload(final String key,
-    List<String> oldValue)
+public ListenableFuture<Set<String>> reload(final String key,
+    Set<String> oldValue)
     throws Exception {
   LOG.debug("GroupCacheLoader - reload (async).");
   if (!reloadGroupsInBackground) {
     return super.reload(key, oldValue);
   }
 
   backgroundRefreshQueued.incrementAndGet();
-  ListenableFuture<List<String>> listenableFuture =
-      executorService.submit(new Callable<List<String>>() {
+  ListenableFuture<Set<String>> listenableFuture =
+      executorService.submit(new Callable<Set<String>>() {

Review comment:
   replace with lambda statement?
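The rewrite the reviewer suggests can be sketched as follows. This is a minimal, self-contained illustration (the executor and group values are stand-ins, not the actual `Groups.java` code): since `Callable` is a functional interface, the anonymous class submitted to the executor can be replaced by a lambda with identical behavior.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class LambdaCallable {
    // Submits the same work twice: once as an anonymous Callable (as in the
    // patch) and once as the lambda the reviewer suggests, then compares.
    static boolean sameResult() throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            // Anonymous-class form, as written in the patch:
            Future<Set<String>> anon = executor.submit(new Callable<Set<String>>() {
                @Override
                public Set<String> call() {
                    return new LinkedHashSet<>(Arrays.asList("g1", "g2"));
                }
            });
            // Equivalent lambda statement:
            Future<Set<String>> lambda =
                executor.submit(() -> new LinkedHashSet<String>(Arrays.asList("g1", "g2")));
            return anon.get().equals(lambda.get());
        } finally {
            executor.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sameResult());
    }
}
```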

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/NullGroupsMapping.java
##
@@ -31,6 +33,19 @@
   public void cacheGroupsAdd(List<String> groups) {
   }
 
+  /**
+   * Get all various group memberships of a given user.
+   * Returns EMPTY set in case of non-existing user
+   *
+   * @param user User's name
+   * @return set of group memberships of user
+   * @throws IOException
+   */
+  @Override
+  public Set<String> getGroupsSet(String user) throws IOException {
+    return null;

Review comment:
   return Collections.emptySet();
   Returning an EMPTY set rather than null may be safer?
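The reviewer's point can be shown with a small sketch (the class and methods below are hypothetical stand-ins, not the actual `NullGroupsMapping` code): an immutable empty set from `Collections.emptySet()` lets callers iterate without a null check, whereas returning null forces every caller to guard against a `NullPointerException`.

```java
import java.util.Collections;
import java.util.Set;

public class EmptySetVsNull {
    // Hypothetical stand-in for getGroupsSet: unknown users get an
    // immutable empty set instead of null.
    static Set<String> getGroupsSet(String user) {
        return Collections.emptySet();
    }

    // Iterating the result needs no null check; the loop body simply
    // never runs for an unknown user.
    static int countGroups(String user) {
        int count = 0;
        for (String g : getGroupsSet(user)) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countGroups("nobody"));
    }
}
```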

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSPermissionChecker.java
##
@@ -549,7 +549,6 @@ private boolean hasPermission(INodeAttributes inode, 
FsAction access) {
* - Default entries may be present, but they are ignored during enforcement.
*
* @param inode INodeAttributes accessed inode
-   * @param snapshotId int snapshot ID

Review comment:
   It seems that this is not related to these changes.

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterUserMappings.java
##
@@ -111,6 +112,11 @@ public void cacheGroupsRefresh() throws IOException {
 @Override
 public void cacheGroupsAdd(List<String> groups) throws IOException {
 }
+
+@Override
+public Set<String> getGroupsSet(String user) throws IOException {
+  return null;

Review comment:
   Do we need to keep the same logic as getGroups? Not sure if this method will be invoked by other unit tests.
   ```
   @Override
   public List<String> getGroups(String user) throws IOException {
     LOG.info("Getting groups in MockUnixGroupsMapping");
     String g1 = user + (10 * i + 1);
     String g2 = user + (10 * i + 2);
     List<String> l = new ArrayList<>(2);
     l.add(g1);
     l.add(g2);
     i++;
     return l;
   }
   ```

##
File path: 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/lib/service/security/DummyGroupMapping.java
##
@@ -47,4 +48,9 @@ public void cacheGroupsRefresh() throws IOException {
   @Override
   public void cacheGroupsAdd(List<String> groups) throws IOException {
   }
+
+  @Override
+  public Set<String> getGroupsSet(String user) throws IOException {
+    return null;

Review comment:
   `return Collections.emptySet();`?
   BTW, it seems that class `DummyGroupMapping` is never used now; do we need to scrub it off?

##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/AccessControlList.java
##
@@ -20,10 +20,7 @@
 import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
-import java.util.Collection;
-import java.util.HashSet;
-import java.util.LinkedList;
-import java.util.List;
+import java.util.*;

Review comment:
   Another star import; suggest single-class imports here too.






[jira] [Assigned] (HADOOP-15761) intermittent failure of TestAbfsClient.validateUserAgent

2020-06-23 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reassigned HADOOP-15761:
-

Assignee: Bilahari T H

> intermittent failure of TestAbfsClient.validateUserAgent
> 
>
> Key: HADOOP-15761
> URL: https://issues.apache.org/jira/browse/HADOOP-15761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: HADOOP-15407
> Environment: test suites run from IntelliJ IDEA
>Reporter: Steve Loughran
>Assignee: Bilahari T H
>Priority: Minor
>
> (seemingly intermittent) failure of the pattern matcher in 
> {{TestAbfsClient.validateUserAgent}}
> {code}
> java.lang.AssertionError: User agent Azure Blob FS/1.0 (JavaJRE 1.8.0_121; 
> MacOSX 10.13.6; openssl-1.0) Partner Service does not match regexp Azure Blob 
> FS\/1.0 \(JavaJRE ([^\)]+) SunJSSE-1.8\) Partner Service
> {code}
> Using a regexp is probably too brittle here: safest just to look for some 
> specific substring.
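A sketch of the substring-based check the reporter suggests (the class and method names below are hypothetical; the sample user agent is copied from the failure message): `startsWith`/`contains` checks keep working when optional middle segments such as "Partner Service" or the SSL provider change, which is where the regexp broke.

```java
public class UserAgentCheck {
    // Substring checks instead of a brittle full-string regexp match.
    static boolean looksValid(String userAgent) {
        return userAgent.startsWith("Azure Blob FS/1.0")
            && userAgent.contains("Partner Service");
    }

    public static void main(String[] args) {
        // User agent string taken from the assertion failure in the issue.
        String ua = "Azure Blob FS/1.0 (JavaJRE 1.8.0_121; "
            + "MacOSX 10.13.6; openssl-1.0) Partner Service";
        System.out.println(looksValid(ua));
    }
}
```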



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Updated] (HADOOP-16966) ABFS: Enable new Rest Version and add documentation for appendblob and appendWIthFlush config parameters.

2020-06-23 Thread Ishani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishani updated HADOOP-16966:

Description: 
When the new RestVersion(2019-02-10) is enabled in the backend, enable that in 
the driver along with the documentation for the appendblob.key config values 
which are possible with the new RestVersion.

 Configs:

fs.azure.enable.appendwithflush

fs.azure.appendblob.key

 

  was:
When the new RestVersion(2019-12-12) is enabled in the backend, enable that in 
the driver along with the documentation for the appendWithFlush config and 
appendblob.key config values which are possible with the new RestVersion.

 Configs:

fs.azure.enable.appendwithflush

fs.azure.appendblob.key

 


> ABFS: Enable new Rest Version and add documentation for appendblob and 
> appendWIthFlush config parameters.
> -
>
> Key: HADOOP-16966
> URL: https://issues.apache.org/jira/browse/HADOOP-16966
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Ishani
>Assignee: Ishani
>Priority: Major
>
> When the new RestVersion(2019-02-10) is enabled in the backend, enable that 
> in the driver along with the documentation for the appendblob.key config 
> values which are possible with the new RestVersion.
>  Configs:
> fs.azure.enable.appendwithflush
> fs.azure.appendblob.key
>  






[jira] [Assigned] (HADOOP-15710) ABFS checkException to map 403 to AccessDeniedException

2020-06-23 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reassigned HADOOP-15710:
-

Assignee: Bilahari T H

> ABFS checkException to map 403 to AccessDeniedException
> ---
>
> Key: HADOOP-15710
> URL: https://issues.apache.org/jira/browse/HADOOP-15710
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: HADOOP-15407
>Reporter: Steve Loughran
>Assignee: Bilahari T H
>Priority: Major
>
> when you can't auth to ABFS, you get a 403 exception back. This should be 
> translated into an access denied exception for better clarity/handling






[jira] [Assigned] (HADOOP-16281) ABFS: Rename operation, GetFileStatus before rename operation and throw exception on the driver side

2020-06-23 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reassigned HADOOP-16281:
-

Assignee: Bilahari T H  (was: Da Zhou)

> ABFS: Rename operation, GetFileStatus before rename operation and  throw 
> exception on the driver side
> -
>
> Key: HADOOP-16281
> URL: https://issues.apache.org/jira/browse/HADOOP-16281
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2.0
>Reporter: Da Zhou
>Assignee: Bilahari T H
>Priority: Major
>
> ABFS should add the rename with options:
>  [https://github.com/apache/hadoop/pull/743]






[jira] [Assigned] (HADOOP-15702) ABFS: Increase timeout of ITestAbfsReadWriteAndSeek

2020-06-23 Thread Bilahari T H (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H reassigned HADOOP-15702:
-

Assignee: Bilahari T H  (was: Da Zhou)

> ABFS: Increase timeout of ITestAbfsReadWriteAndSeek
> ---
>
> Key: HADOOP-15702
> URL: https://issues.apache.org/jira/browse/HADOOP-15702
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, test
>Affects Versions: HADOOP-15407
>Reporter: Sean Mackrory
>Assignee: Bilahari T H
>Priority: Major
>
> ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek 
> fails for me all the time. Let's increase the timeout limit.
> It also seems to get executed twice...






[GitHub] [hadoop] aajisaka opened a new pull request #2094: HADOOP-16862. [JDK11] Support JavaDoc.

2020-06-23 Thread GitBox


aajisaka opened a new pull request #2094:
URL: https://github.com/apache/hadoop/pull/2094


   JIRA: https://issues.apache.org/jira/browse/HADOOP-16862
   
   Reference: https://bugs.openjdk.java.net/browse/JDK-8212233
   
   > UPDATE FOR THOSE WHO GOOGLED THIS BUG:
   > If the project uses source/target 8, adding `<source>8</source>` in the javadoc configuration should make the project buildable on jdk {11, 12, 13}:
   ```
   <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-javadoc-plugin</artifactId>
     <configuration>
       <source>8</source>
     </configuration>
     ...
   </plugin>
   ```






[jira] [Assigned] (HADOOP-16862) [JDK11] Support JavaDoc

2020-06-23 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-16862:
--

Assignee: Akira Ajisaka

> [JDK11] Support JavaDoc
> ---
>
> Key: HADOOP-16862
> URL: https://issues.apache.org/jira/browse/HADOOP-16862
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> This issue is to run {{mvn javadoc:javadoc}} successfully in Apache Hadoop 
> with Java 11.
> Now there are many errors.






[jira] [Commented] (HADOOP-16862) [JDK11] Support JavaDoc

2020-06-23 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143506#comment-17143506
 ] 

Akira Ajisaka commented on HADOOP-16862:


{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:3.0.1:javadoc (default-cli) on 
project hadoop-annotations: An error has occurred in Javadoc report generation: 
[ERROR] Exit code: 1 - javadoc: warning - You have specified the HTML version 
as HTML 4.01 by using the -html4 option.
[ERROR] The default is currently HTML5 and the support for HTML 4.01 will be 
removed
[ERROR] in a future release. To suppress this warning, please ensure that any 
HTML constructs
[ERROR] in your comments are valid in HTML5, and remove the -html4 option.
[ERROR] javadoc: error - The code being documented uses modules but the 
packages defined in https://docs.oracle.com/javase/8/docs/api/ are in the 
unnamed module.
[ERROR] 
[ERROR] Command line was: 
/Library/Java/JavaVirtualMachines/adoptopenjdk-11.jdk/Contents/Home/bin/javadoc 
@options @packages
[ERROR] 
[ERROR] Refer to the generated Javadoc files in 
'/Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-annotations/target/site/apidocs'
 dir.
{noformat}

> [JDK11] Support JavaDoc
> ---
>
> Key: HADOOP-16862
> URL: https://issues.apache.org/jira/browse/HADOOP-16862
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira Ajisaka
>Priority: Major
>
> This issue is to run {{mvn javadoc:javadoc}} successfully in Apache Hadoop 
> with Java 11.
> Now there are many errors.






[jira] [Commented] (HADOOP-17085) javadoc failing in the yetus report with the latest trunk

2020-06-23 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143503#comment-17143503
 ] 

Akira Ajisaka commented on HADOOP-17085:


Thanks [~ishaniahuja] for the report. Javadoc is currently failing with JDK11: HADOOP-16862

> javadoc failing in the yetus report with the latest trunk
> -
>
> Key: HADOOP-17085
> URL: https://issues.apache.org/jira/browse/HADOOP-17085
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, yetus
>Reporter: Ishani
>Priority: Major
>
> javadoc is failing in the latest yetus report on  trunk. below is a report 
> from an empty PR where it is failing.
>  
>  *-1 overall*
> ||Vote||Subsystem||Runtime||Comment||
> |+0 |reexec|26m 14s|Docker mode activated.|
> | | |_ Prechecks _| |
> |+1 |dupname|0m 0s|No case conflicting files found.|
> |+1 |[@author|https://github.com/author]|0m 0s|The patch does not contain 
> any [@author|https://github.com/author] tags.|
> |-1 ❌|test4tests|0m 0s|The patch doesn't appear to include any new or 
> modified tests. Please justify why no new tests are needed for this patch. 
> Also please list what manual steps were performed to verify this patch.|
> | | |_ trunk Compile Tests _| |
> |+1 |mvninstall|23m 14s|trunk passed|
> |+1 |compile|0m 46s|trunk passed with JDK 
> Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04|
> |+1 |compile|0m 30s|trunk passed with JDK Private 
> Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |+1 |checkstyle|0m 24s|trunk passed|
> |+1 |mvnsite|0m 35s|trunk passed|
> |+1 |shadedclient|16m 53s|branch has no errors when building and testing our 
> client artifacts.|
> |-1 ❌|javadoc|0m 25s|hadoop-azure in trunk failed with JDK 
> Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.|
> |+1 |javadoc|0m 23s|trunk passed with JDK Private 
> Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |+0 |spotbugs|0m 53s|Used deprecated FindBugs config; considering switching 
> to SpotBugs.|
> |+1 |findbugs|0m 51s|trunk passed|
> | | |_ Patch Compile Tests _| |
> |+1 |mvninstall|0m 27s|the patch passed|
> |+1 |compile|0m 27s|the patch passed with JDK 
> Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04|
> |+1 |javac|0m 27s|the patch passed|
> |+1 |compile|0m 22s|the patch passed with JDK Private 
> Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |+1 |javac|0m 22s|the patch passed|
> |+1 |checkstyle|0m 15s|the patch passed|
> |+1 |mvnsite|0m 24s|the patch passed|
> |+1 |whitespace|0m 0s|The patch has no whitespace issues.|
> |+1 |shadedclient|15m 29s|patch has no errors when building and testing our 
> client artifacts.|
> |-1 ❌|javadoc|0m 22s|hadoop-azure in the patch failed with JDK 
> Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.|
> |+1 |javadoc|0m 20s|the patch passed with JDK Private 
> Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |+1 |findbugs|0m 53s|the patch passed|
> | | |_ Other Tests _| |
> |+1 |unit|1m 19s|hadoop-azure in the patch passed.|
> |+1 |asflicense|0m 28s|The patch does not generate ASF License warnings.|
> | | |92m 45s| |
> ||Subsystem||Report/Notes||
> |Docker|ClientAPI=1.40 ServerAPI=1.40 base: 
> [https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/Dockerfile]|
> |GITHUB PR|[#2091|https://github.com/apache/hadoop/pull/2091]|
> |Optional Tests|dupname asflicense compile javac javadoc mvninstall mvnsite 
> unit shadedclient findbugs checkstyle|
> |uname|Linux ddd84b65f91e 4.15.0-101-generic 
> [#102|https://github.com/apache/hadoop/pull/102]-Ubuntu SMP Mon May 11 
> 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux|
> |Build tool|maven|
> |Personality|personality/hadoop.sh|
> |git revision|trunk / 
> [{{7c02d18}}|https://github.com/apache/hadoop/commit/7c02d1889bbeabc73c95a4c83f0cd204365ff410]|
> |Default Java|Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |Multi-JDK 
> versions|/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04
>  /usr/lib/jvm/java-8-openjdk-amd64:Private 
> Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |javadoc|[https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt]|
> |javadoc|[https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt]|
> |Test 
> Results|[https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/testReport/]|
> |Max. process+thread count|308 (vs. ulimit of 5500)|
> |modules|C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure|
> |Console 
> output|[https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/console]|
> |versions|git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1|
> |Powered by|Apache Yetus 0.12.0 
> [https://yetus.apache.org|https://yetus.apache.org/]|
> This 

[GitHub] [hadoop] Hexiaoqiao edited a comment on pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.

2020-06-23 Thread GitBox


Hexiaoqiao edited a comment on pull request #2085:
URL: https://github.com/apache/hadoop/pull/2085#issuecomment-648569625


   Hi @xiaoyuyao , it may be necessary to trigger Jenkins manually following this guide: https://cwiki.apache.org/confluence/display/HADOOP/GitHub+Integration#GitHubIntegration-HowtorunJenkinsprecommitjobforaPR(committers). Not sure if it is the newest practice guide. I tried to trigger it, please see: https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-multibranch/view/change-requests/job/PR-2085/2/console






[jira] [Commented] (HADOOP-17085) javadoc failing in the yetus report with the latest trunk

2020-06-23 Thread Ishani (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143497#comment-17143497
 ] 

Ishani commented on HADOOP-17085:
-

[https://github.com/apache/hadoop/pull/2072]

> javadoc failing in the yetus report with the latest trunk
> -
>
> Key: HADOOP-17085
> URL: https://issues.apache.org/jira/browse/HADOOP-17085
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, yetus
>Reporter: Ishani
>Priority: Major
>
> javadoc is failing in the latest yetus report on  trunk. below is a report 
> from an empty PR where it is failing.
>  
>  *-1 overall*
> ||Vote||Subsystem||Runtime||Comment||
> |+0 |reexec|26m 14s|Docker mode activated.|
> | | |_ Prechecks _| |
> |+1 |dupname|0m 0s|No case conflicting files found.|
> |+1 |[@author|https://github.com/author]|0m 0s|The patch does not contain 
> any [@author|https://github.com/author] tags.|
> |-1 ❌|test4tests|0m 0s|The patch doesn't appear to include any new or 
> modified tests. Please justify why no new tests are needed for this patch. 
> Also please list what manual steps were performed to verify this patch.|
> | | |_ trunk Compile Tests _| |
> |+1 |mvninstall|23m 14s|trunk passed|
> |+1 |compile|0m 46s|trunk passed with JDK 
> Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04|
> |+1 |compile|0m 30s|trunk passed with JDK Private 
> Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |+1 |checkstyle|0m 24s|trunk passed|
> |+1 |mvnsite|0m 35s|trunk passed|
> |+1 |shadedclient|16m 53s|branch has no errors when building and testing our 
> client artifacts.|
> |-1 ❌|javadoc|0m 25s|hadoop-azure in trunk failed with JDK 
> Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.|
> |+1 |javadoc|0m 23s|trunk passed with JDK Private 
> Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |+0 |spotbugs|0m 53s|Used deprecated FindBugs config; considering switching 
> to SpotBugs.|
> |+1 |findbugs|0m 51s|trunk passed|
> | | |_ Patch Compile Tests _| |
> |+1 |mvninstall|0m 27s|the patch passed|
> |+1 |compile|0m 27s|the patch passed with JDK 
> Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04|
> |+1 |javac|0m 27s|the patch passed|
> |+1 |compile|0m 22s|the patch passed with JDK Private 
> Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |+1 |javac|0m 22s|the patch passed|
> |+1 |checkstyle|0m 15s|the patch passed|
> |+1 |mvnsite|0m 24s|the patch passed|
> |+1 |whitespace|0m 0s|The patch has no whitespace issues.|
> |+1 |shadedclient|15m 29s|patch has no errors when building and testing our 
> client artifacts.|
> |-1 ❌|javadoc|0m 22s|hadoop-azure in the patch failed with JDK 
> Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.|
> |+1 |javadoc|0m 20s|the patch passed with JDK Private 
> Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |+1 |findbugs|0m 53s|the patch passed|
> | | |_ Other Tests _| |
> |+1 |unit|1m 19s|hadoop-azure in the patch passed.|
> |+1 |asflicense|0m 28s|The patch does not generate ASF License warnings.|
> | | |92m 45s| |
> ||Subsystem||Report/Notes||
> |Docker|ClientAPI=1.40 ServerAPI=1.40 base: 
> [https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/Dockerfile]|
> |GITHUB PR|[#2091|https://github.com/apache/hadoop/pull/2091]|
> |Optional Tests|dupname asflicense compile javac javadoc mvninstall mvnsite 
> unit shadedclient findbugs checkstyle|
> |uname|Linux ddd84b65f91e 4.15.0-101-generic 
> [#102|https://github.com/apache/hadoop/pull/102]-Ubuntu SMP Mon May 11 
> 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux|
> |Build tool|maven|
> |Personality|personality/hadoop.sh|
> |git revision|trunk / 
> [{{7c02d18}}|https://github.com/apache/hadoop/commit/7c02d1889bbeabc73c95a4c83f0cd204365ff410]|
> |Default Java|Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |Multi-JDK 
> versions|/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04
>  /usr/lib/jvm/java-8-openjdk-amd64:Private 
> Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |javadoc|[https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt]|
> |javadoc|[https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt]|
> |Test 
> Results|[https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/testReport/]|
> |Max. process+thread count|308 (vs. ulimit of 5500)|
> |modules|C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure|
> |Console 
> output|[https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/console]|
> |versions|git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1|
> |Powered by|Apache Yetus 0.12.0 
> [https://yetus.apache.org|https://yetus.apache.org/]|
> This message was automatically generated.




[GitHub] [hadoop] Hexiaoqiao commented on pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.

2020-06-23 Thread GitBox


Hexiaoqiao commented on pull request #2085:
URL: https://github.com/apache/hadoop/pull/2085#issuecomment-648569625


   Hi @xiaoyuyao , it may be necessary to trigger Jenkins manually following this guide: https://cwiki.apache.org/confluence/display/HADOOP/GitHub+Integration#GitHubIntegration-HowtorunJenkinsprecommitjobforaPR(committers). I tried to trigger it, please see: https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-multibranch/view/change-requests/job/PR-2085/2/console






[jira] [Created] (HADOOP-17085) javadoc failing in the yetus report with the latest trunk

2020-06-23 Thread Ishani (Jira)
Ishani created HADOOP-17085:
---

 Summary: javadoc failing in the yetus report with the latest trunk
 Key: HADOOP-17085
 URL: https://issues.apache.org/jira/browse/HADOOP-17085
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build, yetus
Reporter: Ishani


javadoc is failing in the latest yetus report on trunk. Below is a report from an empty PR where it is failing.

 

 *-1 overall*
||Vote||Subsystem||Runtime||Comment||
|+0 |reexec|26m 14s|Docker mode activated.|
| | |_ Prechecks _| |
|+1 |dupname|0m 0s|No case conflicting files found.|
|+1 |[@author|https://github.com/author]|0m 0s|The patch does not contain any 
[@author|https://github.com/author] tags.|
|-1 ❌|test4tests|0m 0s|The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch.|
| | |_ trunk Compile Tests _| |
|+1 |mvninstall|23m 14s|trunk passed|
|+1 |compile|0m 46s|trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04|
|+1 |compile|0m 30s|trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09|
|+1 |checkstyle|0m 24s|trunk passed|
|+1 |mvnsite|0m 35s|trunk passed|
|+1 |shadedclient|16m 53s|branch has no errors when building and testing our 
client artifacts.|
|-1 ❌|javadoc|0m 25s|hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.|
|+1 |javadoc|0m 23s|trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09|
|+0 |spotbugs|0m 53s|Used deprecated FindBugs config; considering switching to 
SpotBugs.|
|+1 |findbugs|0m 51s|trunk passed|
| | |_ Patch Compile Tests _| |
|+1 |mvninstall|0m 27s|the patch passed|
|+1 |compile|0m 27s|the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04|
|+1 |javac|0m 27s|the patch passed|
|+1 |compile|0m 22s|the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09|
|+1 |javac|0m 22s|the patch passed|
|+1 |checkstyle|0m 15s|the patch passed|
|+1 |mvnsite|0m 24s|the patch passed|
|+1 |whitespace|0m 0s|The patch has no whitespace issues.|
|+1 |shadedclient|15m 29s|patch has no errors when building and testing our 
client artifacts.|
|-1 ❌|javadoc|0m 22s|hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.|
|+1 |javadoc|0m 20s|the patch passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09|
|+1 |findbugs|0m 53s|the patch passed|
| | |_ Other Tests _| |
|+1 |unit|1m 19s|hadoop-azure in the patch passed.|
|+1 |asflicense|0m 28s|The patch does not generate ASF License warnings.|
| | |92m 45s| |

||Subsystem||Report/Notes||
|Docker|ClientAPI=1.40 ServerAPI=1.40 base: 
[https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/Dockerfile]|
|GITHUB PR|[#2091|https://github.com/apache/hadoop/pull/2091]|
|Optional Tests|dupname asflicense compile javac javadoc mvninstall mvnsite 
unit shadedclient findbugs checkstyle|
|uname|Linux ddd84b65f91e 4.15.0-101-generic 
#102-Ubuntu SMP Mon May 11 10:07:26 
UTC 2020 x86_64 x86_64 x86_64 GNU/Linux|
|Build tool|maven|
|Personality|personality/hadoop.sh|
|git revision|trunk / 
[{{7c02d18}}|https://github.com/apache/hadoop/commit/7c02d1889bbeabc73c95a4c83f0cd204365ff410]|
|Default Java|Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
|Multi-JDK 
versions|/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04
 /usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09|
|javadoc|[https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt]|
|javadoc|[https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt]|
|Test 
Results|[https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/testReport/]|
|Max. process+thread count|308 (vs. ulimit of 5500)|
|modules|C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure|
|Console 
output|[https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/console]|
|versions|git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1|
|Powered by|Apache Yetus 0.12.0 
[https://yetus.apache.org|https://yetus.apache.org/]|

This message was automatically generated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Updated] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10

2020-06-23 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17083:
---
Attachment: HADOOP-17083-branch-2.10.002.patch

> Update guava to 27.0-jre in hadoop branch-2.10
> --
>
> Key: HADOOP-17083
> URL: https://issues.apache.org/jira/browse/HADOOP-17083
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.10.0
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17083-branch-2.10.001.patch, 
> HADOOP-17083-branch-2.10.002.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to a new CVE, 
> [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].
>  
> The upgrade should not affect the version of Java used; branch-2.10 still 
> sticks to JDK7.







[GitHub] [hadoop] mehakmeet commented on pull request #2076: Hadoop 16961. ABFS: Adding metrics to AbfsInputStream

2020-06-23 Thread GitBox


mehakmeet commented on pull request #2076:
URL: https://github.com/apache/hadoop/pull/2076#issuecomment-648567145


   Using ```rawConfig.setBoolean(DISABLE_ABFS_CACHE_KEY, true);``` to create 
the AzureBlobFileSystem for the test isn't solving the issue either. It might be 
some other problem.
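   For background, Hadoop's FileSystem cache can also be disabled declaratively rather than programmatically. A minimal sketch, assuming the standard `fs.<scheme>.impl.disable.cache` convention applies to the `abfs` scheme (property name inferred from that convention, not taken from this thread):

```xml
<!-- core-site.xml (test resources): have each FileSystem.get() call
     return a fresh AzureBlobFileSystem instance by bypassing the cache. -->
<property>
  <name>fs.abfs.impl.disable.cache</name>
  <value>true</value>
</property>
```

   With the cache disabled, each test obtains an independent filesystem instance (and thus independent stream/statistics state), which is often what instrumentation tests need.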









[GitHub] [hadoop] jianghuazhu opened a new pull request #2093: HDFS-15416. DataStorage#addStorageLocations() should add more reasonable information verification.

2020-06-23 Thread GitBox


jianghuazhu opened a new pull request #2093:
URL: https://github.com/apache/hadoop/pull/2093


   …ble information verification.
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   









[GitHub] [hadoop] jianghuazhu closed pull request #2079: HDFS-15416. DataStorage#addStorageLocations() should add more reasonable information verification.

2020-06-23 Thread GitBox


jianghuazhu closed pull request #2079:
URL: https://github.com/apache/hadoop/pull/2079


   









[jira] [Commented] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10

2020-06-23 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143473#comment-17143473
 ] 

Hadoop QA commented on HADOOP-17083:


| (x) *-1 overall* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 21m 49s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. |
|| || || || branch-2.10 Compile Tests ||
| 0 | mvndep | 1m 5s | Maven dependency ordering for branch |
| +1 | mvninstall | 9m 54s | branch-2.10 passed |
| +1 | compile | 14m 0s | branch-2.10 passed with JDK Oracle Corporation-1.7.0_95-b00 |
| +1 | compile | 11m 27s | branch-2.10 passed with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 |
| +1 | checkstyle | 2m 6s | branch-2.10 passed |
| +1 | mvnsite | 6m 31s | branch-2.10 passed |
| +1 | javadoc | 6m 33s | branch-2.10 passed with JDK Oracle Corporation-1.7.0_95-b00 |
| +1 | javadoc | 5m 14s | branch-2.10 passed with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 |
| 0 | spotbugs | 1m 16s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| 0 | findbugs | 0m 23s | branch/hadoop-project no findbugs output file (findbugsXml.xml) |
| -1 | findbugs | 2m 6s | hadoop-common-project/hadoop-common in branch-2.10 has 14 extant findbugs warnings. |
| -1 | findbugs | 2m 0s | hadoop-hdfs-project/hadoop-hdfs-client in branch-2.10 has 1 extant findbugs warnings. |
| -1 | findbugs | 2m 35s | hadoop-hdfs-project/hadoop-hdfs in branch-2.10 has 10 extant findbugs warnings. |
| -1 | findbugs | 1m 31s | hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in branch-2.10 has 1 extant findbugs warnings. |
| -1 | findbugs | 1m 13s | hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core in branch-2.10 has 3 extant findbugs warnings. |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 25s | Maven dependency ordering for patch |
| +1 | mvninstall | 4m 41s | the patch passed |
| +1 | compile | 13m 18s | the patch passed with JDK Oracle Corporation-1.7.0_95-b00 |
| -1 | javac | 13m 18s | root-jdkOracleCorporation-1.7.0_95-b00 with JDK Oracle Corporation-1.7.0_95-b00 generated 12 new + 1434 unchanged - 1 fixed = 1446 total (was 1435) |
| +1 | compile | 11m 28s | the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 |
| -1 | javac | 11m 28s | root-jdkPrivateBuild-1.8.0_252-8u252-b09-1~16.04-b09 with JDK Private Build-1.8.0_252-8u252-b09-1~16.04-b09 generated 13 new + 1335 unchanged - 2 fixed = 1348 total (was 1337) |
| +1 | 

[jira] [Created] (HADOOP-17084) Update Dockerfile_aarch64 to use Bionic

2020-06-23 Thread RuiChen (Jira)
RuiChen created HADOOP-17084:


 Summary: Update Dockerfile_aarch64 to use Bionic
 Key: HADOOP-17084
 URL: https://issues.apache.org/jira/browse/HADOOP-17084
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, test
Reporter: RuiChen


The Dockerfile for x86 has been updated to use Ubuntu Bionic, JDK 11 and other 
changes; we should update the Dockerfile for aarch64 to follow these changes and 
keep the same behavior.







[GitHub] [hadoop] umamaheswararao opened a new pull request #2092: HDFS-15429. mkdirs should work when parent dir is an internalDir and fallback configured.

2020-06-23 Thread GitBox


umamaheswararao opened a new pull request #2092:
URL: https://github.com/apache/hadoop/pull/2092


   https://issues.apache.org/jira/browse/HDFS-15429









[GitHub] [hadoop] hadoop-yetus commented on pull request #2083: HADOOP-17077. S3A delegation token binding to support secondary binding list

2020-06-23 Thread GitBox


hadoop-yetus commented on pull request #2083:
URL: https://github.com/apache/hadoop/pull/2083#issuecomment-648421840


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
2 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  18m 56s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 42s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 59s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 31s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 59s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 16s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 19s |  hadoop-tools/hadoop-aws: The 
patch generated 8 new + 18 unchanged - 2 fixed = 26 total (was 20)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 56s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 28s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  findbugs  |   1m  5s |  hadoop-tools/hadoop-aws generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 23s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  61m  7s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Unread field:SecondaryDelegationToken.java:[line 157] |
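   
   For context, the FindBugs "Unread field" warning (bug pattern URF_UNREAD_FIELD) flags a field that is written but never read. A minimal illustrative sketch of the pattern, not the actual SecondaryDelegationToken code:
   
```java
public class UnreadFieldExample {
    // FindBugs URF_UNREAD_FIELD: assigned in the constructor but never
    // read anywhere, so it is dead state or hints at a missing accessor.
    private final String tokenKind;

    public UnreadFieldExample(String tokenKind) {
        this.tokenKind = tokenKind;
    }

    public static void main(String[] args) {
        // Compiles and runs fine; only static analysis notices the field.
        new UnreadFieldExample("secondary");
    }
}
```
   
   The usual fix is to either read the field where it was intended to be used or delete it.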
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2083 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 91b036b8a814 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 03f855e3e7a |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/3/artifact/out/branch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/3/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | findbugs | 

[jira] [Updated] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10

2020-06-23 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17083:
---
Attachment: HADOOP-17083-branch-2.10.001.patch

> Update guava to 27.0-jre in hadoop branch-2.10
> --
>
> Key: HADOOP-17083
> URL: https://issues.apache.org/jira/browse/HADOOP-17083
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.10.0
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17083-branch-2.10.001.patch
>
>
> com.google.guava:guava should be upgraded to 27.0-jre due to a new CVE, 
> [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].
>  
> The upgrade should not affect the version of Java used; branch-2.10 still 
> sticks to JDK7.







[jira] [Updated] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10

2020-06-23 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmed Hussein updated HADOOP-17083:
---
Status: Patch Available  (was: In Progress)

> Update guava to 27.0-jre in hadoop branch-2.10
> --
>
> Key: HADOOP-17083
> URL: https://issues.apache.org/jira/browse/HADOOP-17083
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.10.0
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> com.google.guava:guava should be upgraded to 27.0-jre due to a new CVE, 
> [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].
>  
> The upgrade should not affect the version of Java used; branch-2.10 still 
> sticks to JDK7.







[jira] [Work started] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10

2020-06-23 Thread Ahmed Hussein (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17083 started by Ahmed Hussein.
--
> Update guava to 27.0-jre in hadoop branch-2.10
> --
>
> Key: HADOOP-17083
> URL: https://issues.apache.org/jira/browse/HADOOP-17083
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, security
>Affects Versions: 2.10.0
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>
> com.google.guava:guava should be upgraded to 27.0-jre due to a new CVE, 
> [CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].
>  
> The upgrade should not affect the version of Java used; branch-2.10 still 
> sticks to JDK7.







[GitHub] [hadoop] goiri merged pull request #2047: HDFS-15383. RBF: Add support for router delegation token without watch

2020-06-23 Thread GitBox


goiri merged pull request #2047:
URL: https://github.com/apache/hadoop/pull/2047


   









[GitHub] [hadoop] hadoop-yetus commented on pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-06-23 Thread GitBox


hadoop-yetus commented on pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#issuecomment-648391250


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 26s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
11 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 16s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 28s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  9s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 25s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 49s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 46s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 23s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 56s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 21s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 14s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  63m 26s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2072 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux d281e7b19a65 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 03f855e3e7a |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/10/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/10/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/10/testReport/ |
   | Max. process+thread count | 313 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2072/10/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] ishaniahuja commented on a change in pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-06-23 Thread GitBox


ishaniahuja commented on a change in pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#discussion_r47047



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java
##
@@ -59,6 +59,9 @@
   public static final String FS_AZURE_ENABLE_AUTOTHROTTLING = 
"fs.azure.enable.autothrottling";
   public static final String FS_AZURE_ALWAYS_USE_HTTPS = 
"fs.azure.always.use.https";
   public static final String FS_AZURE_ATOMIC_RENAME_KEY = 
"fs.azure.atomic.rename.key";
+  /** Provides a config to provide comma separated path prefixes on which 
Appendblob based files are created
+   *  Default is empty. **/
+  public static final String FS_AZURE_APPEND_BLOB_KEY = 
"fs.azure.appendblob.key";

Review comment:
   done











[GitHub] [hadoop] xiaoyuyao commented on pull request #2085: HADOOP-17079. Optimize UGI#getGroups by adding UGI#getGroupsSet.

2020-06-23 Thread GitBox


xiaoyuyao commented on pull request #2085:
URL: https://github.com/apache/hadoop/pull/2085#issuecomment-648362938


   @jojochuang, did I miss anything needed to trigger a Jenkins run for this PR? 
I also added a link to the patch in the JIRA, following the "Hadoop How to 
Contribute" document's guidance on PRs, but that does not seem to work either.









[GitHub] [hadoop] ishaniahuja commented on a change in pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-06-23 Thread GitBox


ishaniahuja commented on a change in pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#discussion_r47198



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java
##
@@ -323,6 +328,35 @@ private synchronized void writeCurrentBufferToService() 
throws IOException {
 final long offset = position;
 position += bytesLength;
 
+if (this.isAppendBlob) {

Review comment:
   done











[GitHub] [hadoop] ishaniahuja commented on a change in pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-06-23 Thread GitBox


ishaniahuja commented on a change in pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#discussion_r46862



##########
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java
##########
@@ -1314,7 +1352,30 @@ private String convertXmsPropertiesToCommaSeparatedString(final Hashtable dirSet) {
+
+    for (String dir : dirSet) {
+      if (dir.isEmpty() || key.startsWith(dir)) {
+        return true;
+      }
+
+      try {
+        URI uri = new URI(dir);
+        if (null == uri.getAuthority()) {

Review comment:
   code is removed.








[GitHub] [hadoop] ishaniahuja commented on pull request #2072: HADOOP-17058. ABFS: Support for AppendBlob in Hadoop ABFS Driver

2020-06-23 Thread GitBox


ishaniahuja commented on pull request #2072:
URL: https://github.com/apache/hadoop/pull/2072#issuecomment-648360045


   ns, canary, rest-version:2020-02-10, appendblob
   Tests run: 84, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 443, Failures: 0, Errors: 0, Skipped: 42
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
   
   
   
   
   ns, canary, rest-version:2020-02-10
   Tests run: 84, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 443, Failures: 0, Errors: 0, Skipped: 42
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
   
   
   
   ns, canary, original rest-version:2018-11-09
   Tests run: 84, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 443, Failures: 0, Errors: 0, Skipped: 42
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
   
   
   
   nns, public endpoint, original rest version(2018-11-09)
   Tests run: 84, Failures: 0, Errors: 0, Skipped: 0
   Tests run: 443, Failures: 0, Errors: 0, Skipped: 245
   Tests run: 207, Failures: 0, Errors: 0, Skipped: 24
   






[GitHub] [hadoop] anoopsjohn commented on a change in pull request #2073: HADOOP-16998 WASB : NativeAzureFsOutputStream#close() throwing java.l…

2020-06-23 Thread GitBox


anoopsjohn commented on a change in pull request #2073:
URL: https://github.com/apache/hadoop/pull/2073#discussion_r444367323



##########
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/SyncableDataOutputStream.java
##########
@@ -22,12 +22,13 @@
 import java.io.IOException;
 import java.io.OutputStream;
 
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.fs.StreamCapabilities;
-import org.apache.hadoop.fs.Syncable;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import org.apache.hadoop.fs.StreamCapabilities;
+import org.apache.hadoop.fs.Syncable;
+import org.apache.hadoop.classification.InterfaceAudience;

Review comment:
   You mean move to L27 so that org.slf4j and org.apache are in a single block?








[jira] [Commented] (HADOOP-17079) Optimize UGI#getGroups by adding UGI#getGroupsSet

2020-06-23 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143104#comment-17143104
 ] 

Xiaoyu Yao commented on HADOOP-17079:
-

https://github.com/apache/hadoop/pull/2085.patch

> Optimize UGI#getGroups by adding UGI#getGroupsSet
> -
>
> Key: HADOOP-17079
> URL: https://issues.apache.org/jira/browse/HADOOP-17079
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Major
>
> UGI#getGroups has been optimized with HADOOP-13442 by avoiding the 
> List->Set->List conversion. However, the returned list is not optimized for 
> contains() lookups, especially when the user's group membership list is huge 
> (thousands+). This ticket is opened to add a UGI#getGroupsSet and use 
> Set#contains() instead of List#contains() to speed up large group lookups 
> while minimizing List->Set conversions in Groups#getGroups() calls. 
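The contains() asymmetry described in the ticket can be shown in isolation. This is an illustrative standalone sketch, not the UserGroupInformation API: a List answers membership by linear scan, a HashSet by hash lookup, so for group lists in the thousands the Set-returning variant pays the List->Set conversion once instead of scanning on every check.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Standalone illustration of HADOOP-17079's motivation: membership checks
// on a List are O(n) per lookup, while a HashSet is O(1) amortized.
// This sketch is not the UserGroupInformation/Groups API itself.
class GroupLookupSketch {
  static boolean isMemberList(List<String> groups, String g) {
    return groups.contains(g);   // linear scan over all group names
  }

  static boolean isMemberSet(Set<String> groups, String g) {
    return groups.contains(g);   // constant-time hash lookup
  }

  public static void main(String[] args) {
    List<String> groupList = new ArrayList<>();
    for (int i = 0; i < 5000; i++) {
      groupList.add("group-" + i);
    }
    // One List->Set conversion, then every lookup avoids the scan.
    Set<String> groupSet = new HashSet<>(groupList);
    System.out.println(isMemberList(groupList, "group-4999")); // prints true
    System.out.println(isMemberSet(groupSet, "group-4999"));   // prints true
  }
}
```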



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17083) Update guava to 27.0-jre in hadoop branch-2.10

2020-06-23 Thread Ahmed Hussein (Jira)
Ahmed Hussein created HADOOP-17083:
--

 Summary: Update guava to 27.0-jre in hadoop branch-2.10
 Key: HADOOP-17083
 URL: https://issues.apache.org/jira/browse/HADOOP-17083
 Project: Hadoop Common
  Issue Type: Bug
  Components: common, security
Affects Versions: 2.10.0
Reporter: Ahmed Hussein
Assignee: Ahmed Hussein


com.google.guava:guava should be upgraded to 27.0-jre due to a new CVE: 
[CVE-2018-10237|https://nvd.nist.gov/vuln/detail/CVE-2018-10237].

The upgrade should not affect the Java version used; branch-2.10 still 
sticks to JDK 7.






[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8

2020-06-23 Thread Sean Busbey (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143036#comment-17143036
 ] 

Sean Busbey commented on HADOOP-16219:
--

So long as we maintain JDK 7 compatibility, I think it's fine.

It's still going to break a bunch of downstream folks, so we need to 
release-note it.

> [JDK8] Set minimum version of Hadoop 2 to JDK 8
> ---
>
> Key: HADOOP-16219
> URL: https://issues.apache.org/jira/browse/HADOOP-16219
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.10.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16219-branch-2-001.patch
>
>
> Java 7 is long EOL; having branch-2 require it is simply making the release 
> process a pain (we aren't building, testing, or releasing on Java 7 JVMs any 
> more, are we?). 
> Staying on Java 7 complicates backporting, and JAR updates for CVEs (hello 
> Guava!) are becoming impossible.
> Proposed: increment javac.version = 1.8






[jira] [Commented] (HADOOP-16219) [JDK8] Set minimum version of Hadoop 2 to JDK 8

2020-06-23 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143034#comment-17143034
 ] 

Ahmed Hussein commented on HADOOP-16219:


Thanks [~busbey]!
Do you have any concerns regarding upgrading Guava on branch-2.10 as well? I 
will open a discussion on common-dev regarding Guava/branch-2.10, but I'd like 
to get your thoughts about it first.

> [JDK8] Set minimum version of Hadoop 2 to JDK 8
> ---
>
> Key: HADOOP-16219
> URL: https://issues.apache.org/jira/browse/HADOOP-16219
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.10.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16219-branch-2-001.patch
>
>
> Java 7 is long EOL; having branch-2 require it is simply making the release 
> process a pain (we aren't building, testing, or releasing on Java 7 JVMs any 
> more, are we?). 
> Staying on Java 7 complicates backporting, and JAR updates for CVEs (hello 
> Guava!) are becoming impossible.
> Proposed: increment javac.version = 1.8






[GitHub] [hadoop] piotte13 closed pull request #2048: HADOOP-17061. Fix broken links in AWS documentation.

2020-06-23 Thread GitBox


piotte13 closed pull request #2048:
URL: https://github.com/apache/hadoop/pull/2048


   






[GitHub] [hadoop] steveloughran commented on a change in pull request #2073: HADOOP-16998 WASB : NativeAzureFsOutputStream#close() throwing java.l…

2020-06-23 Thread GitBox


steveloughran commented on a change in pull request #2073:
URL: https://github.com/apache/hadoop/pull/2073#discussion_r444224945



##########
File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/SyncableDataOutputStream.java
##########
@@ -22,12 +22,13 @@
 import java.io.IOException;
 import java.io.OutputStream;
 
-import org.apache.hadoop.classification.InterfaceAudience;
-import org.apache.hadoop.fs.StreamCapabilities;
-import org.apache.hadoop.fs.Syncable;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import org.apache.hadoop.fs.StreamCapabilities;
+import org.apache.hadoop.fs.Syncable;
+import org.apache.hadoop.classification.InterfaceAudience;

Review comment:
   we are getting so close here. Now move this up to L28.








[GitHub] [hadoop] hadoop-yetus commented on pull request #2091: test PR

2020-06-23 Thread GitBox


hadoop-yetus commented on pull request #2091:
URL: https://github.com/apache/hadoop/pull/2091#issuecomment-648149464


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  26m 25s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   1m  2s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 44s |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 18s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  18m  2s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   2m 49s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m  8s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 44s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 33s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 57s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 27s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 23s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  20m 23s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 50s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  17m 50s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 47s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m  8s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 27s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 34s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  findbugs  |   0m 27s |  hadoop-project has no data from 
findbugs  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 25s |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 24s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 180m 41s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2091 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 91a7f6a2b5d6 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 
11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 03f855e3e7a |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/2/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/2/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/2/testReport/ |
   | Max. process+thread count | 327 (vs. ulimit of 5500) |
[GitHub] [hadoop] steveloughran commented on pull request #1679: HDFS-13934. Multipart uploaders to be created through FileSystem/FileContext.

2020-06-23 Thread GitBox


steveloughran commented on pull request #1679:
URL: https://github.com/apache/hadoop/pull/1679#issuecomment-648147896


   and some minor checkstyles
   ```
   ./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MultipartUploader.java:103:  CompletableFuture abortUploadsUnderPath(Path path) throws IOException;: Line is longer than 80 characters (found 81). [LineLength]
   ./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/MultipartUploaderBuilder.java:31:public interface MultipartUploaderBuilder>: Line is longer than 80 characters (found 112). [LineLength]
   ./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/AbstractMultipartUploader.java:106:   * {@link MultipartUploader#putPart(UploadHandle, int, Path, InputStream, long)}: Line is longer than 80 characters (found 82). [LineLength]
   ./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FileSystemMultipartUploaderBuilder.java:35: MultipartUploaderBuilderImpl {: Line is longer than 80 characters (found 99). [LineLength]
   ./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractMultipartUploaderTest.java:100:  abortUploadQuietly(activeUpload, activeUploadPath);: 'if' child has incorrect indentation level 10, expected level should be 8. [Indentation]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/S3AMultipartUploaderBuilder.java:34: MultipartUploaderBuilderImpl {: Line is longer than 80 characters (found 85). [LineLength]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StoreContextBuilder.java:73:  public StoreContextBuilder setFsURI(final URI fsURI) {:49: 'fsURI' hides a field. [HiddenField]
   ./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/statistics/S3AMultipartUploaderStatisticsImpl.java:1:/*: Missing package-info.java file. [JavadocPackage]
   ```
   
   the javadoc one is fixed in the iostats s3 side patch
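Of the checkstyle warnings above, the HiddenField one is the mechanical pattern worth a sketch: a builder setter whose parameter name shadows the field it assigns. The class below is a hypothetical stand-in, not the real StoreContextBuilder; it shows the common fix of renaming the parameter (keeping the original name and qualifying with `this.` also works, but then the check must be suppressed or configured to allow setters).

```java
import java.net.URI;

// Sketch of the checkstyle "HiddenField" pattern flagged above: in the
// flagged code the setter parameter `fsURI` shadows the field of the same
// name. Renaming the parameter means nothing is hidden at all.
// ContextBuilderSketch is a hypothetical stand-in, not StoreContextBuilder.
class ContextBuilderSketch {
  private URI fsURI;

  // Renamed parameter: no shadowing, and the assignment stays unambiguous.
  ContextBuilderSketch setFsURI(final URI uri) {
    this.fsURI = uri;
    return this;  // builder style: return self for chaining
  }

  URI getFsURI() {
    return fsURI;
  }
}
```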






[GitHub] [hadoop] steveloughran commented on pull request #1679: HDFS-13934. Multipart uploaders to be created through FileSystem/FileContext.

2020-06-23 Thread GitBox


steveloughran commented on pull request #1679:
URL: https://github.com/apache/hadoop/pull/1679#issuecomment-648147293


   clearly unrelated failure
   
   ```
   [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 2.782 s <<< FAILURE! - in org.apache.hadoop.security.TestRaceWhenRelogin
   [ERROR] test(org.apache.hadoop.security.TestRaceWhenRelogin)  Time elapsed: 2.595 s  <<< FAILURE!
   java.lang.AssertionError: tgt is not the first ticket after relogin
       at org.junit.Assert.fail(Assert.java:88)
       at org.junit.Assert.assertTrue(Assert.java:41)
       at org.apache.hadoop.security.TestRaceWhenRelogin.test(TestRaceWhenRelogin.java:160)
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:498)
       at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
       at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   ```






[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2083: HADOOP-17077. S3A delegation token binding to support secondary binding list

2020-06-23 Thread GitBox


hadoop-yetus removed a comment on pull request #2083:
URL: https://github.com/apache/hadoop/pull/2083#issuecomment-646222842


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 32s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  18m 44s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 50s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  1s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 58s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 19s |  hadoop-tools/hadoop-aws: The 
patch generated 5 new + 8 unchanged - 0 fixed = 13 total (was 8)  |
   | +1 :green_heart: |  mvnsite  |   0m 31s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 38s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  the patch passed  |
   | -1 :x: |  findbugs  |   1m  4s |  hadoop-tools/hadoop-aws generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 20s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  57m 21s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Unread field:SecondaryDTBinding.java:[line 55] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2083 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b397fac21284 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / d50e93ce7b6 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/1/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/1/testReport/ |
   | Max. process+thread count | 437 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[GitHub] [hadoop] Hexiaoqiao commented on pull request #2047: HDFS-15383. RBF: Add support for router delegation token without watch

2020-06-23 Thread GitBox


Hexiaoqiao commented on pull request #2047:
URL: https://github.com/apache/hadoop/pull/2047#issuecomment-648125033


   @fengnanli @goiri I agree that we can push this improvement ahead and file 
another JIRA for other modules.






[GitHub] [hadoop] hadoop-yetus commented on pull request #2088: HDFS-15427. Merged ListStatus with Fallback target filesystem and InternalDirViewFS.

2020-06-23 Thread GitBox


hadoop-yetus commented on pull request #2088:
URL: https://github.com/apache/hadoop/pull/2088#issuecomment-648115094


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m  3s |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m  7s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |  21m 40s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   3m 26s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m  2s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  26m 36s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 42s |  hadoop-common in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 42s |  hadoop-hdfs in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   4m 16s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   7m 20s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 59s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |  19m 59s |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 23s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |  17m 23s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 47s |  root: The patch generated 1 new 
+ 104 unchanged - 1 fixed = 105 total (was 105)  |
   | +1 :green_heart: |  mvnsite  |   2m 44s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  1s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 49s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 40s |  hadoop-common in the patch failed with 
JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | -1 :x: |  javadoc  |   0m 46s |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   1m 46s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   5m 52s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 25s |  hadoop-common in the patch passed. 
 |
   | -1 :x: |  unit  |  91m 14s |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 289m 27s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestReconstructStripedFile |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRetryCacheMetrics |
   |   | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate |
   |   | hadoop.hdfs.TestStripedFileAppend |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2088/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2088 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 4c23838b0bc2 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 201d734af39 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2083: HADOOP-17077. S3A delegation token binding to support secondary binding list

2020-06-23 Thread GitBox


hadoop-yetus commented on pull request #2083:
URL: https://github.com/apache/hadoop/pull/2083#issuecomment-648073271


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 34s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m 51s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m  0s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 31s |  hadoop-aws in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   1m  5s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  3s |  trunk passed  |
   | -0 :warning: |  patch  |   1m 24s |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 35s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 28s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 18s |  hadoop-tools/hadoop-aws: The 
patch generated 6 new + 8 unchanged - 1 fixed = 14 total (was 9)  |
   | +1 :green_heart: |  mvnsite  |   0m 32s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  13m 51s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 26s |  hadoop-aws in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | -1 :x: |  findbugs  |   1m  3s |  hadoop-tools/hadoop-aws generated 1 new 
+ 0 unchanged - 0 fixed = 1 total (was 0)  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 21s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  61m 50s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-aws |
   |  |  Unread field:SecondaryDelegationToken.java:[line 122] |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2083 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 6511d2c11786 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / fa14e4bc001 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/2/artifact/out/branch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/2/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2083/2/artifact/out/patch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | findbugs | 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2038: HADOOP-17022 Tune S3AFileSystem.listFiles() api.

2020-06-23 Thread GitBox


hadoop-yetus removed a comment on pull request #2038:
URL: https://github.com/apache/hadoop/pull/2038#issuecomment-635287256


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  27m  4s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  The patch appears to include 
1 new or modified test files.  |
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  21m 43s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  trunk passed  |
   | +1 :green_heart: |  checkstyle  |   0m 23s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 20s |  branch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  trunk passed  |
   | +0 :ok: |  spotbugs  |   1m  1s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 58s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 33s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  the patch passed  |
   | +1 :green_heart: |  javac  |   0m 26s |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 17s |  hadoop-tools/hadoop-aws: The 
patch generated 1 new + 10 unchanged - 1 fixed = 11 total (was 11)  |
   | +1 :green_heart: |  mvnsite  |   0m 30s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  9s |  patch has no errors when 
building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  findbugs  |   1m  3s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 15s |  hadoop-aws in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 27s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  89m 24s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2038 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3c7840c09bee 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 9b38be43c63 |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/1/testReport/ |
   | Max. process+thread count | 428 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2038/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2091: test PR

2020-06-23 Thread GitBox


hadoop-yetus commented on pull request #2091:
URL: https://github.com/apache/hadoop/pull/2091#issuecomment-648037455


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |  26m 14s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   ||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  23m 14s |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 46s |  trunk passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  compile  |   0m 30s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 53s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 25s |  hadoop-azure in trunk failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  trunk passed with JDK Private 
Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +0 :ok: |  spotbugs  |   0m 53s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 51s |  trunk passed  |
   ||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  the patch passed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04  |
   | +1 :green_heart: |  javac  |   0m 27s |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  javac  |   0m 22s |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 24s |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 29s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 22s |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  the patch passed with JDK 
Private Build-1.8.0_252-8u252-b09-1~18.04-b09  |
   | +1 :green_heart: |  findbugs  |   0m 53s |  the patch passed  |
   ||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 19s |  hadoop-azure in the patch passed.  
|
   | +1 :green_heart: |  asflicense  |   0m 28s |  The patch does not generate 
ASF License warnings.  |
   |  |   |  92m 45s |   |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2091 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ddd84b65f91e 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 
10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 7c02d1889bb |
   | Default Java | Private Build-1.8.0_252-8u252-b09-1~18.04-b09 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09 
|
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/testReport/ |
   | Max. process+thread count | 308 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



[jira] [Commented] (HADOOP-17068) client fails forever when namenode ipaddr changed

2020-06-23 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17142781#comment-17142781
 ] 

Hudson commented on HADOOP-17068:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18375 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/18375/])
HADOOP-17068. Client fails forever when namenode ipaddr changed. (hexiaoqiao: 
rev fa14e4bc001e28d9912e8d985d09bab75aedb87c)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java


> client fails forever when namenode ipaddr changed
> -
>
> Key: HADOOP-17068
> URL: https://issues.apache.org/jira/browse/HADOOP-17068
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-17068.001.patch, HDFS-15390.01.patch
>
>
> For a machine replacement, I replaced my standby namenode with a new ipaddr
> and kept the same hostname, and updated the client's hosts file so the name
> resolves correctly.
> When I run a failover to transition to the new namenode (let's say nn2), the
> client fails to read or write forever until it is restarted.
> That leaves the YARN NodeManagers in a sick state: even new tasks hit this
> exception, until all NodeManagers are restarted.
>  
> {code:java}
> 20/06/02 15:12:25 WARN ipc.Client: Address change detected. Old: 
> nn2-192-168-1-100/192.168.1.100:9000 New: nn2-192-168-1-100/192.168.1.200:9000
> 20/06/02 15:12:25 DEBUG ipc.Client: closing ipc connection to 
> nn2-192-168-1-100/192.168.1.200:9000: Connection refused
> java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:608)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1517)
> at org.apache.hadoop.ipc.Client.call(Client.java:1440)
> at org.apache.hadoop.ipc.Client.call(Client.java:1401)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
> at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
> at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:193)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> {code}
>  
> We can see the client logs {{Address change detected}}, but it still fails. I
> found that's because when {{updateAddress()}} returns true, 
> {{handleConnectionFailure()}} throws an exception that breaks the next retry 
> with the right ipaddr.
> Client.java: setupConnection()
> {code:java}
> } catch (ConnectTimeoutException toe) {
>   /* Check for an address change and update the local reference.
>* Reset the failure counter if the address was changed
>*/
>   if (updateAddress()) {
> timeoutFailures = ioFailures = 0;
>   }
>   handleConnectionTimeout(timeoutFailures++,
>   maxRetriesOnSocketTimeouts, toe);
> } catch (IOException ie) {
>   if (updateAddress()) {
> timeoutFailures = ioFailures = 0;
>   }
> // because the namenode ip changed in updateAddress(), the old namenode 
> // ipaddress cannot be accessed now.
> // handleConnectionFailure will throw an exception, so the next retry never 
> // has a chance to use the right server updated in updateAddress()
>   handleConnectionFailure(ioFailures++, ie);
> }
> {code}
>  
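The fix direction described above — when `updateAddress()` reports a change, retry with the new address instead of letting the failure handler abort — can be sketched as a minimal, self-contained simulation. This is not the actual Hadoop `Client` code or the HADOOP-17068 patch; the class and helper names here are hypothetical stand-ins.

```java
import java.io.IOException;
import java.net.InetSocketAddress;

// Minimal simulation of the retry fix: when the remote address is found to
// have changed, reset the failure counter and retry against the new address
// instead of escalating the failure and giving up.
public class RetryOnAddressChange {

    // Stand-in for the resolved server address; the real client refreshes
    // this from DNS inside updateAddress().
    private InetSocketAddress current;
    private final InetSocketAddress updated;
    private boolean changePending;
    int ioFailures = 0;

    RetryOnAddressChange(InetSocketAddress old, InetSocketAddress updated) {
        this.current = old;
        this.updated = updated;
        this.changePending = true;
    }

    // Returns true if the address changed since the last check (one-shot here).
    private boolean updateAddress() {
        if (changePending) {
            current = updated;
            changePending = false;
            return true;
        }
        return false;
    }

    // Pretend connect: only the updated address is reachable.
    private void connect(InetSocketAddress addr) throws IOException {
        if (!addr.equals(updated)) {
            throw new IOException("Connection refused: " + addr);
        }
    }

    // Connection setup loop with the fix applied.
    public InetSocketAddress setupConnection(int maxRetries) throws IOException {
        while (true) {
            try {
                connect(current);
                return current;
            } catch (IOException ie) {
                if (updateAddress()) {
                    // Address changed: reset the counter and retry right away
                    // with the new address, skipping the failure handler that
                    // would otherwise throw and break the retry chain.
                    ioFailures = 0;
                    continue;
                }
                if (++ioFailures > maxRetries) {
                    throw ie;
                }
            }
        }
    }

    public static void main(String[] args) throws IOException {
        InetSocketAddress old = new InetSocketAddress("192.168.1.100", 9000);
        InetSocketAddress neu = new InetSocketAddress("192.168.1.200", 9000);
        RetryOnAddressChange c = new RetryOnAddressChange(old, neu);
        System.out.println("connected to "
            + c.setupConnection(3).getAddress().getHostAddress());
    }
}
```

With the buggy ordering (failure handler called before the retry loop can use the refreshed address), the first refusal would terminate the attempt; here the loop reconnects to the updated address on the very next iteration.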



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Updated] (HADOOP-17068) client fails forever when namenode ipaddr changed

2020-06-23 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HADOOP-17068:
-
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk.
Thanks [~seanlook] for your contributions!
Thanks [~ayushtkn] for your reviews!

> client fails forever when namenode ipaddr changed
> -
>
> Key: HADOOP-17068
> URL: https://issues.apache.org/jira/browse/HADOOP-17068
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: hdfs-client
>Reporter: Sean Chow
>Assignee: Sean Chow
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-17068.001.patch, HDFS-15390.01.patch
>
>
> For a machine replacement, I replaced my standby namenode with a new ipaddr
> and kept the same hostname, and updated the client's hosts file so the name
> resolves correctly.
> When I run a failover to transition to the new namenode (let's say nn2), the
> client fails to read or write forever until it is restarted.
> That leaves the YARN NodeManagers in a sick state: even new tasks hit this
> exception, until all NodeManagers are restarted.
>  
> {code:java}
> 20/06/02 15:12:25 WARN ipc.Client: Address change detected. Old: 
> nn2-192-168-1-100/192.168.1.100:9000 New: nn2-192-168-1-100/192.168.1.200:9000
> 20/06/02 15:12:25 DEBUG ipc.Client: closing ipc connection to 
> nn2-192-168-1-100/192.168.1.200:9000: Connection refused
> java.net.ConnectException: Connection refused
> at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> at 
> sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
> at 
> org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:608)
> at 
> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
> at 
> org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:368)
> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1517)
> at org.apache.hadoop.ipc.Client.call(Client.java:1440)
> at org.apache.hadoop.ipc.Client.call(Client.java:1401)
> at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
> at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
> at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
> at sun.reflect.GeneratedMethodAccessor8.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:193)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> {code}
>  
> We can see the client logs {{Address change detected}}, but it still fails. I
> found that's because when {{updateAddress()}} returns true, 
> {{handleConnectionFailure()}} throws an exception that breaks the next retry 
> with the right ipaddr.
> Client.java: setupConnection()
> {code:java}
> } catch (ConnectTimeoutException toe) {
>   /* Check for an address change and update the local reference.
>* Reset the failure counter if the address was changed
>*/
>   if (updateAddress()) {
> timeoutFailures = ioFailures = 0;
>   }
>   handleConnectionTimeout(timeoutFailures++,
>   maxRetriesOnSocketTimeouts, toe);
> } catch (IOException ie) {
>   if (updateAddress()) {
> timeoutFailures = ioFailures = 0;
>   }
> // because the namenode ip changed in updateAddress(), the old namenode 
> // ipaddress cannot be accessed now.
> // handleConnectionFailure will throw an exception, so the next retry never 
> // has a chance to use the right server updated in updateAddress()
>   handleConnectionFailure(ioFailures++, ie);
> }
> {code}
>  






[GitHub] [hadoop] cjn082030 commented on pull request #1808: MAPREDUCE-7258. HistoryServerRest.html#Task_Counters_API, modify the jobTaskCounters's itemName from taskcounterGroup to taskCounterGroup

2020-06-23 Thread GitBox


cjn082030 commented on pull request #1808:
URL: https://github.com/apache/hadoop/pull/1808#issuecomment-648001885


   Hi @aajisaka,
   I found some minor problems with the MapReduce documentation and want to 
fix them one by one.
   Regarding the fix, I see two options:
   1. Create one PR that fixes all of the document issues
   2. Create one PR per document, each fixing only that document
   Which approach do you recommend?
   
   Thank you






[GitHub] [hadoop] umamaheswararao merged pull request #2088: HDFS-15427. Merged ListStatus with Fallback target filesystem and InternalDirViewFS.

2020-06-23 Thread GitBox


umamaheswararao merged pull request #2088:
URL: https://github.com/apache/hadoop/pull/2088


   






[GitHub] [hadoop] ishaniahuja opened a new pull request #2091: test PR

2020-06-23 Thread GitBox


ishaniahuja opened a new pull request #2091:
URL: https://github.com/apache/hadoop/pull/2091


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   






[GitHub] [hadoop] ishaniahuja closed pull request #2087: test PR, for some sample runs. - appendblob

2020-06-23 Thread GitBox


ishaniahuja closed pull request #2087:
URL: https://github.com/apache/hadoop/pull/2087


   






[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2088: HDFS-15427. Merged ListStatus with Fallback target filesystem and InternalDirViewFS.

2020-06-23 Thread GitBox


umamaheswararao commented on a change in pull request #2088:
URL: https://github.com/apache/hadoop/pull/2088#discussion_r444024934



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
##
@@ -359,4 +361,249 @@ public void 
testListingWithFallbackLinkWithSameMountDirectories()
   assertTrue(vfs.getFileStatus(childDir).isDirectory());
 }
   }
+
+  /**
+   * Tests ListStatus on non-link parent with fallback configured.
+   * Example:
+   *   Fallback path tree:                  Mount path tree:
+   *     /user1/hive                          /user1/hive
+   *       warehouse  (-rwxr--r--)              warehouse  (-r-xr--r--)
+   *         partition-0                          partition-0
+   *       warehouse1
+   *
+   *   ls /user1/hive
+   *     viewfs://default/user1/hive/warehouse (-rwxr--r--)
+   *     viewfs://default/user1/hive/warehouse1
+   */
+  @Test
+  public void testListingWithFallbackLinkWithSameMountDirectoryTree()
+  throws Exception {
+Configuration conf = new Configuration();
+conf.setBoolean(Constants.CONFIG_VIEWFS_MOUNT_LINKS_AS_SYMLINKS, false);
+ConfigUtil.addLink(conf, "/user1/hive/warehouse/partition-0",
+new Path(targetTestRoot.toString()).toUri());
+// Creating multiple directories path under the fallback directory.
+// "/user1/hive/warehouse/partition-0" directory already exists as
+// configured mount point.
+Path dir1 = new Path(targetTestRoot,
+"fallbackDir/user1/hive/warehouse/partition-0");
+Path dir2 = new Path(targetTestRoot, "fallbackDir/user1/hive/warehouse1");
+fsTarget.mkdirs(dir1);
+fsTarget.mkdirs(dir2);
+fsTarget.setPermission(new Path(targetTestRoot, "fallbackDir/user1/hive/"),
+FsPermission.valueOf("-rwxr--r--"));
+URI viewFsUri = new URI(FsConstants.VIEWFS_SCHEME,
+Constants.CONFIG_VIEWFS_DEFAULT_MOUNT_TABLE, "/", null, null);
+
+HashSet beforeFallback = new HashSet<>();
+try (FileSystem vfs = FileSystem.get(viewFsUri, conf)) {
+  for (FileStatus stat : vfs
+  .listStatus(new Path(viewFsUri.toString(), "/user1/hive/"))) {
+beforeFallback.add(stat.getPath());
+  }
+}
+ConfigUtil
+.addLinkFallback(conf, new Path(targetTestRoot, 
"fallbackDir").toUri());
+
+try (FileSystem vfs = FileSystem.get(viewFsUri, conf)) {
+  HashSet afterFallback = new HashSet<>();
+  for (FileStatus stat : vfs
+  .listStatus(new Path(viewFsUri.toString(), "/user1/hive/"))) {
+afterFallback.add(stat.getPath());
+if (dir1.getName().equals(stat.getPath().getName())) {
+  // make sure fallback dir listed out with correct permissions, but 
not
+  // with link permissions.
+  assertEquals(FsPermission.valueOf("-rwxr--r--"),
+  stat.getPermission());
+}
+  }
+  //
+  //viewfs://default/user1/hive/warehouse
+  afterFallback.removeAll(beforeFallback);
+  assertTrue("The same directory name in fallback link should be shaded",

Review comment:
   done.








[GitHub] [hadoop] umamaheswararao commented on a change in pull request #2088: HDFS-15427. Merged ListStatus with Fallback target filesystem and InternalDirViewFS.

2020-06-23 Thread GitBox


umamaheswararao commented on a change in pull request #2088:
URL: https://github.com/apache/hadoop/pull/2088#discussion_r444024794



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##
@@ -1258,63 +1261,72 @@ public FileStatus getFileStatus(Path f) throws 
IOException {
 FileStatus status =
 ((ChRootedFileSystem)link.getTargetFileSystem())
 .getMyFs().getFileStatus(new Path(linkedPath));
-result[i++] = new FileStatus(status.getLen(), status.isDirectory(),
-status.getReplication(), status.getBlockSize(),
-status.getModificationTime(), status.getAccessTime(),
-status.getPermission(), status.getOwner(), status.getGroup(),
-null, path);
+linkStatuses.add(
+new FileStatus(status.getLen(), status.isDirectory(),
+status.getReplication(), status.getBlockSize(),
+status.getModificationTime(), status.getAccessTime(),
+status.getPermission(), status.getOwner(),
+status.getGroup(), null, path));
   } catch (FileNotFoundException ex) {
 LOG.warn("Cannot get one of the children's(" + path
 + ")  target path(" + link.getTargetFileSystem().getUri()
 + ") file status.", ex);
 throw ex;
   }
 } else {
-  result[i++] =
+  internalDirStatuses.add(
   new FileStatus(0, true, 0, 0, creationTime, creationTime,
   PERMISSION_555, ugi.getShortUserName(),
-  ugi.getPrimaryGroupName(), path);
+  ugi.getPrimaryGroupName(), path));
 }
   }
+  FileStatus[] internalDirStatusesMergedWithFallBack = internalDirStatuses
+  .toArray(new FileStatus[internalDirStatuses.size()]);
   if (fallbackStatuses.length > 0) {
-return consolidateFileStatuses(fallbackStatuses, result);
-  } else {
-return result;
+internalDirStatusesMergedWithFallBack =
+merge(fallbackStatuses, internalDirStatusesMergedWithFallBack);
   }
+  // Links will always have precedence than internalDir or fallback paths.
+  return merge(linkStatuses.toArray(new FileStatus[linkStatuses.size()]),
+  internalDirStatusesMergedWithFallBack);
 }
 
-private FileStatus[] consolidateFileStatuses(FileStatus[] fallbackStatuses,
-FileStatus[] mountPointStatuses) {
+private FileStatus[] merge(FileStatus[] toStatuses,
+FileStatus[] fromStatuses) {
   ArrayList result = new ArrayList<>();
   Set pathSet = new HashSet<>();
-  for (FileStatus status : mountPointStatuses) {
+  for (FileStatus status : toStatuses) {
 result.add(status);
 pathSet.add(status.getPath().getName());
   }
-  for (FileStatus status : fallbackStatuses) {
+  for (FileStatus status : fromStatuses) {
 if (!pathSet.contains(status.getPath().getName())) {
   result.add(status);
 }
   }
-  return result.toArray(new FileStatus[0]);
+  return result.toArray(new FileStatus[result.size()]);
 }
 
 private FileStatus[] listStatusForFallbackLink() throws IOException {
-  if (theInternalDir.isRoot() &&
-  theInternalDir.getFallbackLink() != null) {
-FileSystem linkedFs =
-theInternalDir.getFallbackLink().getTargetFileSystem();
-// Fallback link is only applicable for root
-FileStatus[] statuses = linkedFs.listStatus(new Path("/"));
-for (FileStatus status : statuses) {
-  // Fix the path back to viewfs scheme
-  status.setPath(
-  new Path(myUri.toString(), status.getPath().getName()));
+  if (this.fsState.getRootFallbackLink() != null) {
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path p = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+if (theInternalDir.isRoot() || linkedFallbackFs.exists(p)) {
+  // Fallback link is only applicable for root

Review comment:
   Thanks, updated the javadoc as well.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
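
The `merge` helper in the diff above is a simple name-keyed union: every entry of the higher-precedence array is kept, and an entry from the lower-precedence array is appended only when its last path component is not already present. A minimal standalone sketch of that idea, using plain strings in place of `FileStatus` (the class and method names here are illustrative, not from the patch):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MergeSketch {
  // Entries in "to" win; an entry from "from" is appended only when no
  // entry with the same name is already present, mirroring the
  // name-keyed merge in the reviewed diff.
  static List<String> merge(List<String> to, List<String> from) {
    List<String> result = new ArrayList<>(to);
    Set<String> names = new HashSet<>(to);
    for (String name : from) {
      if (!names.contains(name)) {
        result.add(name);
      }
    }
    return result;
  }

  public static void main(String[] args) {
    // "b" from the lower-precedence list is shadowed by the existing "b".
    System.out.println(merge(List.of("a", "b"), List.of("b", "c")));
    // prints [a, b, c]
  }
}
```

Applied twice, as the patch does, this yields the ordering links > internalDir > fallback: the internalDir statuses are first merged with the fallback statuses, and the link statuses are then merged on top.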



[GitHub] [hadoop] ayushtkn commented on a change in pull request #2088: HDFS-15427. Merged ListStatus with Fallback target filesystem and InternalDirViewFS.

2020-06-23 Thread GitBox


ayushtkn commented on a change in pull request #2088:
URL: https://github.com/apache/hadoop/pull/2088#discussion_r443999571



##
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##
@@ -1258,63 +1261,72 @@ public FileStatus getFileStatus(Path f) throws IOException {
 FileStatus status =
 ((ChRootedFileSystem)link.getTargetFileSystem())
 .getMyFs().getFileStatus(new Path(linkedPath));
-result[i++] = new FileStatus(status.getLen(), status.isDirectory(),
-status.getReplication(), status.getBlockSize(),
-status.getModificationTime(), status.getAccessTime(),
-status.getPermission(), status.getOwner(), status.getGroup(),
-null, path);
+linkStatuses.add(
+new FileStatus(status.getLen(), status.isDirectory(),
+status.getReplication(), status.getBlockSize(),
+status.getModificationTime(), status.getAccessTime(),
+status.getPermission(), status.getOwner(),
+status.getGroup(), null, path));
   } catch (FileNotFoundException ex) {
 LOG.warn("Cannot get one of the children's(" + path
 + ")  target path(" + link.getTargetFileSystem().getUri()
 + ") file status.", ex);
 throw ex;
   }
 } else {
-  result[i++] =
+  internalDirStatuses.add(
   new FileStatus(0, true, 0, 0, creationTime, creationTime,
   PERMISSION_555, ugi.getShortUserName(),
-  ugi.getPrimaryGroupName(), path);
+  ugi.getPrimaryGroupName(), path));
 }
   }
+  FileStatus[] internalDirStatusesMergedWithFallBack = internalDirStatuses
+  .toArray(new FileStatus[internalDirStatuses.size()]);
   if (fallbackStatuses.length > 0) {
-return consolidateFileStatuses(fallbackStatuses, result);
-  } else {
-return result;
+internalDirStatusesMergedWithFallBack =
+merge(fallbackStatuses, internalDirStatusesMergedWithFallBack);
   }
+  // Links will always take precedence over internalDir or fallback paths.
+  return merge(linkStatuses.toArray(new FileStatus[linkStatuses.size()]),
+  internalDirStatusesMergedWithFallBack);
 }
 
-private FileStatus[] consolidateFileStatuses(FileStatus[] fallbackStatuses,
-FileStatus[] mountPointStatuses) {
+private FileStatus[] merge(FileStatus[] toStatuses,
+FileStatus[] fromStatuses) {
   ArrayList<FileStatus> result = new ArrayList<>();
   Set<String> pathSet = new HashSet<>();
-  for (FileStatus status : mountPointStatuses) {
+  for (FileStatus status : toStatuses) {
 result.add(status);
 pathSet.add(status.getPath().getName());
   }
-  for (FileStatus status : fallbackStatuses) {
+  for (FileStatus status : fromStatuses) {
 if (!pathSet.contains(status.getPath().getName())) {
   result.add(status);
 }
   }
-  return result.toArray(new FileStatus[0]);
+  return result.toArray(new FileStatus[result.size()]);
 }
 
 private FileStatus[] listStatusForFallbackLink() throws IOException {
-  if (theInternalDir.isRoot() &&
-  theInternalDir.getFallbackLink() != null) {
-FileSystem linkedFs =
-theInternalDir.getFallbackLink().getTargetFileSystem();
-// Fallback link is only applicable for root
-FileStatus[] statuses = linkedFs.listStatus(new Path("/"));
-for (FileStatus status : statuses) {
-  // Fix the path back to viewfs scheme
-  status.setPath(
-  new Path(myUri.toString(), status.getPath().getName()));
+  if (this.fsState.getRootFallbackLink() != null) {
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path p = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+if (theInternalDir.isRoot() || linkedFallbackFs.exists(p)) {
+  // Fallback link is only applicable for root

Review comment:
   This comment line can be removed now? Now it isn't just root, right?







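The `-` lines in `listStatusForFallbackLink` above rewrote each child listed from the fallback filesystem back onto the viewfs scheme, keeping only the last path component (`new Path(myUri.toString(), status.getPath().getName())`). A simplified sketch of that rewrite using `java.net.URI` instead of the Hadoop `Path` API (class and method names here are illustrative):

```java
import java.net.URI;

public class FallbackPathRewriteSketch {
  // Keep only the last path component of the target child and re-anchor
  // it under the viewfs URI, as the removed status.setPath(...) lines did.
  static URI rewrite(URI viewFsUri, URI targetChild) {
    String path = targetChild.getPath();
    String name = path.substring(path.lastIndexOf('/') + 1);
    return viewFsUri.resolve(name);
  }

  public static void main(String[] args) {
    System.out.println(rewrite(
        URI.create("viewfs://cluster/"),
        URI.create("hdfs://nn1/fallback/dir1")));
    // prints viewfs://cluster/dir1
  }
}
```

The result carries the viewfs scheme and authority, so callers see a mount-consistent path regardless of which target filesystem actually served the listing.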

[GitHub] [hadoop] ayushtkn commented on a change in pull request #2088: HDFS-15427. Merged ListStatus with Fallback target filesystem and InternalDirViewFS.

2020-06-23 Thread GitBox


ayushtkn commented on a change in pull request #2088:
URL: https://github.com/apache/hadoop/pull/2088#discussion_r443999571



##
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java
##
@@ -1258,63 +1261,72 @@ public FileStatus getFileStatus(Path f) throws IOException {
 FileStatus status =
 ((ChRootedFileSystem)link.getTargetFileSystem())
 .getMyFs().getFileStatus(new Path(linkedPath));
-result[i++] = new FileStatus(status.getLen(), status.isDirectory(),
-status.getReplication(), status.getBlockSize(),
-status.getModificationTime(), status.getAccessTime(),
-status.getPermission(), status.getOwner(), status.getGroup(),
-null, path);
+linkStatuses.add(
+new FileStatus(status.getLen(), status.isDirectory(),
+status.getReplication(), status.getBlockSize(),
+status.getModificationTime(), status.getAccessTime(),
+status.getPermission(), status.getOwner(),
+status.getGroup(), null, path));
   } catch (FileNotFoundException ex) {
 LOG.warn("Cannot get one of the children's(" + path
 + ")  target path(" + link.getTargetFileSystem().getUri()
 + ") file status.", ex);
 throw ex;
   }
 } else {
-  result[i++] =
+  internalDirStatuses.add(
   new FileStatus(0, true, 0, 0, creationTime, creationTime,
   PERMISSION_555, ugi.getShortUserName(),
-  ugi.getPrimaryGroupName(), path);
+  ugi.getPrimaryGroupName(), path));
 }
   }
+  FileStatus[] internalDirStatusesMergedWithFallBack = internalDirStatuses
+  .toArray(new FileStatus[internalDirStatuses.size()]);
   if (fallbackStatuses.length > 0) {
-return consolidateFileStatuses(fallbackStatuses, result);
-  } else {
-return result;
+internalDirStatusesMergedWithFallBack =
+merge(fallbackStatuses, internalDirStatusesMergedWithFallBack);
   }
+  // Links will always take precedence over internalDir or fallback paths.
+  return merge(linkStatuses.toArray(new FileStatus[linkStatuses.size()]),
+  internalDirStatusesMergedWithFallBack);
 }
 
-private FileStatus[] consolidateFileStatuses(FileStatus[] fallbackStatuses,
-FileStatus[] mountPointStatuses) {
+private FileStatus[] merge(FileStatus[] toStatuses,
+FileStatus[] fromStatuses) {
   ArrayList<FileStatus> result = new ArrayList<>();
   Set<String> pathSet = new HashSet<>();
-  for (FileStatus status : mountPointStatuses) {
+  for (FileStatus status : toStatuses) {
 result.add(status);
 pathSet.add(status.getPath().getName());
   }
-  for (FileStatus status : fallbackStatuses) {
+  for (FileStatus status : fromStatuses) {
 if (!pathSet.contains(status.getPath().getName())) {
   result.add(status);
 }
   }
-  return result.toArray(new FileStatus[0]);
+  return result.toArray(new FileStatus[result.size()]);
 }
 
 private FileStatus[] listStatusForFallbackLink() throws IOException {
-  if (theInternalDir.isRoot() &&
-  theInternalDir.getFallbackLink() != null) {
-FileSystem linkedFs =
-theInternalDir.getFallbackLink().getTargetFileSystem();
-// Fallback link is only applicable for root
-FileStatus[] statuses = linkedFs.listStatus(new Path("/"));
-for (FileStatus status : statuses) {
-  // Fix the path back to viewfs scheme
-  status.setPath(
-  new Path(myUri.toString(), status.getPath().getName()));
+  if (this.fsState.getRootFallbackLink() != null) {
+FileSystem linkedFallbackFs =
+this.fsState.getRootFallbackLink().getTargetFileSystem();
+Path p = Path.getPathWithoutSchemeAndAuthority(
+new Path(theInternalDir.fullPath));
+if (theInternalDir.isRoot() || linkedFallbackFs.exists(p)) {
+  // Fallback link is only applicable for root

Review comment:
   This comment line can be removed now? No, it isn't just root anymore.

##
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/fs/viewfs/TestViewFileSystemLinkFallback.java
##
@@ -359,4 +361,249 @@ public void testListingWithFallbackLinkWithSameMountDirectories()
   assertTrue(vfs.getFileStatus(childDir).isDirectory());
 }
   }
+
+  /**
+   * Tests ListStatus on non-link parent with fallback configured.
+   * =Example.==
+   * = Fallback path tree === Mount Path Tree ==
+   * ===
+   * *