[GitHub] [hadoop] hadoop-yetus commented on pull request #2350: HADOOP-17292. Using lz4-java in Lz4Codec

2020-10-07 Thread GitBox


hadoop-yetus commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-705330740


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:---:|
   | +0 :ok: |  reexec  |   2m 10s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 6 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   6m 18s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  25m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  21m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 25s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 44s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 58s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 36s |  |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   |||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 33s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 47s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  cc  |  23m 56s | 
[/diff-compile-cc-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/8/artifact/out/diff-compile-cc-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 8 new + 155 unchanged - 
8 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  23m 56s |  |  the patch passed  |
   | -1 :x: |  javac  |  23m 56s | 
[/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/8/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 2 new + 2051 unchanged - 
1 fixed = 2053 total (was 2052)  |
   | +1 :green_heart: |  compile  |  21m 13s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  cc  |  21m 13s | 
[/diff-compile-cc-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/8/artifact/out/diff-compile-cc-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 13 new + 150 
unchanged - 13 fixed = 163 total (was 163)  |
   | +1 :green_heart: |  golang  |  21m 13s |  |  the patch passed  |
   | -1 :x: |  javac  |  21m 13s | 
[/diff-compile-javac-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/8/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 1 new + 1947 
unchanged - 0 fixed = 1948 total (was 1947)  |
   | -0 :warning: |  checkstyle  |   3m 26s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2350/8/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 2 new + 132 unchanged - 1 fixed = 134 total (was 
133)  |
   | +1 :green_heart: |  mvnsite  |   3m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 35s |  |  There were no new 
shelldocs issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  4s |  |  The patch 

[jira] [Work logged] (HADOOP-17292) Using lz4-java in Lz4Codec

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17292?focusedWorklogId=497091&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-497091
 ]

ASF GitHub Bot logged work on HADOOP-17292:
---

Author: ASF GitHub Bot
Created on: 08/Oct/20 05:02
Start Date: 08/Oct/20 05:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-705330740



[GitHub] [hadoop] ferhui commented on a change in pull request #2363: HDFS-13293. RBF: The RouterRPCServer should transfer client IP via CallerContext to NamenodeRpcServer

2020-10-07 Thread GitBox


ferhui commented on a change in pull request #2363:
URL: https://github.com/apache/hadoop/pull/2363#discussion_r501443141



##########
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
##########
@@ -519,6 +525,20 @@ private Object invokeMethod(
 }
   }
 
+  /**
+   * For Tracking which is the actual client address.
+   * It adds key/value (clientIp/"ip") pair to the caller context.
+   */
+  private void appendClientIpToCallerContext() {
+final CallerContext ctx = CallerContext.getCurrent();
+String origContext = ctx == null ? null : ctx.getContext();
+byte[] origSignature = ctx == null ? null : ctx.getSignature();
+CallerContext.setCurrent(
+new CallerContext.Builder(origContext, clientConfiguration)

Review comment:
   OK, fixed, please review again, thanks!
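   The pattern in the quoted diff — preserve whatever caller context already exists and append a clientIp field to it — can be sketched outside Hadoop. The function name and the comma separator default below are illustrative assumptions, not part of the patch:

```python
def append_client_ip(context, client_ip, separator=","):
    """Mimic the CallerContext.Builder pattern from the diff above:
    keep any existing context string and append a clientIp field."""
    field = "clientIp:" + client_ip
    # No prior context: the clientIp field becomes the whole context.
    # Existing context: it is preserved and the new field is appended.
    return field if not context else context + separator + field

print(append_client_ip(None, "192.168.1.5"))         # clientIp:192.168.1.5
print(append_client_ip("routerUser", "192.168.1.5"))  # routerUser,clientIp:192.168.1.5
```

   This is the behaviour the reviewers are discussing: the original context (and signature) must survive so downstream auditing still sees the router's caller, with the real client IP appended rather than overwritten.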





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-13230) S3A to optionally retain directory markers

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13230?focusedWorklogId=497076&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-497076
 ]

ASF GitHub Bot logged work on HADOOP-13230:
---

Author: ASF GitHub Bot
Created on: 08/Oct/20 04:33
Start Date: 08/Oct/20 04:33
Worklog Time Spent: 10m 
  Work Description: liuml07 commented on pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#issuecomment-705322840


   Thanks Steve. Looking forward to a Hadoop 2 backport!





Issue Time Tracking
---

Worklog Id: (was: 497076)
Time Spent: 20m  (was: 10m)

> S3A to optionally retain directory markers
> --
>
> Key: HADOOP-13230
> URL: https://issues.apache.org/jira/browse/HADOOP-13230
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Aaron Fabbri
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
> Attachments: 2020-02-Fixing the S3A directory marker problem.pdf
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Users of s3a may not realize that, in some cases, it does not interoperate 
> well with other s3 tools, such as the AWS CLI.  (See HIVE-13778, IMPALA-3558).
> Specifically, if a user:
> - Creates an empty directory with hadoop fs -mkdir s3a://bucket/path
> - Copies data into that directory via another tool, i.e. aws cli.
> - Tries to access the data in that directory with any Hadoop software.
> Then the last step fails because the fake empty directory blob that s3a wrote 
> in the first step, causes s3a (listStatus() etc.) to continue to treat that 
> directory as empty, even though the second step was supposed to populate the 
> directory with data.
> I wanted to document this fact for users. We may mark this as not-fix, "by 
> design".. May also be interesting to brainstorm solutions and/or a config 
> option to change the behavior if folks care.
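> The failure mode described above can be modelled with a toy object store. This is a deliberately naive sketch of the reported behaviour under the stated assumptions (a zero-byte "fake directory" marker blob, and a client that trusts it), not s3a's actual implementation:

```python
store = {}

def mkdir(path):
    # hadoop fs -mkdir: writes a zero-byte fake-directory marker blob
    store[path + "/"] = b""

def put(path, data):
    # another tool (e.g. aws cli) copies data in without touching the marker
    store[path] = data

def naive_list(path):
    # a client that trusts the marker: zero-byte marker => empty directory
    if store.get(path + "/") == b"":
        return []
    return sorted(k for k in store if k.startswith(path + "/"))

mkdir("bucket/path")
put("bucket/path/part-0000", b"rows")
print(naive_list("bucket/path"))  # [] -- the copied-in data is invisible
```

> The object is really there in the store, but the stale marker makes the listing report an empty directory, which is exactly the interoperability gap the issue documents.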



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [hadoop] liuml07 commented on pull request #2149: HADOOP-13230. S3A to optionally retain directory markers

2020-10-07 Thread GitBox


liuml07 commented on pull request #2149:
URL: https://github.com/apache/hadoop/pull/2149#issuecomment-705322840


   Thanks Steve. Looking forward to a Hadoop 2 backport!






[GitHub] [hadoop] hadoop-yetus commented on pull request #2367: HADOOP-17298. Backslash in username causes build failure in the environment started by start-build-env.sh.

2020-10-07 Thread GitBox


hadoop-yetus commented on pull request #2367:
URL: https://github.com/apache/hadoop/pull/2367#issuecomment-705310794


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:---:|
   | +0 :ok: |  reexec  |   1m 48s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  5s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  25m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 12s |  |  branch has no errors 
when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  19m  2s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  17m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  There were no new 
shellcheck issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 21s |  |  There were no new 
shelldocs issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 44s |  |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m  5s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 147m 58s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2367/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2367 |
   | Optional Tests | dupname asflicense mvnsite unit shellcheck shelldocs |
   | uname | Linux 62107dd2619a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / df4006eb813 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2367/1/testReport/ |
   | Max. process+thread count | 413 (vs. ulimit of 5500) |
   | modules | C: . U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2367/1/console |
   | versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   






[jira] [Work logged] (HADOOP-17298) Backslash in username causes build failure in the environment started by start-build-env.sh.

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17298?focusedWorklogId=497059&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-497059
 ]

ASF GitHub Bot logged work on HADOOP-17298:
---

Author: ASF GitHub Bot
Created on: 08/Oct/20 03:41
Start Date: 08/Oct/20 03:41
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2367:
URL: https://github.com/apache/hadoop/pull/2367#issuecomment-705310794




Issue Time Tracking
---

Worklog Id: (was: 497059)
Time Spent: 20m  (was: 10m)

> Backslash in username causes build failure in the environment started by 
> start-build-env.sh.
> 
>
> Key: HADOOP-17298
> URL: https://issues.apache.org/jira/browse/HADOOP-17298
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Takeru Kuramoto
>Assignee: Takeru Kuramoto
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> If a username includes a backslash, `mvn clean install` fails in an 
> environment started by start-build-env.sh.
> Here is my result in Amazon WorkSpaces.
>  
> {code:java}
> CORPbtkuramototkr@b8e750b1e386:/home/CORP\btkuramototkr/hadoop/hadoop-build-to
> ols$ mvn clean install
> /usr/bin/mvn: 1: cd: can't cd to 
> /home/CORtkuramototkr/hadoop/hadoop-build-tools/..
> [INFO] Scanning for projects...
> [INFO] 
> [INFO] < org.apache.hadoop:hadoop-build-tools 
> >
> [INFO] 
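> The failure comes from the backslash in the username being interpreted as an escape sequence: in `CORP\btkuramototkr`, the `\b` collapses into a backspace character, so `/home/CORP\btkuramototkr` is mangled into a path that does not exist (a terminal then renders the backspace by erasing the `P`, giving the `/home/CORtkuramototkr` seen in the log). A minimal illustration, using the username from the report and `unicode_escape` decoding as a stand-in for the shell's escape processing:

```python
raw = r"/home/CORP\btkuramototkr"   # the path as it exists on disk
# When a shell or tool applies escape processing, "\b" becomes a backspace:
interpreted = raw.encode().decode("unicode_escape")
print(repr(interpreted))            # '/home/CORP\x08tkuramototkr'
# The \x08 byte means the directory lookup uses a corrupted path,
# which is why `cd` fails inside the build container.
```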

[jira] [Work logged] (HADOOP-17301) ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17301?focusedWorklogId=497056&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-497056
 ]

ASF GitHub Bot logged work on HADOOP-17301:
---

Author: ASF GitHub Bot
Created on: 08/Oct/20 03:11
Start Date: 08/Oct/20 03:11
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2369:
URL: https://github.com/apache/hadoop/pull/2369#issuecomment-705303355



[GitHub] [hadoop] hadoop-yetus commented on pull request #2369: HADOOP-17301. ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back

2020-10-07 Thread GitBox


hadoop-yetus commented on pull request #2369:
URL: https://github.com/apache/hadoop/pull/2369#issuecomment-705303355


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:---:|
   | +0 :ok: |  reexec  |  29m 10s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 1 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m 31s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  20m 26s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 10s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m  6s |  |  trunk passed  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 20s | 
[/diff-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2369/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 4 new + 0 unchanged - 0 
fixed = 4 total (was 0)  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | [/whitespace-tabs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2369/1/artifact/out/whitespace-tabs.txt) |  The patch has 3 line(s) with tabs.  |
   | +1 :green_heart: |  shadedclient  |  17m 31s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 11s |  |  the patch passed  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 34s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 117m 13s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2369/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2369 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux a8cdbf86d0f6 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / df4006eb813 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2369/1/testReport/ |
   | Max. process+thread count | 452 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2369/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated by the Apache Git Service.

[GitHub] [hadoop] aajisaka commented on a change in pull request #2363: HDFS-13293. RBF: The RouterRPCServer should transfer client IP via CallerContext to NamenodeRpcServer

2020-10-07 Thread GitBox


aajisaka commented on a change in pull request #2363:
URL: https://github.com/apache/hadoop/pull/2363#discussion_r501420480



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
##
@@ -519,6 +525,20 @@ private Object invokeMethod(
 }
   }
 
+  /**
+   * Tracks the actual client address.
+   * It adds a key/value pair ("clientIp"/ip) to the caller context.
+   */
+  private void appendClientIpToCallerContext() {
+    final CallerContext ctx = CallerContext.getCurrent();
+    String origContext = ctx == null ? null : ctx.getContext();
+    byte[] origSignature = ctx == null ? null : ctx.getSignature();
+    CallerContext.setCurrent(
+        new CallerContext.Builder(origContext, clientConfiguration)

Review comment:
   Can we pass the string separator instead of configuration to avoid 
unnecessary `Configuration.get()` for each RPC?
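A minimal sketch of that suggestion, using a simplified stand-in class (the class name, config key, and `appendClientIp` helper below are all illustrative, not the actual RouterRpcClient API): resolve the separator once in the constructor so each RPC does only string concatenation.

```java
import java.util.Map;

public class RouterRpcClientSketch {
    // Resolved once at construction instead of Configuration.get() per RPC.
    private final String contextFieldSeparator;

    RouterRpcClientSketch(Map<String, String> conf) {
        this.contextFieldSeparator =
            conf.getOrDefault("hadoop.caller.context.separator", ",");
    }

    // Appends the client IP to an existing caller context, if any.
    String appendClientIp(String origContext, String clientIp) {
        String prefix =
            (origContext == null) ? "" : origContext + contextFieldSeparator;
        return prefix + "clientIp:" + clientIp;
    }

    public static void main(String[] args) {
        RouterRpcClientSketch client = new RouterRpcClientSketch(Map.of());
        // prints "clientContext,clientIp:10.0.0.1"
        System.out.println(client.appendClientIp("clientContext", "10.0.0.1"));
    }
}
```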





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2368: Hadoop-17296. ABFS: Force reads to be always of buffer size

2020-10-07 Thread GitBox


hadoop-yetus commented on pull request #2368:
URL: https://github.com/apache/hadoop/pull/2368#issuecomment-705294313


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  0s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 59s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  findbugs  |   1m  5s | 
[/new-findbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2368/1/artifact/out/new-findbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 4 new + 0 unchanged - 0 fixed = 4 total 
(was 0)  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 32s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 31s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  80m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | FindBugs | module:hadoop-tools/hadoop-azure |
   |  |  Write to static field 
org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.readAheadBlockSize from 
instance method new 
org.apache.hadoop.fs.azurebfs.services.AbfsInputStream(AbfsClient, 
FileSystem$Statistics, String, long, AbfsInputStreamContext, String)  At 
AbfsInputStream.java:from instance method new 
org.apache.hadoop.fs.azurebfs.services.AbfsInputStream(AbfsClient, 
FileSystem$Statistics, String, long, AbfsInputStreamContext, String)  At 
AbfsInputStream.java:[line 99] |
   |  |  Write to static field 
org.apache.hadoop.fs.azurebfs.services.ReadBufferManager.blockSize from 
instance method 
org.apache.hadoop.fs.azurebfs.services.ReadBufferManager.testResetReadBufferManager(int,
 int)  At ReadBufferManager.java:from instance method 
org.apache.hadoop.fs.azurebfs.services.ReadBufferManager.testResetReadBufferManager(int,
 int)  At ReadBufferManager.java:[line 513] |
   |  |  Write to static field 
org.apache.hadoop.fs.azurebfs.services.ReadBufferManager.thresholdAgeMilliseconds
 from instance method 
org.apache.hadoop.fs.azurebfs.services.ReadBufferManager.testResetReadBufferManager(int,
 int)  At ReadBufferManager.java:from instance method 
org.apache.hadoop.fs.azurebfs.services.ReadBufferManager.testResetReadBufferManager(int,
 int)  At ReadBufferManager.java:[line 514] |
   |  |  Write to static field 
org.apache.hadoop.fs.azurebfs.services.ReadBufferManager.bufferManager from 
instance method 

[jira] [Updated] (HADOOP-17301) ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17301:

Labels: pull-request-available  (was: )

> ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
> 
>
> Key: HADOOP-17301
> URL: https://issues.apache.org/jira/browse/HADOOP-17301
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When reads done by readahead buffers failed, the exceptions were dropped and 
> the failure was not getting reported to the calling app. 
> Jira HADOOP-16852 ("Report read-ahead error back") tried to handle the 
> scenario by reporting the error back to the calling app. But that commit 
> introduced a bug which can lead to a ReadBuffer being injected into the 
> read-completed queue twice. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17301) ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17301?focusedWorklogId=497030=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-497030
 ]

ASF GitHub Bot logged work on HADOOP-17301:
---

Author: ASF GitHub Bot
Created on: 08/Oct/20 01:05
Start Date: 08/Oct/20 01:05
Worklog Time Spent: 10m 
  Work Description: snvijaya opened a new pull request #2369:
URL: https://github.com/apache/hadoop/pull/2369


   When reads done by readahead buffers failed, the exceptions were dropped 
and the failure was not getting reported to the calling app. 
   Jira HADOOP-16852 ("Report read-ahead error back") tried to handle the 
scenario by reporting the error back to the calling app. But that commit 
introduced a bug which can lead to a ReadBuffer being injected into the 
read-completed queue twice once it has finished the store operation.
   
   Additionally, in a scenario where all readahead buffers are exhausted and 
the buffer chosen for eviction is one whose read failed, no buffer is returned 
for other reads to use. But the successful eviction leads the queuing logic to 
determine there is a free buffer, and fetching the buffer index from the free 
list can then throw an EmptyStackException. 
   
   This PR fixes both issues and adds test checks for both scenarios. 
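A toy model of the guard this implies (all names below are hypothetical, not the actual AbfsInputStream/ReadBufferManager code): after evicting a failed-read buffer, the free list may still be empty, so the index fetch must be conditional rather than an unchecked pop.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class EvictionSketch {
    private final Deque<Integer> freeList = new ArrayDeque<>();

    // Returns a free buffer index after an eviction attempt, or -1 when the
    // evicted buffer was a failed read and therefore freed nothing usable.
    int acquireBufferIndex(boolean evictedWasFailedRead) {
        if (!evictedWasFailedRead) {
            freeList.push(42); // a successful eviction donates its index
        }
        // Guard instead of an unconditional pop(), which is the kind of
        // unchecked access that surfaces as an empty-stack exception.
        return freeList.isEmpty() ? -1 : freeList.pop();
    }

    public static void main(String[] args) {
        EvictionSketch s = new EvictionSketch();
        System.out.println(s.acquireBufferIndex(true));  // prints -1
        System.out.println(s.acquireBufferIndex(false)); // prints 42
    }
}
```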



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 497030)
Remaining Estimate: 0h
Time Spent: 10m

> ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back
> 
>
> Key: HADOOP-17301
> URL: https://issues.apache.org/jira/browse/HADOOP-17301
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Critical
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When reads done by readahead buffers failed, the exceptions were dropped and 
> the failure was not getting reported to the calling app. 
> Jira HADOOP-16852 ("Report read-ahead error back") tried to handle the 
> scenario by reporting the error back to the calling app. But that commit 
> introduced a bug which can lead to a ReadBuffer being injected into the 
> read-completed queue twice. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya opened a new pull request #2369: HADOOP-17301. ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back

2020-10-07 Thread GitBox


snvijaya opened a new pull request #2369:
URL: https://github.com/apache/hadoop/pull/2369


   When reads done by readahead buffers failed, the exceptions were dropped 
and the failure was not getting reported to the calling app. 
   Jira HADOOP-16852 ("Report read-ahead error back") tried to handle the 
scenario by reporting the error back to the calling app. But that commit 
introduced a bug which can lead to a ReadBuffer being injected into the 
read-completed queue twice once it has finished the store operation.
   
   Additionally, in a scenario where all readahead buffers are exhausted and 
the buffer chosen for eviction is one whose read failed, no buffer is returned 
for other reads to use. But the successful eviction leads the queuing logic to 
determine there is a free buffer, and fetching the buffer index from the free 
list can then throw an EmptyStackException. 
   
   This PR fixes both issues and adds test checks for both scenarios. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17301) ABFS: Fix bug introduced in HADOOP-16852 which reports read-ahead error back

2020-10-07 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-17301:
--

 Summary: ABFS: Fix bug introduced in HADOOP-16852 which reports 
read-ahead error back
 Key: HADOOP-17301
 URL: https://issues.apache.org/jira/browse/HADOOP-17301
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan


When reads done by readahead buffers failed, the exceptions were dropped and 
the failure was not getting reported to the calling app. 

Jira HADOOP-16852 ("Report read-ahead error back") tried to handle the 
scenario by reporting the error back to the calling app. But that commit 
introduced a bug which can lead to a ReadBuffer being injected into the 
read-completed queue twice. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17296) ABFS: Allow Random Reads to be of Buffer Size

2020-10-07 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan updated HADOOP-17296:
---
Status: Patch Available  (was: Open)

> ABFS: Allow Random Reads to be of Buffer Size
> -
>
> Key: HADOOP-17296
> URL: https://issues.apache.org/jira/browse/HADOOP-17296
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: abfsactive
>
> ADLS Gen2/ABFS driver is optimized to read only the bytes that are requested 
> when the read pattern is random. 
> It was observed in some Spark jobs that though the reads are random, the next 
> read doesn't skip ahead by much and could have been served by the earlier read 
> had that read been done at buffer size. As a result the job triggered a higher 
> count of read calls and resulted in higher job runtime.
> When these jobs were run against Gen1, which always reads at buffer size, the 
> jobs fared well. 
> In this Jira we provide a config control over whether a random read is of the 
> requested size or the buffer size.
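As a sketch only, the proposed toggle might be wired up in core-site.xml as below; the Jira text does not name the properties, so the keys here are hypothetical placeholders, not confirmed configuration names.

```xml
<!-- Hypothetical placeholders for the random-read control this Jira
     proposes; the actual property names are defined by the patch. -->
<property>
  <name>fs.azure.read.alwaysreadbuffersize</name>
  <value>true</value>
</property>
<property>
  <name>fs.azure.read.readahead.blocksize</name>
  <value>4194304</value>
</property>
```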



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17151) upgrade jetty to 9.4.21

2020-10-07 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17209914#comment-17209914
 ] 

Wei-Chiu Chuang commented on HADOOP-17151:
--

I was not aware of this issue before. But as stated in the Jetty GitHub issue, 
it's likely a regression in Jetty 9.4, and we only recently updated Jetty in 
Hadoop 3.3.0.

The latest Jetty 9.4 release is 9.4.32. Can we use the latest instead?

> upgrade jetty to 9.4.21
> ---
>
> Key: HADOOP-17151
> URL: https://issues.apache.org/jira/browse/HADOOP-17151
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 3.3.0
>Reporter: liusheng
>Priority: Major
> Attachments: HADOOP-17151.001.patch
>
>
> I have tried to configure and start Hadoop KMS service, it was failed to 
> start the error log messages:
> {noformat}
> 2020-07-23 10:57:31,872 INFO  Server - jetty-9.4.20.v20190813; built: 
> 2019-08-13T21:28:18.144Z; git: 84700530e645e812b336747464d6fbbf370c9a20; jvm 
> 1.8.0_252-8u252-b09-1~18.04-b09
> 2020-07-23 10:57:31,899 INFO  session - DefaultSessionIdManager 
> workerName=node0
> 2020-07-23 10:57:31,899 INFO  session - No SessionScavenger set, using 
> defaults
> 2020-07-23 10:57:31,901 INFO  session - node0 Scavenging every 66ms
> 2020-07-23 10:57:31,912 INFO  ContextHandler - Started 
> o.e.j.s.ServletContextHandler@5bf0d49{logs,/logs,file:///opt/hadoop-3.4.0-SNAPSHOT/logs/,AVAILABLE}
> 2020-07-23 10:57:31,913 INFO  ContextHandler - Started 
> o.e.j.s.ServletContextHandler@7c7a06ec{static,/static,jar:file:/opt/hadoop-3.4.0-SNAPSHOT/share/hadoop/common/hadoop-kms-3.4.0-SNAPSHOT.jar!/webapps/static,AVAILABLE}
> 2020-07-23 10:57:31,986 INFO  TypeUtil - JVM Runtime does not support Modules
> 2020-07-23 10:57:32,015 INFO  KMSWebApp - 
> -
> 2020-07-23 10:57:32,015 INFO  KMSWebApp -   Java runtime version : 
> 1.8.0_252-8u252-b09-1~18.04-b09
> 2020-07-23 10:57:32,015 INFO  KMSWebApp -   User: hadoop
> 2020-07-23 10:57:32,015 INFO  KMSWebApp -   KMS Hadoop Version: 3.4.0-SNAPSHOT
> 2020-07-23 10:57:32,015 INFO  KMSWebApp - 
> -
> 2020-07-23 10:57:32,023 INFO  KMSACLs - 'CREATE' ACL '*'
> 2020-07-23 10:57:32,024 INFO  KMSACLs - 'DELETE' ACL '*'
> 2020-07-23 10:57:32,024 INFO  KMSACLs - 'ROLLOVER' ACL '*'
> 2020-07-23 10:57:32,024 INFO  KMSACLs - 'GET' ACL '*'
> 2020-07-23 10:57:32,024 INFO  KMSACLs - 'GET_KEYS' ACL '*'
> 2020-07-23 10:57:32,024 INFO  KMSACLs - 'GET_METADATA' ACL '*'
> 2020-07-23 10:57:32,024 INFO  KMSACLs - 'SET_KEY_MATERIAL' ACL '*'
> 2020-07-23 10:57:32,024 INFO  KMSACLs - 'GENERATE_EEK' ACL '*'
> 2020-07-23 10:57:32,024 INFO  KMSACLs - 'DECRYPT_EEK' ACL '*'
> 2020-07-23 10:57:32,025 INFO  KMSACLs - default.key.acl. for KEY_OP 'READ' is 
> set to '*'
> 2020-07-23 10:57:32,025 INFO  KMSACLs - default.key.acl. for KEY_OP 
> 'MANAGEMENT' is set to '*'
> 2020-07-23 10:57:32,025 INFO  KMSACLs - default.key.acl. for KEY_OP 
> 'GENERATE_EEK' is set to '*'
> 2020-07-23 10:57:32,025 INFO  KMSACLs - default.key.acl. for KEY_OP 
> 'DECRYPT_EEK' is set to '*'
> 2020-07-23 10:57:32,080 INFO  KMSAudit - Initializing audit logger class 
> org.apache.hadoop.crypto.key.kms.server.SimpleKMSAuditLogger
> 2020-07-23 10:57:32,537 INFO  KMSWebServer - SHUTDOWN_MSG:
> /
> SHUTDOWN_MSG: Shutting down KMSWebServer at 
> hadoop-benchmark/172.17.0.2{noformat}
> I have googled the error and found a similar issue: 
> [https://github.com/eclipse/jetty.project/issues/4064]
> It looks like a bug in Jetty that has been fixed in Jetty >= 9.4.21; Hadoop 
> currently uses Jetty 9.4.20, see hadoop-project/pom.xml.
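The fix described above would be a one-line version bump in hadoop-project/pom.xml; the exact date qualifier of the 9.4.21 release below is illustrative, not taken from the patch.

```xml
<!-- hadoop-project/pom.xml: bump Jetty past the affected 9.4.20 release.
     The release date qualifier is illustrative. -->
<jetty.version>9.4.21.v20190926</jetty.version>
```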



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya commented on pull request #2368: Hadoop-17296. ABFS: Force reads to be always of buffer size

2020-10-07 Thread GitBox


snvijaya commented on pull request #2368:
URL: https://github.com/apache/hadoop/pull/2368#issuecomment-705234401


   Test results from accounts in East US 2 regions:
   
   ### NON-HNS:
   
SharedKey:
[INFO] Tests run: 88, Failures: 0, Errors: 0, Skipped: 0
[WARNING] Tests run: 458, Failures: 0, Errors: 0, Skipped: 245
[ERROR] Errors: 
[ERROR]   
ITestAbfsFileSystemContractGetFileStatus>AbstractContractGetFileStatusTest.testComplexDirActions:153->AbstractContractGetFileStatusTest.checkListStatusIteratorComplexDir:191
 » IllegalState
[ERROR]   
ITestAbfsFileSystemContractGetFileStatus>AbstractContractGetFileStatusTest.testListStatusIteratorFile:366
 » IllegalState
[ERROR] Tests run: 208, Failures: 0, Errors: 2, Skipped: 24
[ERROR] 
testComplexDirActions(org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractGetFileStatus)
  Time elapsed: 31.122 s  <<< ERROR!
java.lang.IllegalStateException: No more items in 
iterator
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:507)
at 
org.apache.hadoop.fs.FileSystem$DirListingIterator.next(FileSystem.java:2232)
at 
org.apache.hadoop.fs.FileSystem$DirListingIterator.next(FileSystem.java:2205)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.iteratorToListThroughNextCallsAlone(ContractTestUtils.java:1494)
at 
org.apache.hadoop.fs.contract.AbstractContractGetFileStatusTest.checkListStatusIteratorComplexDir(AbstractContractGetFileStatusTest.java:191)
at 
org.apache.hadoop.fs.contract.AbstractContractGetFileStatusTest.testComplexDirActions(AbstractContractGetFileStatusTest.java:153)
at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 
java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
at 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
   
[ERROR] 
testListStatusIteratorFile(org.apache.hadoop.fs.azurebfs.contract.ITestAbfsFileSystemContractGetFileStatus)
  Time elapsed: 3.038 s  <<< ERROR!
java.lang.IllegalStateException: No more items in 
iterator
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:507)
at 
org.apache.hadoop.fs.FileSystem$DirListingIterator.next(FileSystem.java:2232)
at 
org.apache.hadoop.fs.FileSystem$DirListingIterator.next(FileSystem.java:2205)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.iteratorToListThroughNextCallsAlone(ContractTestUtils.java:1494)
at 
org.apache.hadoop.fs.contract.AbstractContractGetFileStatusTest.testListStatusIteratorFile(AbstractContractGetFileStatusTest.java:366)
at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at 
java.lang.reflect.Method.invoke(Method.java:498)
at 

[jira] [Commented] (HADOOP-17223) update org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13

2020-10-07 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17209911#comment-17209911
 ] 

Wei-Chiu Chuang commented on HADOOP-17223:
--

+1 for trunk. I prefer to stay on the latest dependency versions for the next 
Hadoop minor release.

> update  org.apache.httpcomponents:httpclient to 4.5.12 and httpcore to 4.4.13
> -
>
> Key: HADOOP-17223
> URL: https://issues.apache.org/jira/browse/HADOOP-17223
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.3.0
>Reporter: Pranav Bheda
>Priority: Major
> Attachments: HADOOP-17223.001.patch
>
>
> Update the dependencies
>  * org.apache.httpcomponents:httpclient from 4.5.6 to 4.5.12
>  * org.apache.httpcomponents:httpcore from 4.4.10 to 4.4.13



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya opened a new pull request #2368: Hadoop-17296. ABFS: Force reads to be always of buffer size

2020-10-07 Thread GitBox


snvijaya opened a new pull request #2368:
URL: https://github.com/apache/hadoop/pull/2368


   Customers migrating from Gen1 to Gen2 often observe different read patterns 
for the same workload. The cause is usually the Gen2 optimization that reads 
only the requested data size once a random read pattern is detected.
   
   This PR introduces a config option to force the Gen2 driver to always read 
at buffer size, even for random reads. With this enabled, the job's read 
pattern will be similar to Gen1, issuing full-buffer-size reads to the backend.
   
   The readahead size is also made configurable, to help cases such as small 
row groups in Parquet files, where more data can be captured per read.
   
   These configs have not been found performant at the officially recommended 
Parquet row group sizes of 512-1024 MB and hence will not be enabled by 
default. 
   
   Tests are added to verify various combinations of config values. Tests in 
ITestAzureBlobFileSystemRandomRead were also modified, as they were reusing 
the same file, which made test debugging harder.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17296) ABFS: Allow Random Reads to be of Buffer Size

2020-10-07 Thread Sneha Vijayarajan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17209897#comment-17209897
 ] 

Sneha Vijayarajan commented on HADOOP-17296:


[~mukund-thakur] - 

Readahead.range will provide a static increase over whatever the requested 
read size is, which makes the read to the store a different size. 

The specific case mentioned in the description was a pattern observed for a 
parquet file which had a very small row group size, which I gather isn't an 
optimal structure for a parquet file. The Gen1 job run was more performant, as 
it read a full buffer, and a buffer-sized read ended up covering more row 
groups. 

Gen2's random read logic ended up triggering more IOPs, since for random reads 
it fetches only the requested bytes. Forcing Gen2 to read a full buffer like 
Gen1 helped highlight that it was the randomness of the read pattern that led 
to the higher job runtime and IOPs. 

But reading a full buffer for every random read is definitely not ideal, 
especially for a blocking read call from the app. Hence the configs that 
enforce a full-buffer read will be set to false by default. We get similar 
asks for Gen1-to-Gen2 comparisons of the same workloads, and we hope that 
rerunning a workload with this config turned on will make that information 
easier to get, with the IO patterns matching. 

Readahead.range, which would be a consistent amount of data read ahead on top 
of the varying requested read size, is definitely a better solution for a 
performant random read on Gen2, and we should pursue that. 
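The IOPs argument above can be illustrated with back-of-envelope arithmetic; the numbers below are made up for illustration, not measurements from these jobs.

```java
public class IopsSketch {
    public static void main(String[] args) {
        long fileBytes = 512L * 1024 * 1024;   // 512 MB scanned overall
        long readSize = 64 * 1024;             // 64 KB requested per read
        long bufferSize = 4 * 1024 * 1024;     // 4 MB driver buffer

        // One store call per requested read vs. one per filled buffer,
        // assuming nearby reads can be served from the earlier buffer.
        long callsRequestedSize = fileBytes / readSize;
        long callsBufferSize = fileBytes / bufferSize;

        System.out.println(callsRequestedSize); // prints 8192
        System.out.println(callsBufferSize);    // prints 128
    }
}
```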


> ABFS: Allow Random Reads to be of Buffer Size
> -
>
> Key: HADOOP-17296
> URL: https://issues.apache.org/jira/browse/HADOOP-17296
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: abfsactive
>
> ADLS Gen2/ABFS driver is optimized to read only the bytes that are requested 
> for when the read pattern is random. 
> It was observed in some spark jobs that though the reads are random, the next 
> read doesn't skip by a lot and can be served by the earlier read if read was 
> done in buffer size. As a result the job triggered a higher count of read 
> calls and resulted in higher job runtime.
> When these jobs were run against Gen1 which always reads in buffer size , the 
> jobs fared well. 
> In this Jira we try to provide a control over config on random read to be of 
> requested size or buffer size.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on a change in pull request #2363: HDFS-13293. RBF: The RouterRPCServer should transfer client IP via CallerContext to NamenodeRpcServer

2020-10-07 Thread GitBox


goiri commented on a change in pull request #2363:
URL: https://github.com/apache/hadoop/pull/2363#discussion_r501316133



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
##
@@ -1901,4 +1904,27 @@ private DFSClient getFileDFSClient(final String path) {
 }
 return null;
   }
+
+  @Test
+  public void testMkdirsWithCallerContext() throws IOException {
+    GenericTestUtils.LogCapturer auditlog =
+        GenericTestUtils.LogCapturer.captureLogs(FSNamesystem.auditLog);
+
+    // Current callerContext is null
+    assertNull(CallerContext.getCurrent());
+
+    // Set client context
+    CallerContext.setCurrent(
+        new CallerContext.Builder("clientContext").build());
+
+    // Create a directory via the router
+    String dirPath = "/test_dir_with_callercontext";
+    FsPermission permission = new FsPermission("755");
+    routerProtocol.mkdirs(dirPath, permission, false);
+
+    // The audit log should contain "callerContext=clientContext,clientIp:"
+    assertTrue(auditlog.getOutput()
+        .contains("callerContext=clientContext,clientIp:"));

Review comment:
   Correct, grabbing the proper Client IP is not trivial and error prone so 
I'm fine with this.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16990) Update Mockserver

2020-10-07 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17209849#comment-17209849
 ] 

Wei-Chiu Chuang commented on HADOOP-16990:
--

Yeah... I think we only have Git PR precommit set up for some of the branches 
(specifically, 3.3 and above).
Can you post patch files for branch-3.3 through branch-3.1?

> Update Mockserver
> -
>
> Key: HADOOP-16990
> URL: https://issues.apache.org/jira/browse/HADOOP-16990
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wei-Chiu Chuang
>Assignee: Attila Doroszlai
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HADOOP-16990.001.patch
>
>
> We are on Mockserver 3.9.2 which is more than 5 years old. Time to update.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Work logged] (HADOOP-17288) Use shaded guava from thirdparty

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17288?focusedWorklogId=496890&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496890
 ]

ASF GitHub Bot logged work on HADOOP-17288:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 19:34
Start Date: 07/Oct/20 19:34
Worklog Time Spent: 10m 
  Work Description: saintstack commented on a change in pull request #2342:
URL: https://github.com/apache/hadoop/pull/2342#discussion_r501260800



##
File path: hadoop-common-project/hadoop-auth/pom.xml
##
@@ -234,6 +235,24 @@
   
${basedir}/dev-support/findbugsExcludeFile.xml
 
   
+      <plugin>
+        <groupId>com.google.code.maven-replacer-plugin</groupId>
+        <artifactId>replacer</artifactId>
+        <executions>
+          <execution>
+            <id>replace-sources</id>
+            <configuration>
+              <skip>false</skip>
+            </configuration>
+          </execution>
+          <execution>
+            <id>replace-test-sources</id>
+            <configuration>
+              <skip>false</skip>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>

Review comment:
   @vinayakumarb 
   
   >  Since we commit the replaced code itself...
   
   Good.
   
   Ok on the isolation of grpc and its dependencies (we can try shading grpc 
later if it becomes a problem).
   
   Thanks for the update on curator (shading curator would be a pain; it does 
shading itself, IIRC). Hive also has guava 11 in its API, apparently.
   
   > We are fortunate enough that these dependencies are not part of any 
exposed APIs.
   
   Amen
   
   @ayushtkn 
   
   > So, I think we can keep guava version as is ...
   
   At 27? (Or at 11?) Sorry, I had trouble grokking here.
   
   > Since reverting back would be a problem for those who don't have their 
own guava and rely on hadoop to provide guava, their code might break.
   
   I think this would be ok -- i.e. breakage -- especially in a new minor 
version, 3.4.0. The fix is easy enough for downstream: just add a guava 
dependency.
   
   > So, I think we can keep the reversal separate from shading and if the 
community is OK with going back, we will revert back in a separate jira
   
   Sounds good.
   







Issue Time Tracking
---

Worklog Id: (was: 496890)
Time Spent: 1h 10m  (was: 1h)

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty






[GitHub] [hadoop] saintstack commented on a change in pull request #2342: HADOOP-17288. Use shaded guava from thirdparty.

2020-10-07 Thread GitBox


saintstack commented on a change in pull request #2342:
URL: https://github.com/apache/hadoop/pull/2342#discussion_r501260800



##
File path: hadoop-common-project/hadoop-auth/pom.xml
##
@@ -234,6 +235,24 @@
   
${basedir}/dev-support/findbugsExcludeFile.xml
 
   
+      <plugin>
+        <groupId>com.google.code.maven-replacer-plugin</groupId>
+        <artifactId>replacer</artifactId>
+        <executions>
+          <execution>
+            <id>replace-sources</id>
+            <configuration>
+              <skip>false</skip>
+            </configuration>
+          </execution>
+          <execution>
+            <id>replace-test-sources</id>
+            <configuration>
+              <skip>false</skip>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>

Review comment:
   @vinayakumarb 
   
   >  Since we commit the replaced code itself...
   
   Good.
   
   Ok on the isolation of grpc and its dependencies (we can try shading grpc 
later if it becomes a problem).
   
   Thanks for the update on curator (shading curator would be a pain; it does 
shading itself, IIRC). Hive also has guava 11 in its API, apparently.
   
   > We are fortunate enough that these dependencies are not part of any 
exposed APIs.
   
   Amen
   
   @ayushtkn 
   
   > So, I think we can keep guava version as is ...
   
   At 27? (Or at 11?) Sorry, I had trouble grokking here.
   
   > Since reverting back would be a problem for those who don't have their 
own guava and rely on hadoop to provide guava, their code might break.
   
   I think this would be ok -- i.e. breakage -- especially in a new minor 
version, 3.4.0. The fix is easy enough for downstream: just add a guava 
dependency.
   
   > So, I think we can keep the reversal separate from shading and if the 
community is OK with going back, we will revert back in a separate jira
   
   Sounds good.
   








[jira] [Work logged] (HADOOP-17288) Use shaded guava from thirdparty

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17288?focusedWorklogId=496858&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496858
 ]

ASF GitHub Bot logged work on HADOOP-17288:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 19:03
Start Date: 07/Oct/20 19:03
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on a change in pull request #2342:
URL: https://github.com/apache/hadoop/pull/2342#discussion_r501244056



##
File path: hadoop-common-project/hadoop-auth/pom.xml
##
@@ -234,6 +235,24 @@
   
${basedir}/dev-support/findbugsExcludeFile.xml
 
   
+      <plugin>
+        <groupId>com.google.code.maven-replacer-plugin</groupId>
+        <artifactId>replacer</artifactId>
+        <executions>
+          <execution>
+            <id>replace-sources</id>
+            <configuration>
+              <skip>false</skip>
+            </configuration>
+          </execution>
+          <execution>
+            <id>replace-test-sources</id>
+            <configuration>
+              <skip>false</skip>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>

Review comment:
   Thanks @vinayakumarb and @saintstack 
   Since the discussion is going on here, I thought I'd keep everything here.
   Regarding reverting guava to 11: as you mentioned, that would be 
incompatible, and downstream projects would just be excluding it when adding 
the hadoop jars as a dependency. So, in the present state there won't be any 
problem with the version, since the hadoop jars aren't using it, and the 
other dependencies are not affected by this version, as tests passed both 
with 27 and 11. So, I think we can keep the guava version as is, let 
downstream exclude it while including the hadoop jars (most projects already 
do that), and run happily with their own guava versions.
   
   Since reverting back would be a problem for those who don't have their own 
guava and rely on hadoop to provide guava, their code might break. 
   
   So, I think we can keep the reversal separate from shading and, if the 
community is OK with going back, we will revert back in a separate jira.
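The exclusion pattern described above, as a hypothetical downstream pom.xml fragment (the hadoop artifact and version chosen are illustrative):

```xml
<!-- Hypothetical downstream pom.xml: pull in a hadoop jar but exclude its
     transitive guava, then supply your own guava version elsewhere. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>3.4.0</version>
  <exclusions>
    <exclusion>
      <groupId>com.google.guava</groupId>
      <artifactId>guava</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```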







Issue Time Tracking
---

Worklog Id: (was: 496858)
Time Spent: 1h  (was: 50m)

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty






[GitHub] [hadoop] ayushtkn commented on a change in pull request #2342: HADOOP-17288. Use shaded guava from thirdparty.

2020-10-07 Thread GitBox


ayushtkn commented on a change in pull request #2342:
URL: https://github.com/apache/hadoop/pull/2342#discussion_r501244056



##
File path: hadoop-common-project/hadoop-auth/pom.xml
##
@@ -234,6 +235,24 @@
   
${basedir}/dev-support/findbugsExcludeFile.xml
 
   
+      <plugin>
+        <groupId>com.google.code.maven-replacer-plugin</groupId>
+        <artifactId>replacer</artifactId>
+        <executions>
+          <execution>
+            <id>replace-sources</id>
+            <configuration>
+              <skip>false</skip>
+            </configuration>
+          </execution>
+          <execution>
+            <id>replace-test-sources</id>
+            <configuration>
+              <skip>false</skip>
+            </configuration>
+          </execution>
+        </executions>
+      </plugin>

Review comment:
   Thanks @vinayakumarb and @saintstack 
   Since the discussion is going on here, I thought I'd keep everything here.
   Regarding reverting guava to 11: as you mentioned, that would be 
incompatible, and downstream projects would just be excluding it when adding 
the hadoop jars as a dependency. So, in the present state there won't be any 
problem with the version, since the hadoop jars aren't using it, and the 
other dependencies are not affected by this version, as tests passed both 
with 27 and 11. So, I think we can keep the guava version as is, let 
downstream exclude it while including the hadoop jars (most projects already 
do that), and run happily with their own guava versions.
   
   Since reverting back would be a problem for those who don't have their own 
guava and rely on hadoop to provide guava, their code might break. 
   
   So, I think we can keep the reversal separate from shading and, if the 
community is OK with going back, we will revert back in a separate jira.








[GitHub] [hadoop] ferhui commented on a change in pull request #2363: HDFS-13293. RBF: The RouterRPCServer should transfer client IP via CallerContext to NamenodeRpcServer

2020-10-07 Thread GitBox


ferhui commented on a change in pull request #2363:
URL: https://github.com/apache/hadoop/pull/2363#discussion_r501240102



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
##
@@ -1901,4 +1904,27 @@ private DFSClient getFileDFSClient(final String path) {
 }
 return null;
   }
+
+  @Test
+  public void testMkdirsWithCallerContext() throws IOException {
+GenericTestUtils.LogCapturer auditlog =
+GenericTestUtils.LogCapturer.captureLogs(FSNamesystem.auditLog);
+
+// Current callerContext is null
+assertNull(CallerContext.getCurrent());
+
+// Set client context
+CallerContext.setCurrent(
+new CallerContext.Builder("clientContext").build());
+
+// Create a directory via the router
+String dirPath = "/test_dir_with_callercontext";
+FsPermission permission = new FsPermission("755");
+routerProtocol.mkdirs(dirPath, permission, false);
+
+// The audit log should contain "callerContext=clientContext,clientIp:"
+assertTrue(auditlog.getOutput()
+.contains("callerContext=clientContext,clientIp:"));

Review comment:
   Sorry, I don't understand. If we check the actual IP, we should get the 
client's actual IP, e.g. "w.x.y.z", and then check that the audit log 
contains "callerContext=clientContext,clientIp:w.x.y.z" -- is that right?
   Right now it's hard to get the client IP. 








[jira] [Work logged] (HADOOP-17271) S3A statistics to support IOStatistics

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17271?focusedWorklogId=496848&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496848
 ]

ASF GitHub Bot logged work on HADOOP-17271:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 18:45
Start Date: 07/Oct/20 18:45
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-705124775


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 42 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   6m  6s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 16s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 52s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 38s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  19m  2s |  |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 
1 fixed = 2048 total (was 2049)  |
   | +1 :green_heart: |  compile  |  17m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m  0s |  |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 
1 fixed = 1941 total (was 1942)  |
   | -0 :warning: |  checkstyle  |   2m 50s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/13/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 5 new + 267 unchanged - 25 fixed = 272 total (was 
292)  |
   | +1 :green_heart: |  mvnsite  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  hadoop-mapreduce-client-core 
in the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 
0 unchanged - 4 fixed = 0 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   5m 15s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 52s |  |  hadoop-common in the patch 
passed.  |
   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2324: HADOOP-17271. S3A statistics to support IOStatistics

2020-10-07 Thread GitBox


hadoop-yetus commented on pull request #2324:
URL: https://github.com/apache/hadoop/pull/2324#issuecomment-705124775


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  3s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 42 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   6m  6s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 54s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 40s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  17m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 17s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 54s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 16s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   4m 52s |  |  trunk passed  |
   | -0 :warning: |  patch  |   1m 38s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m  2s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  19m  2s |  |  
root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 0 new + 2048 unchanged - 
1 fixed = 2048 total (was 2049)  |
   | +1 :green_heart: |  compile  |  17m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m  0s |  |  
root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 1941 unchanged - 
1 fixed = 1941 total (was 1942)  |
   | -0 :warning: |  checkstyle  |   2m 50s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2324/13/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 5 new + 267 unchanged - 25 fixed = 272 total (was 
292)  |
   | +1 :green_heart: |  mvnsite  |   3m 17s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  14m  9s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  hadoop-common in the patch 
passed with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  hadoop-mapreduce-client-core 
in the patch passed with JDK Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.  |
   | +1 :green_heart: |  javadoc  |   0m 44s |  |  
hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 
with JDK Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 0 new + 
0 unchanged - 4 fixed = 0 total (was 4)  |
   | +1 :green_heart: |  findbugs  |   5m 15s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 52s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   7m  4s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 40s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 192m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 

[GitHub] [hadoop] rakeshadr merged pull request #2366: HDFS-15253 Default checkpoint transfer speed, 50mb per second

2020-10-07 Thread GitBox


rakeshadr merged pull request #2366:
URL: https://github.com/apache/hadoop/pull/2366


   






[GitHub] [hadoop] goiri commented on a change in pull request #2363: HDFS-13293. RBF: The RouterRPCServer should transfer client IP via CallerContext to NamenodeRpcServer

2020-10-07 Thread GitBox


goiri commented on a change in pull request #2363:
URL: https://github.com/apache/hadoop/pull/2363#discussion_r501230553



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
##
@@ -1901,4 +1904,27 @@ private DFSClient getFileDFSClient(final String path) {
 }
 return null;
   }
+
+  @Test
+  public void testMkdirsWithCallerContext() throws IOException {
+GenericTestUtils.LogCapturer auditlog =
+GenericTestUtils.LogCapturer.captureLogs(FSNamesystem.auditLog);
+
+// Current callerContext is null
+assertNull(CallerContext.getCurrent());
+
+// Set client context
+CallerContext.setCurrent(
+new CallerContext.Builder("clientContext").build());
+
+// Create a directory via the router
+String dirPath = "/test_dir_with_callercontext";
+FsPermission permission = new FsPermission("755");
+routerProtocol.mkdirs(dirPath, permission, false);
+
+// The audit log should contain "callerContext=clientContext,clientIp:"
+assertTrue(auditlog.getOutput()
+.contains("callerContext=clientContext,clientIp:"));

Review comment:
   The only issue I see is that this may grab unrelated log output, but I 
guess it's fine.








[jira] [Work logged] (HADOOP-17295) Move dedicated pre-logging statements into existing logging guards

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17295?focusedWorklogId=496844&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496844
 ]

ASF GitHub Bot logged work on HADOOP-17295:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 18:38
Start Date: 07/Oct/20 18:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2358:
URL: https://github.com/apache/hadoop/pull/2358#issuecomment-705121286


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   6m  3s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 30s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 29s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  20m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 38s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 42s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 45s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 13s |  |  hadoop-registry in the patch 
passed.  |
   | -1 :x: |  unit  | 111m 23s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2358/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  unit  |  22m  5s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 320m 54s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestDFSShell |
   |   | 
hadoop.hdfs.server.blockmanagement.TestAvailableSpaceRackFaultTolerantBPP |
   |   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestDFSInputStream |
   |   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
   |   | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2358: HADOOP-17295 Move dedicated pre-logging statements into existing logg…

2020-10-07 Thread GitBox


hadoop-yetus commented on pull request #2358:
URL: https://github.com/apache/hadoop/pull/2358#issuecomment-705121286


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   6m  3s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  26m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  18m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 56s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 45s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 36s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 30s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   5m 29s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 16s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  20m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  17m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  17m 38s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   2m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 42s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m  8s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   2m 34s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   5m 45s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 13s |  |  hadoop-registry in the patch 
passed.  |
   | -1 :x: |  unit  | 111m 23s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2358/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  unit  |  22m  5s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 54s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 320m 54s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestDFSShell |
   |   | 
hadoop.hdfs.server.blockmanagement.TestAvailableSpaceRackFaultTolerantBPP |
   |   | hadoop.hdfs.server.namenode.TestNameNodeMXBean |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
   |   | hadoop.hdfs.TestDFSInputStream |
   |   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
   |   | hadoop.hdfs.server.blockmanagement.TestBlockManager |
   |   | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.TestGetFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2358/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2358 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | 

[jira] [Work logged] (HADOOP-17288) Use shaded guava from thirdparty

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17288?focusedWorklogId=496839&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496839
 ]

ASF GitHub Bot logged work on HADOOP-17288:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 18:36
Start Date: 07/Oct/20 18:36
Worklog Time Spent: 10m 
  Work Description: vinayakumarb commented on a change in pull request 
#2342:
URL: https://github.com/apache/hadoop/pull/2342#discussion_r501229119



##
File path: hadoop-common-project/hadoop-auth/pom.xml
##
@@ -234,6 +235,24 @@
   
${basedir}/dev-support/findbugsExcludeFile.xml
 
   
+  
+com.google.code.maven-replacer-plugin
+replacer
+
+  
+replace-sources
+
+  false
+
+  
+  
+replace-test-sources
+
+  false
+
+  
+
+  

Review comment:
   > A few questions so I can better understand the suggested approach:
   > 
   > So source would continue to refer to unshaded guava? (unshaded guava 11?)
   Hadoop source itself uses the shaded guava from hadoop-thirdparty.
   > 
   > Rather than rewrite once as this PR does, we'd rewrite on every build? How 
long does the rewrite take?
   > 
   This replacer is basically there to make the job easy for this PR. Since we 
commit the replaced code itself, further replacements won't happen on the same 
file. It will help to catch further references to unshaded guava. The rewrite 
does not take much time; it finishes in a fraction of a second. 
   
   > hadoop-yarn-csi is bound to guava-20? Why is that? The grpc used? 
Interested in how a guava 20 in classpath won't mess up downstreamers who want 
to use something newer (or older).
   
   Yes, yarn-csi uses grpc, and grpc refers to guava-20; it also used protobuf 
3.x even before protobuf was upgraded for the whole of Hadoop. yarn-csi keeps a 
separate directory (share/hadoop/yarn/csi/lib) for its dependencies, and the 
existing scripts don't add this directory to the classpath of other processes.
   
   > 
   > For 4., hopefully we can (later) add an enforcer so non-shaded references 
get flagged at build time. Rather than have hadoop depend on something it 
doesn't use, hopefully we can work on fixing the curator dependency to fix its 
pom?
   > 
   Yes. If we could add this enforcer, we might not need to find and replace 
unshaded usages of guava on every build.
   As for Curator, the Curator community is not ready to shade the 3 guava 
classes that are part of their API. These classes seem to be pretty stable 
across versions, so Curator does work with guava-11 on the classpath.
   If we want to fix this and completely remove Hadoop's dependency on guava, 
we may need to shade Curator as well in hadoop-thirdparty and use that in 
Hadoop.
   We are fortunate that these dependencies are not part of any exposed 
APIs.
   
   > Thanks @vinayakumarb
   
   Thanks @saintstack 
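
   The build-time enforcement floated above — flagging unshaded 
com.google.common references — could plausibly be expressed as a Checkstyle 
`IllegalImport` rule rather than a Maven enforcer. This is only a hedged 
sketch; the actual module wiring would depend on the project's existing 
checkstyle.xml:

```
<!-- Hypothetical checkstyle fragment: fail the build on unshaded guava
     imports. The relocated org.apache.hadoop.thirdparty.com.google.common
     package is unaffected, since it does not start with com.google.common. -->
<module name="TreeWalker">
  <module name="IllegalImport">
    <property name="illegalPkgs" value="com.google.common"/>
  </module>
</module>
```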





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 496839)
Time Spent: 50m  (was: 40m)

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org




[jira] [Work logged] (HADOOP-17292) Using lz4-java in Lz4Codec

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17292?focusedWorklogId=496826&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496826
 ]

ASF GitHub Bot logged work on HADOOP-17292:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 18:25
Start Date: 07/Oct/20 18:25
Worklog Time Spent: 10m 
  Work Description: dbtsai commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-705114513


   Gently ping @steveloughran. This is almost identical to the SnappyCodec one 
you merged. Could you help review it? Thanks.





Issue Time Tracking
---

Worklog Id: (was: 496826)
Time Spent: 2h 10m  (was: 2h)

> Using lz4-java in Lz4Codec
> --
>
> Key: HADOOP-17292
> URL: https://issues.apache.org/jira/browse/HADOOP-17292
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the lz4 codec, which has several 
> disadvantages:
> It requires the native libhadoop to be installed in the system 
> LD_LIBRARY_PATH, and it has to be installed separately on each node of the 
> clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. This approach is 
> also platform dependent; the binary may not work on a different platform, so 
> it requires recompilation.
> It requires extra configuration of java.library.path to load the natives, 
> which results in higher application deployment and maintenance cost for users.
> Projects such as Spark use [lz4-java|https://github.com/lz4/lz4-java], which 
> is a JNI-based implementation. It bundles the native binaries in its jar 
> file, and it can automatically load them into the JVM from the jar without 
> any setup. If a native implementation cannot be found for a platform, it can 
> fall back to a pure-Java implementation of lz4.
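
The try-native-then-fall-back behaviour described above can be sketched with 
plain JDK reflection. This is only an illustration — the probed class name is 
invented, and real lz4-java additionally extracts its bundled native library 
from the jar before its JNI path can load:

```java
// Sketch of "use a JNI-backed codec if present, else pure Java".
// Not lz4-java's actual code; net.example.lz4.NativeLz4Backend is hypothetical.
public class Lz4BackendLoader {
    interface Backend { String name(); }

    static Backend load() {
        try {
            // Probe for a JNI-backed implementation on the classpath.
            Class<?> cls = Class.forName("net.example.lz4.NativeLz4Backend");
            return (Backend) cls.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException | UnsatisfiedLinkError e) {
            // No native implementation for this platform: fall back to pure
            // Java, which needs no LD_LIBRARY_PATH or java.library.path setup.
            return () -> "pure-java";
        }
    }

    public static void main(String[] args) {
        System.out.println(load().name()); // prints "pure-java" here
    }
}
```

Because the whole decision happens at class-load time inside the jar, no 
per-node native installation is needed — which is the deployment win the 
issue describes.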








[jira] [Commented] (HADOOP-13551) hook up AwsSdkMetrics to hadoop metrics

2020-10-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17209759#comment-17209759
 ] 

Steve Loughran commented on HADOOP-13551:
-

One aspect of wiring this up is that we will find out about throttle events 
being handled in the SDK.

I propose handling those specially, with
* logging @ warn with the request ID, operation & URL
* updating an FS-wide stat
* maybe: some plugin point for someone to add their own reporter

The other thing to consider is whether we could support adding the X-Ray 
instrumentation to the client, which would then do the publishing that way.
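
The three bullets above can be sketched as a small, library-free tracker. The 
class and method names here are invented for illustration; this is not the 
S3A implementation:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

// Hypothetical sketch of the proposed throttle handling; not actual S3A code.
public class ThrottleTracker {
    private final AtomicLong throttleEvents = new AtomicLong();     // FS-wide stat
    private volatile Consumer<String> reporter = requestId -> { };  // plugin point

    void setReporter(Consumer<String> r) { reporter = r; }

    void onThrottle(String requestId, String operation, String url) {
        // Log at warn with the request ID, operation & URL
        // (System.err stands in for the real logger).
        System.err.printf("WARN throttled: id=%s op=%s url=%s%n",
                requestId, operation, url);
        throttleEvents.incrementAndGet(); // update the FS-wide stat
        reporter.accept(requestId);       // notify any custom reporter
    }

    long throttleCount() { return throttleEvents.get(); }
}
```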

> hook up AwsSdkMetrics to hadoop metrics
> ---
>
> Key: HADOOP-13551
> URL: https://issues.apache.org/jira/browse/HADOOP-13551
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> There's an API in {{com.amazonaws.metrics.AwsSdkMetrics}} to give access to 
> the internal metrics of the AWS libraries. We might want to get at those








[jira] [Work logged] (HADOOP-17288) Use shaded guava from thirdparty

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17288?focusedWorklogId=496823&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496823
 ]

ASF GitHub Bot logged work on HADOOP-17288:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 18:17
Start Date: 07/Oct/20 18:17
Worklog Time Spent: 10m 
  Work Description: saintstack commented on a change in pull request #2342:
URL: https://github.com/apache/hadoop/pull/2342#discussion_r501216538



##
File path: hadoop-common-project/hadoop-auth/pom.xml
##
@@ -234,6 +235,24 @@
   
${basedir}/dev-support/findbugsExcludeFile.xml
 
   
+  
+com.google.code.maven-replacer-plugin
+replacer
+
+  
+replace-sources
+
+  false
+
+  
+  
+replace-test-sources
+
+  false
+
+  
+
+  

Review comment:
   A few questions so I can better understand the suggested approach:
   
   So source would continue to refer to unshaded guava? (unshaded guava 11?)
   
   Rather than rewrite once as this PR does, we'd rewrite on every build? How 
long does the rewrite take?
   
   hadoop-yarn-csi is bound to guava-20? Why is that? The grpc used? Interested 
in how a guava 20 in classpath won't mess up downstreamers who want to use 
something newer (or older).
   
   For 4., hopefully we can (later) add an enforcer so non-shaded references 
get flagged at build time. Rather than have hadoop depend on something it 
doesn't use, hopefully we can work on fixing the curator dependency to fix its 
pom?
   
   Thanks @vinayakumarb 







Issue Time Tracking
---

Worklog Id: (was: 496823)
Time Spent: 40m  (was: 0.5h)

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty






[GitHub] [hadoop] hadoop-yetus commented on pull request #2323: HADOOP-16830. Add public IOStatistics API.

2020-10-07 Thread GitBox


hadoop-yetus commented on pull request #2323:
URL: https://github.com/apache/hadoop/pull/2323#issuecomment-705108526


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 31s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 11 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  29m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  19m 42s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  16m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   2m 16s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   2m 13s |  |  trunk passed  |
   | -0 :warning: |  patch  |   2m 36s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | -1 :x: |  javac  |  18m 59s | 
[/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/9/artifact/out/diff-compile-javac-root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1.txt)
 |  root-jdkUbuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 generated 20 new + 2050 unchanged 
- 0 fixed = 2070 total (was 2050)  |
   | +1 :green_heart: |  compile  |  16m 58s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | -1 :x: |  javac  |  16m 58s | 
[/diff-compile-javac-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/9/artifact/out/diff-compile-javac-root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01.txt)
 |  root-jdkPrivateBuild-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 generated 20 new + 1946 
unchanged - 0 fixed = 1966 total (was 1946)  |
   | -0 :warning: |  checkstyle  |   0m 55s | 
[/diff-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/9/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 4 new + 141 
unchanged - 4 fixed = 145 total (was 145)  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 11s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   2m 22s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   9m 45s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 56s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 161m 40s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2323/9/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2323 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | 

[jira] [Work logged] (HADOOP-16830) Add public IOStatistics API

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16830?focusedWorklogId=496821&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496821
 ]

ASF GitHub Bot logged work on HADOOP-16830:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 18:14
Start Date: 07/Oct/20 18:14
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2323:
URL: https://github.com/apache/hadoop/pull/2323#issuecomment-705108526


   

[jira] [Work logged] (HADOOP-17281) Implement FileSystem.listStatusIterator() in S3AFileSystem

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17281?focusedWorklogId=496808&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496808
 ]

ASF GitHub Bot logged work on HADOOP-17281:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 18:01
Start Date: 07/Oct/20 18:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2354:
URL: https://github.com/apache/hadoop/pull/2354#issuecomment-705034766


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 11s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   6m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  34m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  25m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  29m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 46s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   8m 45s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 42s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  34m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  34m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  28m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  28m 38s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   4m 11s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/5/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 1 new + 112 unchanged - 0 fixed = 113 total (was 
112)  |
   | +1 :green_heart: |  mvnsite  |   4m 31s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  18m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   9m 35s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  14m 51s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   3m 10s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m  7s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  8s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 286m 52s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ipc.TestProtoBufRpc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2354 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 424cbd754c37 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2354: HADOOP-17281 Implement FileSystem.listStatusIterator() in S3AFileSystem

2020-10-07 Thread GitBox


hadoop-yetus removed a comment on pull request #2354:
URL: https://github.com/apache/hadoop/pull/2354#issuecomment-705034766


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 11s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   6m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  34m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  25m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  29m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 46s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   8m 45s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 42s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  34m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  34m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  28m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  28m 38s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   4m 11s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/5/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 1 new + 112 unchanged - 0 fixed = 113 total (was 
112)  |
   | +1 :green_heart: |  mvnsite  |   4m 31s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  18m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   9m 35s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  14m 51s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch failed.  |
   | +1 :green_heart: |  unit  |   3m 10s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m  7s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  8s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 286m 52s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ipc.TestProtoBufRpc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2354 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 424cbd754c37 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 16aea11c945 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Commented] (HADOOP-17288) Use shaded guava from thirdparty

2020-10-07 Thread Michael Stack (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17209740#comment-17209740
 ] 

Michael Stack commented on HADOOP-17288:


[~ayushtkn] talking out loud, 3.2.1 and 3.3.0 shipped w/ guava 27, right? This 
patch is for 3.4.0 and would revert the guava included by hadoop to guava 11 
from 27 (though it will not use guava 11 itself). A regression on a lib 
version is unexpected. The argument for the revert is that it will make it 
easier for folks running hadoop versions older than 3.2.1/3.3.0 to migrate, 
especially for downstreamers like Hive where guava 11 is deeply embedded. Do I 
have that right? If so, I agree w/ this rationale. I suggest that the revert 
needs to be broadcast (on the dev list?) since it is such an unusual move. Thanks.

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17292) Using lz4-java in Lz4Codec

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17292?focusedWorklogId=496793&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496793
 ]

ASF GitHub Bot logged work on HADOOP-17292:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 17:42
Start Date: 07/Oct/20 17:42
Worklog Time Spent: 10m 
  Work Description: viirya commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-705091425


   Rebased. Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 496793)
Time Spent: 2h  (was: 1h 50m)

> Using lz4-java in Lz4Codec
> --
>
> Key: HADOOP-17292
> URL: https://issues.apache.org/jira/browse/HADOOP-17292
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the lz4 codec, which has several disadvantages:
> It requires native libhadoop to be installed in the system LD_LIBRARY_PATH, and 
> it has to be installed separately on each node of the clusters, container 
> images, or local test environments, which adds huge complexity from a 
> deployment point of view. In some environments, it requires compiling the 
> natives from sources, which is non-trivial. Also, this approach is platform 
> dependent; the binary may not work on a different platform, so it requires 
> recompilation.
> It requires extra configuration of java.library.path to load the natives, and 
> it results in higher application deployment and maintenance costs for users.
> Projects such as Spark use [lz4-java|https://github.com/lz4/lz4-java], which 
> is a JNI-based implementation. It contains native binaries in the jar file, and 
> it can automatically load the native binaries into the JVM from the jar without 
> any setup. If a native implementation cannot be found for a platform, it can 
> fall back to a pure-Java implementation of lz4.
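The "try the native binary first, fall back to pure Java" loading behavior described above can be sketched in plain Java. This is a minimal illustration using only the standard library; the library name "lz4" and the `Codec` interface are hypothetical stand-ins, not lz4-java's actual API (lz4-java additionally extracts the bundled native binary from its own jar before loading it):

```java
// Sketch of the native-with-pure-Java-fallback loading pattern.
// "lz4" as a library name and the Codec interface are illustrative only.
public class FallbackCodecLoader {

    interface Codec {
        String name();
    }

    static Codec load() {
        try {
            // Attempt to bind the native implementation first.
            System.loadLibrary("lz4");
            return () -> "native";
        } catch (UnsatisfiedLinkError e) {
            // No native binary for this platform: use the
            // pure-Java implementation instead of failing.
            return () -> "pure-java";
        }
    }

    public static void main(String[] args) {
        System.out.println("selected codec: " + load().name());
    }
}
```

Because the fallback is selected at runtime, no java.library.path configuration is needed on hosts without the native library, which is the deployment simplification the issue argues for.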



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] viirya commented on pull request #2350: HADOOP-17292. Using lz4-java in Lz4Codec

2020-10-07 Thread GitBox


viirya commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-705091425


   Rebased. Thanks.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ferhui commented on a change in pull request #2363: HDFS-13293. RBF: The RouterRPCServer should transfer client IP via CallerContext to NamenodeRpcServer

2020-10-07 Thread GitBox


ferhui commented on a change in pull request #2363:
URL: https://github.com/apache/hadoop/pull/2363#discussion_r501185702



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
##
@@ -1901,4 +1904,27 @@ private DFSClient getFileDFSClient(final String path) {
 }
 return null;
   }
+
+  @Test
+  public void testMkdirsWithCallerContext() throws IOException {
+    GenericTestUtils.LogCapturer auditlog =
+        GenericTestUtils.LogCapturer.captureLogs(FSNamesystem.auditLog);
+
+    // Current callerContext is null
+    assertNull(CallerContext.getCurrent());
+
+    // Set client context
+    CallerContext.setCurrent(
+        new CallerContext.Builder("clientContext").build());
+
+    // Create a directory via the router
+    String dirPath = "/test_dir_with_callercontext";
+    FsPermission permission = new FsPermission("755");
+    routerProtocol.mkdirs(dirPath, permission, false);
+
+    // The audit log should contain "callerContext=clientContext,clientIp:"
+    assertTrue(auditlog.getOutput()
+        .contains("callerContext=clientContext,clientIp:"));

Review comment:
   Or just keep it that way, and do not modify UT





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17292) Using lz4-java in Lz4Codec

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17292?focusedWorklogId=496770&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496770
 ]

ASF GitHub Bot logged work on HADOOP-17292:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 17:19
Start Date: 07/Oct/20 17:19
Worklog Time Spent: 10m 
  Work Description: dbtsai commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-705079529


   @viirya can you rebase master?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 496770)
Time Spent: 1h 50m  (was: 1h 40m)

> Using lz4-java in Lz4Codec
> --
>
> Key: HADOOP-17292
> URL: https://issues.apache.org/jira/browse/HADOOP-17292
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the lz4 codec, which has several disadvantages:
> It requires native libhadoop to be installed in the system LD_LIBRARY_PATH, and 
> it has to be installed separately on each node of the clusters, container 
> images, or local test environments, which adds huge complexity from a 
> deployment point of view. In some environments, it requires compiling the 
> natives from sources, which is non-trivial. Also, this approach is platform 
> dependent; the binary may not work on a different platform, so it requires 
> recompilation.
> It requires extra configuration of java.library.path to load the natives, and 
> it results in higher application deployment and maintenance costs for users.
> Projects such as Spark use [lz4-java|https://github.com/lz4/lz4-java], which 
> is a JNI-based implementation. It contains native binaries in the jar file, and 
> it can automatically load the native binaries into the JVM from the jar without 
> any setup. If a native implementation cannot be found for a platform, it can 
> fall back to a pure-Java implementation of lz4.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dbtsai commented on pull request #2350: HADOOP-17292. Using lz4-java in Lz4Codec

2020-10-07 Thread GitBox


dbtsai commented on pull request #2350:
URL: https://github.com/apache/hadoop/pull/2350#issuecomment-705079529


   @viirya can you rebase master?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2365: HDFS-15610 Reduced datanode upgrade/hardlink thread from 12 to 6

2020-10-07 Thread GitBox


hadoop-yetus commented on pull request #2365:
URL: https://github.com/apache/hadoop/pull/2365#issuecomment-705079088


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 40s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 50s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  8s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  6s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  3s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 47s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 45s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 45s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 12s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 109m 15s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2365/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | +1 :green_heart: |  asflicense  |   0m 35s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 198m 35s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.TestFileChecksum |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestSafeModeWithStripedFile |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2365/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2365 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux 98fe6dbf219b 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 82522d60fb5 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2365/3/testReport/ |
   | Max. process+thread count | 3199 

[GitHub] [hadoop] ferhui commented on a change in pull request #2363: HDFS-13293. RBF: The RouterRPCServer should transfer client IP via CallerContext to NamenodeRpcServer

2020-10-07 Thread GitBox


ferhui commented on a change in pull request #2363:
URL: https://github.com/apache/hadoop/pull/2363#discussion_r501176989



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
##
@@ -1901,4 +1904,27 @@ private DFSClient getFileDFSClient(final String path) {
 }
 return null;
   }
+
+  @Test
+  public void testMkdirsWithCallerContext() throws IOException {
+    GenericTestUtils.LogCapturer auditlog =
+        GenericTestUtils.LogCapturer.captureLogs(FSNamesystem.auditLog);
+
+    // Current callerContext is null
+    assertNull(CallerContext.getCurrent());
+
+    // Set client context
+    CallerContext.setCurrent(
+        new CallerContext.Builder("clientContext").build());
+
+    // Create a directory via the router
+    String dirPath = "/test_dir_with_callercontext";
+    FsPermission permission = new FsPermission("755");
+    routerProtocol.mkdirs(dirPath, permission, false);
+
+    // The audit log should contain "callerContext=clientContext,clientIp:"
+    assertTrue(auditlog.getOutput()
+        .contains("callerContext=clientContext,clientIp:"));

Review comment:
   Had thought about this. DFSClient & Client do not expose the IP, and 
TestAuditLogger & TestAuditLogs do not check the client IP. So do you have any 
suggestions?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on pull request #2297: HADOOP-17125. Using snappy-java in SnappyCodec

2020-10-07 Thread GitBox


sunchao commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-705055574


   Thanks @steveloughran - could you assign the JIRA to @viirya ? 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=496735&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496735
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 16:36
Start Date: 07/Oct/20 16:36
Worklog Time Spent: 10m 
  Work Description: sunchao commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-705055574


   Thanks @steveloughran - could you assign the JIRA to @viirya ? 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 496735)
Time Spent: 25h  (was: 24h 50m)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Assignee: DB Tsai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 25h
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from sources, which is non-trivial. Also, this approach 
> is platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance 
> costs for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It contains native binaries for Linux, Mac, and IBM in the jar 
> file, and it can automatically load the native binaries into the JVM from the 
> jar without any setup. If a native implementation cannot be found for a 
> platform, it can fall back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].
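From a caller's point of view, a codec like this is just a byte-array roundtrip: snappy-java exposes the same shape via `Snappy.compress(byte[])` / `Snappy.uncompress(byte[])`. The sketch below illustrates that roundtrip using `java.util.zip` purely as a standard-library stand-in, so it runs without any extra jars; it is not snappy's algorithm, only the same API pattern:

```java
// Roundtrip compress/decompress sketch. java.util.zip (DEFLATE) stands in
// for snappy here so the example is self-contained; the calling pattern
// mirrors Snappy.compress(byte[]) / Snappy.uncompress(byte[]).
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class CodecRoundTrip {

    static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    static byte[] decompress(byte[] input) {
        Inflater inflater = new Inflater();
        inflater.setInput(input);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[256];
        try {
            while (!inflater.finished()) {
                out.write(buf, 0, inflater.inflate(buf));
            }
        } catch (DataFormatException e) {
            throw new IllegalStateException("corrupt compressed stream", e);
        }
        inflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] original = "snappy snappy snappy".getBytes(StandardCharsets.UTF_8);
        byte[] restored = decompress(compress(original));
        System.out.println(new String(restored, StandardCharsets.UTF_8));
    }
}
```

Because the whole exchange is byte arrays in JVM memory, no native library path setup is required by the caller, which is the usability point the issue makes.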



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17298) Backslash in username causes build failure in the environment started by start-build-env.sh.

2020-10-07 Thread Takeru Kuramoto (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takeru Kuramoto updated HADOOP-17298:
-
Status: Patch Available  (was: Open)

> Backslash in username causes build failure in the environment started by 
> start-build-env.sh.
> 
>
> Key: HADOOP-17298
> URL: https://issues.apache.org/jira/browse/HADOOP-17298
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Takeru Kuramoto
>Assignee: Takeru Kuramoto
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If a username includes a backslash, `mvn clean install` fails in an 
> environment started by start-build-env.sh.
> Here is my result in Amazon WorkSpaces.
>  
> {code:java}
> CORPbtkuramototkr@b8e750b1e386:/home/CORP\btkuramototkr/hadoop/hadoop-build-tools$ mvn clean install
> /usr/bin/mvn: 1: cd: can't cd to 
> /home/CORtkuramototkr/hadoop/hadoop-build-tools/..
> [INFO] Scanning for projects...
> [INFO] 
> [INFO] < org.apache.hadoop:hadoop-build-tools 
> >
> [INFO] Building Apache Hadoop Build Tools 3.4.0-SNAPSHOT
> [INFO] [ jar 
> ]-
> [INFO] 
> [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-build-tools 
> ---
> [INFO] 
> [INFO] --- maven-resources-plugin:3.0.1:copy-resources (copy-resources) @ 
> hadoop-build-tools ---
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 1.074 s
> [INFO] Finished at: 2020-10-05T02:51:53Z
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-resources-plugin:3.0.1:copy-resources 
> (copy-resources) on project hadoop-build-tools: Cannot create resource output 
> directory: 
> /home/CORP/btkuramototkr/hadoop/hadoop-build-tools/target/extra-resources -> 
> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
>  
> This problem can be solved by adding an option to change the path to maven's 
> local repository in the container so that users can remove backslashes from 
> their username.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] rakeshadr commented on pull request #2366: HDFS-15253 Default checkpoint transfer speed, 50mb per second

2020-10-07 Thread GitBox


rakeshadr commented on pull request #2366:
URL: https://github.com/apache/hadoop/pull/2366#issuecomment-705048932


   +1 LGTM, test case failures are unrelated to the patch. Will merge it 
shortly.
   
   Thanks @karthikhw for the contribution.
   Thanks @mukul1987 for the reviews.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17298) Backslash in username causes build failure in the environment started by start-build-env.sh.

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17298?focusedWorklogId=496717&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496717
 ]

ASF GitHub Bot logged work on HADOOP-17298:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 16:13
Start Date: 07/Oct/20 16:13
Worklog Time Spent: 10m 
  Work Description: tkuramoto33 opened a new pull request #2367:
URL: https://github.com/apache/hadoop/pull/2367


   If a username includes a backslash, `mvn clean install` fails in an 
environment started by start-build-env.sh.
   
   The causes of this problem are as follows:
   1. start-build-env.sh sets the home directory in the Docker container to 
/home/${USER_NAME}. 
   2. Maven does not support path names such as "/home/CORP\name/". This type 
of path name appears in Amazon WorkSpaces. 
   
   Therefore, I have made it possible for users to change their home directory 
in the Docker container.
   
   I have tested this patch in Amazon WorkSpaces.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 496717)
Remaining Estimate: 0h
Time Spent: 10m

> Backslash in username causes build failure in the environment started by 
> start-build-env.sh.
> 
>
> Key: HADOOP-17298
> URL: https://issues.apache.org/jira/browse/HADOOP-17298
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Takeru Kuramoto
>Assignee: Takeru Kuramoto
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If a username includes a backslash, `mvn clean install` fails in an 
> environment started by start-build-env.sh.
> Here is my result in Amazon WorkSpaces.
>  
> {code:java}
> CORPbtkuramototkr@b8e750b1e386:/home/CORP\btkuramototkr/hadoop/hadoop-build-tools$ mvn clean install
> /usr/bin/mvn: 1: cd: can't cd to 
> /home/CORtkuramototkr/hadoop/hadoop-build-tools/..
> [INFO] Scanning for projects...
> [INFO] 
> [INFO] < org.apache.hadoop:hadoop-build-tools 
> >
> [INFO] Building Apache Hadoop Build Tools 3.4.0-SNAPSHOT
> [INFO] [ jar 
> ]-
> [INFO] 
> [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-build-tools 
> ---
> [INFO] 
> [INFO] --- maven-resources-plugin:3.0.1:copy-resources (copy-resources) @ 
> hadoop-build-tools ---
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 1.074 s
> [INFO] Finished at: 2020-10-05T02:51:53Z
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-resources-plugin:3.0.1:copy-resources 
> (copy-resources) on project hadoop-build-tools: Cannot create resource output 
> directory: 
> /home/CORP/btkuramototkr/hadoop/hadoop-build-tools/target/extra-resources -> 
> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
>  
> This problem can be solved by adding an option to change the path to maven's 
> local repository in the container so that users can remove backslashes from 
> their username.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17298) Backslash in username causes build failure in the environment started by start-build-env.sh.

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17298:

Labels: pull-request-available  (was: )

> Backslash in username causes build failure in the environment started by 
> start-build-env.sh.
> 
>
> Key: HADOOP-17298
> URL: https://issues.apache.org/jira/browse/HADOOP-17298
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Takeru Kuramoto
>Assignee: Takeru Kuramoto
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If a username includes a backslash, `mvn clean install` fails in an 
> environment started by start-build-env.sh.
> Here is my result in Amazon WorkSpaces.
>  
> {code:java}
> CORPbtkuramototkr@b8e750b1e386:/home/CORP\btkuramototkr/hadoop/hadoop-build-to
> ols$ mvn clean install
> /usr/bin/mvn: 1: cd: can't cd to 
> /home/CORtkuramototkr/hadoop/hadoop-build-tools/..
> [INFO] Scanning for projects...
> [INFO] 
> [INFO] < org.apache.hadoop:hadoop-build-tools 
> >
> [INFO] Building Apache Hadoop Build Tools 3.4.0-SNAPSHOT
> [INFO] [ jar 
> ]-
> [INFO] 
> [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-build-tools 
> ---
> [INFO] 
> [INFO] --- maven-resources-plugin:3.0.1:copy-resources (copy-resources) @ 
> hadoop-build-tools ---
> [INFO] Using 'UTF-8' encoding to copy filtered resources.
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 1.074 s
> [INFO] Finished at: 2020-10-05T02:51:53Z
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-resources-plugin:3.0.1:copy-resources 
> (copy-resources) on project hadoop-build-tools: Cannot create resource output 
> directory: 
> /home/CORP/btkuramototkr/hadoop/hadoop-build-tools/target/extra-resources -> 
> [Help 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {code}
>  
> This problem can be solved by adding an option to change the path to maven's 
> local repository in the container so that users can remove backslashes from 
> their username.
>  






[GitHub] [hadoop] goiri commented on a change in pull request #2363: HDFS-13293. RBF: The RouterRPCServer should transfer client IP via CallerContext to NamenodeRpcServer

2020-10-07 Thread GitBox


goiri commented on a change in pull request #2363:
URL: https://github.com/apache/hadoop/pull/2363#discussion_r501138092



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
##
@@ -1901,4 +1904,27 @@ private DFSClient getFileDFSClient(final String path) {
 }
 return null;
   }
+
+  @Test
+  public void testMkdirsWithCallerContext() throws IOException {
+GenericTestUtils.LogCapturer auditlog =
+GenericTestUtils.LogCapturer.captureLogs(FSNamesystem.auditLog);
+
+// Current callerContext is null
+assertNull(CallerContext.getCurrent());
+
+// Set client context
+CallerContext.setCurrent(
+new CallerContext.Builder("clientContext").build());
+
+// Create a directory via the router
+String dirPath = "/test_dir_with_callercontext";
+FsPermission permission = new FsPermission("755");
+routerProtocol.mkdirs(dirPath, permission, false);
+
+// The audit log should contain "callerContext=clientContext,clientIp:"
+assertTrue(auditlog.getOutput()
+.contains("callerContext=clientContext,clientIp:"));

Review comment:
   Is there any way we can check for the actual IP?








[GitHub] [hadoop] tkuramoto33 opened a new pull request #2367: HADOOP-17298. Backslash in username causes build failure in the environment started by start-build-env.sh.

2020-10-07 Thread GitBox


tkuramoto33 opened a new pull request #2367:
URL: https://github.com/apache/hadoop/pull/2367


   If a username includes a backslash, `mvn clean install` fails in an 
environment started by start-build-env.sh.
   
   The causes of this problem are as follows:
   1. start-build-env.sh sets the home directory in the Docker container to 
/home/${USER_NAME}. 
   2. Maven does not support path names such as "/home/CORP\name/". Path names 
of this form appear in Amazon WorkSpaces. 
   
   Therefore, I have made it possible for users to change their home directory 
in the Docker container.
   
   I have tested this patch in Amazon WorkSpaces.






[GitHub] [hadoop] goiri commented on a change in pull request #2363: HDFS-13293. RBF: The RouterRPCServer should transfer client ip via CallerContext to NamenodeRpcServer

2020-10-07 Thread GitBox


goiri commented on a change in pull request #2363:
URL: https://github.com/apache/hadoop/pull/2363#discussion_r501137208



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
##
@@ -116,10 +118,13 @@
   /** Optional perf monitor. */
   private final RouterRpcMonitor rpcMonitor;
 
+  private final Configuration clientConf;
+
   /** Pattern to parse a stack trace line. */
   private static final Pattern STACK_TRACE_PATTERN =
   Pattern.compile("\\tat (.*)\\.(.*)\\((.*):(\\d*)\\)");
 
+  private static final String CLIENT_IP_STR = "clientIp";

Review comment:
   Makes sense, let's just keep in mind HDFS-13248 when doing this.
   So far it looks like is covered.








[jira] [Work logged] (HADOOP-17281) Implement FileSystem.listStatusIterator() in S3AFileSystem

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17281?focusedWorklogId=496710&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496710
 ]

ASF GitHub Bot logged work on HADOOP-17281:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 16:01
Start Date: 07/Oct/20 16:01
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2354:
URL: https://github.com/apache/hadoop/pull/2354#issuecomment-705034766


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 11s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   6m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  34m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  25m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  29m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 46s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   8m 45s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 42s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  34m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  34m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  28m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  28m 38s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   4m 11s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/5/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 1 new + 112 unchanged - 0 fixed = 113 total (was 
112)  |
   | +1 :green_heart: |  mvnsite  |   4m 31s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  18m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   9m 35s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  14m 51s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   3m 10s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m  7s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  8s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 286m 52s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ipc.TestProtoBufRpc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2354 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 424cbd754c37 4.15.0-112-generic 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2354: HADOOP-17281 Implement FileSystem.listStatusIterator() in S3AFileSystem

2020-10-07 Thread GitBox


hadoop-yetus commented on pull request #2354:
URL: https://github.com/apache/hadoop/pull/2354#issuecomment-705034766


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 11s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   6m 16s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  36m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  34m 39s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  25m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   3m 48s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  29m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 46s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   8m 45s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 42s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   4m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  34m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  34m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  28m 38s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  28m 38s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   4m 11s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/5/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 1 new + 112 unchanged - 0 fixed = 113 total (was 
112)  |
   | +1 :green_heart: |  mvnsite  |   4m 31s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  18m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 54s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   9m 35s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  14m 51s | 
[/patch-unit-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common in the patch passed.  |
   | +1 :green_heart: |  unit  |   3m 10s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m  7s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m  8s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 286m 52s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.ipc.TestProtoBufRpc |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2354 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux 424cbd754c37 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 16aea11c945 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Commented] (HADOOP-17126) implement non-guava Precondition checkNotNull

2020-10-07 Thread Ahmed Hussein (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17209596#comment-17209596
 ] 

Ahmed Hussein commented on HADOOP-17126:


Hi [~ayushsaxena], can you please take a look at the patch?

The failed unit test is unrelated to the code change.

> implement non-guava Precondition checkNotNull
> -
>
> Key: HADOOP-17126
> URL: https://issues.apache.org/jira/browse/HADOOP-17126
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
> Attachments: HADOOP-17126.001.patch, HADOOP-17126.002.patch
>
>
> In order to replace Guava Preconditions, we need to implement our own 
> versions of the API.
>  This Jira is to create {{checkNotNull}} in a new package dubbed {{unguava}}.
>  +The plan is as follows+
>  * create a new {{package org.apache.hadoop.util.unguava;}}
>  * {{create class Validate}}
>  * implement  {{package org.apache.hadoop.util.unguava.Validate;}} with the 
> following interface
>  ** {{checkNotNull(final T obj)}}
>  ** {{checkNotNull(final T reference, final Object errorMessage)}}
>  ** {{checkNotNull(final T obj, final String message, final Object... 
> values)}}
>  ** {{checkNotNull(final T obj, final Supplier msgSupplier)}}
>  * Guava Preconditions used {{String.lenientFormat}}, which suppressed 
> exceptions caused by string formatting of the exception message. So, in 
> order to avoid changing the behavior, the implementation catches exceptions 
> triggered by building the message (IllegalFormat, InsufficientArg, 
> NullPointer, etc.)
>  * After merging the new class, we can replace 
> {{guava.Preconditions.checkNotNull}} with {{unguava.Validate.checkNotNull}}.
>  * We need the change to go into trunk, 3.1, 3.2, and 3.3
>  
> Similar Jiras will be created to implement checkState, checkArgument, 
> checkIndex
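
A minimal standalone sketch of the proposed interface. The real class is planned as {{org.apache.hadoop.util.unguava.Validate}}; this version, including its {{lenientFormat}} helper, is an illustrative assumption, not the actual patch.

```java
import java.util.Arrays;
import java.util.function.Supplier;

// Standalone sketch of the proposed checkNotNull API; the real class is
// planned as org.apache.hadoop.util.unguava.Validate and details may differ.
public final class Validate {
    private Validate() {
    }

    public static <T> T checkNotNull(final T obj) {
        if (obj == null) {
            throw new NullPointerException();
        }
        return obj;
    }

    public static <T> T checkNotNull(final T obj, final String message,
                                     final Object... values) {
        if (obj == null) {
            throw new NullPointerException(lenientFormat(message, values));
        }
        return obj;
    }

    public static <T> T checkNotNull(final T obj,
                                     final Supplier<String> msgSupplier) {
        if (obj == null) {
            throw new NullPointerException(msgSupplier.get());
        }
        return obj;
    }

    // Mirrors the leniency described above: a broken format string must not
    // mask the real NullPointerException.
    static String lenientFormat(final String message, final Object... values) {
        try {
            return String.format(message, values);
        } catch (RuntimeException e) {
            return message + " " + Arrays.toString(values);
        }
    }
}
```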






[jira] [Work logged] (HADOOP-17281) Implement FileSystem.listStatusIterator() in S3AFileSystem

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17281?focusedWorklogId=496619&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496619
 ]

ASF GitHub Bot logged work on HADOOP-17281:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 13:58
Start Date: 07/Oct/20 13:58
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on pull request #2354:
URL: https://github.com/apache/hadoop/pull/2354#issuecomment-704954857


   Thanks @steveloughran 





Issue Time Tracking
---

Worklog Id: (was: 496619)
Time Spent: 3h  (was: 2h 50m)

> Implement FileSystem.listStatusIterator() in S3AFileSystem
> --
>
> Key: HADOOP-17281
> URL: https://issues.apache.org/jira/browse/HADOOP-17281
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> Currently S3AFileSystem only implements the listStatus() API, which returns an 
> array. Once we implement listStatusIterator(), clients can benefit from 
> the asynchronous listing added recently in 
> https://issues.apache.org/jira/browse/HADOOP-17074 by performing work 
> on files while still iterating over them.
>  
> CC [~stevel]
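> 
> A self-contained sketch of the iteration pattern described above. 
> {{SimpleRemoteIterator}} and {{ListingDemo}} are stand-ins invented here, 
> since the real {{RemoteIterator}} and S3A listing code live inside Hadoop.

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Minimal stand-in for Hadoop's RemoteIterator so the sketch is
// self-contained; the real interface lives in org.apache.hadoop.fs.
interface SimpleRemoteIterator<T> {
    boolean hasNext();
    T next();
}

public class ListingDemo {
    // Wrap one batch of "file statuses" (plain Strings here) in an iterator.
    static SimpleRemoteIterator<String> listStatusIterator(List<String> batch) {
        Iterator<String> inner = batch.iterator();
        return new SimpleRemoteIterator<String>() {
            public boolean hasNext() {
                return inner.hasNext();
            }
            public String next() {
                if (!inner.hasNext()) {
                    // Iterator contract: an exhausted iterator throws
                    // NoSuchElementException (see HADOOP-17300).
                    throw new NoSuchElementException("no more entries");
                }
                return inner.next();
            }
        };
    }

    public static void main(String[] args) {
        SimpleRemoteIterator<String> it =
            listStatusIterator(List.of("a.txt", "b.txt"));
        while (it.hasNext()) {
            // In the real implementation, work can start on each entry while
            // later pages are still being fetched asynchronously.
            System.out.println(it.next());
        }
    }
}
```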






[GitHub] [hadoop] mukund-thakur commented on pull request #2354: HADOOP-17281 Implement FileSystem.listStatusIterator() in S3AFileSystem

2020-10-07 Thread GitBox


mukund-thakur commented on pull request #2354:
URL: https://github.com/apache/hadoop/pull/2354#issuecomment-704954857


   Thanks @steveloughran 






[jira] [Work logged] (HADOOP-17038) Support positional read in AbfsInputStream

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17038?focusedWorklogId=496612&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496612
 ]

ASF GitHub Bot logged work on HADOOP-17038:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 13:44
Start Date: 07/Oct/20 13:44
Worklog Time Spent: 10m 
  Work Description: anoopsjohn commented on pull request #2206:
URL: https://github.com/apache/hadoop/pull/2206#issuecomment-704945504


   Oh, sorry. I closed my patch branch. I will provide a new version based on 
the openFile() approach as discussed above, and will include all the tests in 
it too. I have started working on the new patch.





Issue Time Tracking
---

Worklog Id: (was: 496612)
Time Spent: 40m  (was: 0.5h)

> Support positional read in AbfsInputStream
> --
>
> Key: HADOOP-17038
> URL: https://issues.apache.org/jira/browse/HADOOP-17038
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Major
>  Labels: HBase, abfsactive, pull-request-available
> Attachments: HBase Perf Test Report.xlsx, screenshot-1.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Right now it will seek to the position, read, and then seek back to the 
> old position (as per the implementation in the super class).
> In HBase-style workloads we rely mostly on short preads (like 64 KB 
> by default). So it would be ideal to support a pure positional-read API that 
> does not even keep the data in a buffer but only reads the data 
> asked for by the caller (not reading ahead more data per the read-size 
> config).
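> 
> The pread semantics described above (read at an explicit offset without 
> disturbing the stream position) can be illustrated with plain java.nio. This 
> is a sketch of the contract only, not the AbfsInputStream implementation.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class PReadDemo {
    // Positional read: fetch up to len bytes at 'position' without moving
    // the channel's current position (no seek/read/seek-back dance).
    static byte[] pread(FileChannel ch, long position, int len) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(len);
        int n = ch.read(buf, position);   // does not update ch.position()
        byte[] out = new byte[Math.max(n, 0)];
        buf.flip();
        buf.get(out);
        return out;
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("pread", ".txt");
        Files.write(p, "0123456789".getBytes(StandardCharsets.UTF_8));
        try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
            long before = ch.position();
            String s = new String(pread(ch, 4, 3), StandardCharsets.UTF_8);
            System.out.println(s);                       // 456
            System.out.println(ch.position() == before); // true: untouched
        }
        Files.delete(p);
    }
}
```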






[GitHub] [hadoop] anoopsjohn commented on pull request #2206: HADOOP-17038 Support positional read in AbfsInputStream

2020-10-07 Thread GitBox


anoopsjohn commented on pull request #2206:
URL: https://github.com/apache/hadoop/pull/2206#issuecomment-704945504


   Oh, sorry. I closed my patch branch. I will provide a new version based on 
the openFile() approach as discussed above, and will include all the tests in 
it too. I have started working on the new patch.






[GitHub] [hadoop] coder-chenzhi commented on pull request #2358: HADOOP-17295 Move dedicated pre-logging statements into existing logg…

2020-10-07 Thread GitBox


coder-chenzhi commented on pull request #2358:
URL: https://github.com/apache/hadoop/pull/2358#issuecomment-704938615


   @steveloughran I appreciate your review. Your comment on the logging 
overhead is valuable to me. 
   
   I have another question on which I would like to hear your thoughts. Do you 
prefer to reduce accumulated overhead or average overhead? Some simple logging 
calls are on the hot path and executed frequently, so their accumulated 
overhead is larger than that of complex logging calls that are rarely executed.






[jira] [Work logged] (HADOOP-17295) Move dedicated pre-logging statements into existing logging guards

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17295?focusedWorklogId=496604&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496604
 ]

ASF GitHub Bot logged work on HADOOP-17295:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 13:32
Start Date: 07/Oct/20 13:32
Worklog Time Spent: 10m 
  Work Description: coder-chenzhi commented on pull request #2358:
URL: https://github.com/apache/hadoop/pull/2358#issuecomment-704938615


   @steveloughran I appreciate your review. Your comment on the logging 
overhead is valuable to me. 
   
   I have another question on which I would like to hear your thoughts. Do you 
prefer to reduce accumulated overhead or average overhead? Some simple logging 
calls are on the hot path and executed frequently, so their accumulated 
overhead is larger than that of complex logging calls that are rarely executed.





Issue Time Tracking
---

Worklog Id: (was: 496604)
Time Spent: 1h  (was: 50m)

> Move dedicated pre-logging statements into existing logging guards
> --
>
> Key: HADOOP-17295
> URL: https://issues.apache.org/jira/browse/HADOOP-17295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chen Zhi
>Assignee: Chen Zhi
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> I found some cases where pre-processing statements dedicated to logging 
> calls are not guarded by existing logging guards. Most of them are easy to 
> fix, and the performance and maintainability of these logging calls can be 
> improved to some extent, so I created a PR to fix them.
> These issues were detected by a static analysis tool I wrote. The 
> tool extracts all the statements dedicated to each debug-logging call 
> (i.e., statements whose results are only used by debug-logging calls). 
> Debug logs incur overhead in production, such as 
> string concatenation and method calls in the parameters of logging calls as 
> well as pre-processing statements, and I want to perform a systematic 
> evaluation of the overhead of debug-logging calls in production.
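> 
> The refactoring described above can be sketched as follows, using 
> java.util.logging as a stand-in for the SLF4J loggers in the Hadoop code 
> base; class and method names here are invented for illustration.

```java
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustration of moving a dedicated pre-processing statement inside an
// existing logging guard; java.util.logging stands in for SLF4J here.
public class GuardDemo {
    private static final Logger LOG = Logger.getLogger("GuardDemo");

    // Helper dedicated to the log message; its cost should only be paid
    // when the log level is actually enabled.
    static String summarize(List<String> blocks) {
        return String.join(",", blocks);
    }

    // Before: summarize() runs even when FINE logging is disabled.
    static void logBlocksUnguarded(List<String> blocks) {
        String summary = summarize(blocks);
        LOG.fine("blocks: " + summary);
    }

    // After: the pre-processing statement moves inside the existing guard,
    // so its cost is paid only when the level is enabled.
    static void logBlocksGuarded(List<String> blocks) {
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("blocks: " + summarize(blocks));
        }
    }
}
```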






[jira] [Updated] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-10-07 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17125:
---
Fix Version/s: 3.4.0

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Assignee: DB Tsai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>  Time Spent: 24h 50m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for snappy codec which has several 
> disadvantages:
>  * It requires native *libhadoop* and *libsnappy* to be installed in system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments which adds huge 
> complexities from deployment point of view. In some environments, it requires 
> compiling the natives from sources which is non-trivial. Also, this approach 
> is platform dependent; the binary may not work in different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance costs 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|[https://github.com/xerial/snappy-java]] which is JNI-based 
> implementation. It contains native binaries for Linux, Mac, and IBM in jar 
> file, and it can automatically load the native binaries into JVM from jar 
> without any setup. If a native implementation can not be found for a 
> platform, it can fallback to pure-java implementation of snappy based on 
> [aircompressor|[https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy]].
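The "bundled native binary with pure-Java fallback" loading strategy described above can be sketched as follows. This is an illustrative assumption, not snappy-java's actual loader code: the class and method names (`NativeOrPureJava`, `loadCodec`, `Codec`) are hypothetical, and the fallback codec is a placeholder for aircompressor's Snappy port.

```java
import java.util.Arrays;

// Hedged sketch (not snappy-java's real loader): try a bundled native
// library first, and fall back to a pure-Java codec when no native
// binary exists for the current platform.
public class NativeOrPureJava {

    interface Codec {
        byte[] compress(byte[] data);
    }

    // Stand-in for the pure-Java fallback; the real fallback in
    // snappy-java is based on aircompressor's Snappy implementation.
    static final class PureJavaCodec implements Codec {
        @Override
        public byte[] compress(byte[] data) {
            return Arrays.copyOf(data, data.length); // placeholder transform
        }
    }

    static Codec loadCodec() {
        try {
            // A real loader extracts the platform-specific binary bundled
            // inside the jar and loads it; this sketch just attempts a
            // library that is normally absent so the fallback path runs.
            System.loadLibrary("snappyjava");
            return new PureJavaCodec(); // would return a JNI-backed codec
        } catch (UnsatisfiedLinkError e) {
            // No native binary for this platform: fall back to pure Java,
            // so no LD_LIBRARY_PATH or java.library.path setup is needed.
            return new PureJavaCodec();
        }
    }

    public static void main(String[] args) {
        Codec codec = loadCodec();
        byte[] out = codec.compress(new byte[]{1, 2, 3});
        System.out.println(out.length);
    }
}
```

The point of the pattern is that deployment needs no per-node native installation: either path yields a working codec without any environment configuration.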






[jira] [Resolved] (HADOOP-17300) FileSystem.DirListingIterator.next() call should return NoSuchElementException

2020-10-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17300.
-
Fix Version/s: 3.3.1
   Resolution: Fixed

Fixed in HADOOP-17281, because the changes to the contract tests there found 
the bug.

> FileSystem.DirListingIterator.next() call should return NoSuchElementException
> --
>
> Key: HADOOP-17300
> URL: https://issues.apache.org/jira/browse/HADOOP-17300
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common, fs
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
> Fix For: 3.3.1
>
>
> FileSystem.DirListingIterator.next() should throw 
> NoSuchElementException rather than IllegalStateException
>  
> Stacktrace for new test failure:
>  
> {code:java}
> java.lang.IllegalStateException: No more items in 
> iteratorjava.lang.IllegalStateException: No more items in iterator at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:507) at 
> org.apache.hadoop.fs.FileSystem$DirListingIterator.next(FileSystem.java:2232) 
> at 
> org.apache.hadoop.fs.FileSystem$DirListingIterator.next(FileSystem.java:2205) 
> at 
> org.apache.hadoop.fs.contract.ContractTestUtils.iteratorToListThroughNextCallsAlone(ContractTestUtils.java:1495)
>  at 
> org.apache.hadoop.fs.contract.AbstractContractGetFileStatusTest.testListStatusIteratorFile(AbstractContractGetFileStatusTest.java:366)
> {code}
>  
> CC [~ste...@apache.org]
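The fix requested above is just the standard `java.util.Iterator` contract: `next()` past the last element must throw `NoSuchElementException`, not the `IllegalStateException` that `Preconditions.checkState` raises in the stack trace. A minimal self-contained sketch of a compliant iterator (not Hadoop's `DirListingIterator` itself):

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Minimal sketch of the java.util.Iterator contract the issue asks for:
// next() past the end throws NoSuchElementException, the exception type
// callers (and the contract tests) are entitled to expect.
public class IteratorContractSketch implements Iterator<String> {
    private boolean consumed = false;

    @Override
    public boolean hasNext() {
        return !consumed;
    }

    @Override
    public String next() {
        if (!hasNext()) {
            // Contract-mandated type, instead of
            // IllegalStateException from Preconditions.checkState.
            throw new NoSuchElementException("No more items in iterator");
        }
        consumed = true;
        return "only-element";
    }

    public static void main(String[] args) {
        Iterator<String> it = new IteratorContractSketch();
        System.out.println(it.next());
        try {
            it.next();
        } catch (NoSuchElementException expected) {
            System.out.println("NoSuchElementException, as required");
        }
    }
}
```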






[jira] [Resolved] (HADOOP-17281) Implement FileSystem.listStatusIterator() in S3AFileSystem

2020-10-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17281?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17281.
-
Fix Version/s: 3.3.1
   Resolution: Fixed

merged into 3.1+; looking forward to this. The HADOOP-16380 stats code will 
need to be wired up to this; I'm not doing it *yet* as I don't want to rebase 
everything there

> Implement FileSystem.listStatusIterator() in S3AFileSystem
> --
>
> Key: HADOOP-17281
> URL: https://issues.apache.org/jira/browse/HADOOP-17281
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Currently, S3AFileSystem only implements the listStatus() API, which returns an 
> array. Once we implement listStatusIterator(), clients can benefit from 
> the async listing added recently in 
> https://issues.apache.org/jira/browse/HADOOP-17074 by performing some tasks 
> on files while iterating over them.
>  
> CC [~stevel]
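The consumption pattern the issue enables can be sketched as below. Hadoop's real API is `FileSystem#listStatusIterator(Path)`, which returns a `RemoteIterator<FileStatus>`; the `PageIterator` class here is a self-contained stand-in that yields results page by page, so callers can start per-file work before the full listing is materialized.

```java
import java.util.Collections;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Self-contained sketch (not the S3A implementation): an iterator over a
// paged listing. In S3A, advancing to the next page is where the result of
// an already-issued asynchronous LIST request would be consumed.
public class IncrementalListingSketch {

    static final class PageIterator implements Iterator<String> {
        private final Iterator<List<String>> pages;
        private Iterator<String> current = Collections.emptyIterator();

        PageIterator(List<List<String>> pageList) {
            this.pages = pageList.iterator();
        }

        @Override
        public boolean hasNext() {
            while (!current.hasNext() && pages.hasNext()) {
                // Advance to the next page of results.
                current = pages.next().iterator();
            }
            return current.hasNext();
        }

        @Override
        public String next() {
            if (!hasNext()) {
                throw new NoSuchElementException();
            }
            return current.next();
        }
    }

    public static void main(String[] args) {
        PageIterator it = new PageIterator(List.of(
                List.of("a.txt", "b.txt"), List.of("c.txt")));
        int count = 0;
        while (it.hasNext()) {
            // Per-file work starts before the whole listing is fetched,
            // unlike listStatus(), which returns a complete array up front.
            System.out.println("processing " + it.next());
            count++;
        }
        System.out.println(count);
    }
}
```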






[GitHub] [hadoop] coder-chenzhi commented on a change in pull request #2358: HADOOP-17295 Move dedicated pre-logging statements into existing logg…

2020-10-07 Thread GitBox


coder-chenzhi commented on a change in pull request #2358:
URL: https://github.com/apache/hadoop/pull/2358#discussion_r500992285



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
##
@@ -566,11 +566,10 @@ public void writeLocalWrapperScript(Path launchDst, Path 
pidFile,
   @Override
   public boolean signalContainer(ContainerSignalContext ctx)
   throws IOException {
-String user = ctx.getUser();
 String pid = ctx.getPid();
 Signal signal = ctx.getSignal();
 LOG.debug("Sending signal {} to pid {} as user {}",

Review comment:
   Sorry for my mistake. I did not notice that the developer had changed this 
logging call to parameterized logging and removed the logging guard.
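The distinction discussed in this review thread can be shown with a self-contained sketch. `MiniLog` below is a stand-in for an SLF4J-style logger, not the real API: with parameterized logging, message formatting is skipped when debug is off, so the `isDebugEnabled()` guard is redundant for cheap arguments; it still pays off when an argument expression itself is expensive (HADOOP-17295's "pre-logging statements").

```java
// Sketch of SLF4J-style parameterized logging. MiniLog is a stand-in
// logger written here for illustration, not the SLF4J API.
public class ParameterizedLoggingSketch {

    static final class MiniLog {
        private final boolean debugEnabled;
        int messagesFormatted = 0; // exposed so the demo can observe cost

        MiniLog(boolean debugEnabled) {
            this.debugEnabled = debugEnabled;
        }

        boolean isDebugEnabled() {
            return debugEnabled;
        }

        void debug(String format, Object... args) {
            if (!debugEnabled) {
                return; // no formatting cost when the level is off
            }
            messagesFormatted++;
            StringBuilder msg = new StringBuilder(format);
            for (Object arg : args) {
                int i = msg.indexOf("{}");
                if (i >= 0) {
                    msg.replace(i, i + 2, String.valueOf(arg));
                }
            }
            System.out.println(msg);
        }
    }

    public static void main(String[] args) {
        MiniLog log = new MiniLog(false);

        // Guard-free parameterized call: cheap even with debug disabled,
        // since only the (already-available) references are passed.
        log.debug("Sending signal {} to pid {} as user {}", "TERM", 4242, "yarn");

        // An expensive argument still needs the guard; without it,
        // buildExpensiveSummary() would run even when debug is off.
        if (log.isDebugEnabled()) {
            log.debug("summary: {}", buildExpensiveSummary());
        }

        System.out.println(log.messagesFormatted);
    }

    static String buildExpensiveSummary() {
        return "pretend this walks a large data structure";
    }
}
```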





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Work logged] (HADOOP-17295) Move dedicated pre-logging statements into existing logging guards

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17295?focusedWorklogId=496583&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496583
 ]

ASF GitHub Bot logged work on HADOOP-17295:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 13:01
Start Date: 07/Oct/20 13:01
Worklog Time Spent: 10m 
  Work Description: coder-chenzhi commented on a change in pull request 
#2358:
URL: https://github.com/apache/hadoop/pull/2358#discussion_r500992285



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
##
@@ -566,11 +566,10 @@ public void writeLocalWrapperScript(Path launchDst, Path 
pidFile,
   @Override
   public boolean signalContainer(ContainerSignalContext ctx)
   throws IOException {
-String user = ctx.getUser();
 String pid = ctx.getPid();
 Signal signal = ctx.getSignal();
 LOG.debug("Sending signal {} to pid {} as user {}",

Review comment:
   Sorry for my mistake. I did not notice that the developer had changed this 
logging call to parameterized logging and removed the logging guard.







Issue Time Tracking
---

Worklog Id: (was: 496583)
Time Spent: 40m  (was: 0.5h)

> Move dedicated pre-logging statements into existing logging guards
> --
>
> Key: HADOOP-17295
> URL: https://issues.apache.org/jira/browse/HADOOP-17295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chen Zhi
>Assignee: Chen Zhi
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> I found some cases where pre-processing statements dedicated to logging 
> calls are not guarded by existing logging guards. Most of them are easy to 
> fix, and fixing them improves both the performance and the maintainability 
> of these logging calls, so I created a PR.
> These issues were detected by a static analysis tool I wrote. The tool 
> extracts all the statements dedicated to each debug-logging call 
> (i.e., statements whose results are only used by debug-logging calls). 
> Debug logs incur overhead in production, such as string concatenation and 
> method calls in the parameters of logging calls, as well as pre-processing 
> statements, and I want to perform a systematic evaluation of the overhead 
> of debug-logging calls in production.






[jira] [Work logged] (HADOOP-17295) Move dedicated pre-logging statements into existing logging guards

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17295?focusedWorklogId=496584&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496584
 ]

ASF GitHub Bot logged work on HADOOP-17295:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 13:01
Start Date: 07/Oct/20 13:01
Worklog Time Spent: 10m 
  Work Description: coder-chenzhi commented on a change in pull request 
#2358:
URL: https://github.com/apache/hadoop/pull/2358#discussion_r500992285



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
##
@@ -566,11 +566,10 @@ public void writeLocalWrapperScript(Path launchDst, Path 
pidFile,
   @Override
   public boolean signalContainer(ContainerSignalContext ctx)
   throws IOException {
-String user = ctx.getUser();
 String pid = ctx.getPid();
 Signal signal = ctx.getSignal();
 LOG.debug("Sending signal {} to pid {} as user {}",

Review comment:
   Sorry for my mistake. I did not notice that the developer had changed this 
logging call to parameterized logging and removed the logging guard. I will 
revert this change.







Issue Time Tracking
---

Worklog Id: (was: 496584)
Time Spent: 50m  (was: 40m)

> Move dedicated pre-logging statements into existing logging guards
> --
>
> Key: HADOOP-17295
> URL: https://issues.apache.org/jira/browse/HADOOP-17295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chen Zhi
>Assignee: Chen Zhi
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> I found some cases where pre-processing statements dedicated to logging 
> calls are not guarded by existing logging guards. Most of them are easy to 
> fix, and fixing them improves both the performance and the maintainability 
> of these logging calls, so I created a PR.
> These issues were detected by a static analysis tool I wrote. The tool 
> extracts all the statements dedicated to each debug-logging call 
> (i.e., statements whose results are only used by debug-logging calls). 
> Debug logs incur overhead in production, such as string concatenation and 
> method calls in the parameters of logging calls, as well as pre-processing 
> statements, and I want to perform a systematic evaluation of the overhead 
> of debug-logging calls in production.






[GitHub] [hadoop] coder-chenzhi commented on a change in pull request #2358: HADOOP-17295 Move dedicated pre-logging statements into existing logg…

2020-10-07 Thread GitBox


coder-chenzhi commented on a change in pull request #2358:
URL: https://github.com/apache/hadoop/pull/2358#discussion_r500992285



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
##
@@ -566,11 +566,10 @@ public void writeLocalWrapperScript(Path launchDst, Path 
pidFile,
   @Override
   public boolean signalContainer(ContainerSignalContext ctx)
   throws IOException {
-String user = ctx.getUser();
 String pid = ctx.getPid();
 Signal signal = ctx.getSignal();
 LOG.debug("Sending signal {} to pid {} as user {}",

Review comment:
   Sorry for my mistake. I did not notice that the developer had changed this 
logging call to parameterized logging and removed the logging guard. I will 
revert this change.








[jira] [Work logged] (HADOOP-17281) Implement FileSystem.listStatusIterator() in S3AFileSystem

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17281?focusedWorklogId=496580&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496580
 ]

ASF GitHub Bot logged work on HADOOP-17281:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 12:59
Start Date: 07/Oct/20 12:59
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #2354:
URL: https://github.com/apache/hadoop/pull/2354


   





Issue Time Tracking
---

Worklog Id: (was: 496580)
Time Spent: 2h 50m  (was: 2h 40m)

> Implement FileSystem.listStatusIterator() in S3AFileSystem
> --
>
> Key: HADOOP-17281
> URL: https://issues.apache.org/jira/browse/HADOOP-17281
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Currently, S3AFileSystem only implements the listStatus() API, which returns an 
> array. Once we implement listStatusIterator(), clients can benefit from 
> the async listing added recently in 
> https://issues.apache.org/jira/browse/HADOOP-17074 by performing some tasks 
> on files while iterating over them.
>  
> CC [~stevel]






[GitHub] [hadoop] steveloughran merged pull request #2354: HADOOP-17281 Implement FileSystem.listStatusIterator() in S3AFileSystem

2020-10-07 Thread GitBox


steveloughran merged pull request #2354:
URL: https://github.com/apache/hadoop/pull/2354


   






[GitHub] [hadoop] coder-chenzhi commented on a change in pull request #2358: HADOOP-17295 Move dedicated pre-logging statements into existing logg…

2020-10-07 Thread GitBox


coder-chenzhi commented on a change in pull request #2358:
URL: https://github.com/apache/hadoop/pull/2358#discussion_r500989847



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
##
@@ -486,12 +486,11 @@ DatanodeCommand cacheReport() throws IOException {
 
   cmd = bpNamenode.cacheReport(bpRegistration, bpid, blockIds);
   long sendTime = monotonicNow();
-  long createCost = createTime - startTime;
   long sendCost = sendTime - createTime;
   dn.getMetrics().addCacheReport(sendCost);
   if (LOG.isDebugEnabled()) {
 LOG.debug("CacheReport of " + blockIds.size()
-+ " block(s) took " + createCost + " msecs to generate and "
++ " block(s) took " + (createTime - startTime) + " msecs to 
generate and "

Review comment:
   IMHO, another reason for this change is that it improves maintainability, 
as `createCost` is only used by the logging call. But I will leave it alone. 
We cannot move the call to `monotonicNow()` into the guard, because its 
result is also used by `dn.getMetrics().addCacheReport(sendCost)`.








[jira] [Work logged] (HADOOP-17281) Implement FileSystem.listStatusIterator() in S3AFileSystem

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17281?focusedWorklogId=496575&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496575
 ]

ASF GitHub Bot logged work on HADOOP-17281:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 12:57
Start Date: 07/Oct/20 12:57
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2354:
URL: https://github.com/apache/hadoop/pull/2354#issuecomment-704489524


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   5m 50s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  39m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  33m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   4m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  31m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 56s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   9m 40s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 36s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  37m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  37m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  33m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  33m 25s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   4m 50s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/4/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 2 new + 112 unchanged - 0 fixed = 114 total (was 
112)  |
   | +1 :green_heart: |  mvnsite  |   4m 57s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  22m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   5m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |  12m 55s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m  0s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 10s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 29s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 323m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2354 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux c0562df480cc 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4347a5c9556 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2354: HADOOP-17281 Implement FileSystem.listStatusIterator() in S3AFileSystem

2020-10-07 Thread GitBox


hadoop-yetus removed a comment on pull request #2354:
URL: https://github.com/apache/hadoop/pull/2354#issuecomment-703751138










[jira] [Work logged] (HADOOP-17295) Move dedicated pre-logging statements into existing logging guards

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17295?focusedWorklogId=496576&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496576
 ]

ASF GitHub Bot logged work on HADOOP-17295:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 12:57
Start Date: 07/Oct/20 12:57
Worklog Time Spent: 10m 
  Work Description: coder-chenzhi commented on a change in pull request 
#2358:
URL: https://github.com/apache/hadoop/pull/2358#discussion_r500989847



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
##
@@ -486,12 +486,11 @@ DatanodeCommand cacheReport() throws IOException {
 
   cmd = bpNamenode.cacheReport(bpRegistration, bpid, blockIds);
   long sendTime = monotonicNow();
-  long createCost = createTime - startTime;
   long sendCost = sendTime - createTime;
   dn.getMetrics().addCacheReport(sendCost);
   if (LOG.isDebugEnabled()) {
 LOG.debug("CacheReport of " + blockIds.size()
-+ " block(s) took " + createCost + " msecs to generate and "
++ " block(s) took " + (createTime - startTime) + " msecs to 
generate and "

Review comment:
   IMHO, another reason for this change is that it improves maintainability, 
as `createCost` is only used by the logging call. But I will leave it alone. 
We cannot move the call to `monotonicNow()` into the guard, because its 
result is also used by `dn.getMetrics().addCacheReport(sendCost)`.







Issue Time Tracking
---

Worklog Id: (was: 496576)
Time Spent: 0.5h  (was: 20m)

> Move dedicated pre-logging statements into existing logging guards
> --
>
> Key: HADOOP-17295
> URL: https://issues.apache.org/jira/browse/HADOOP-17295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chen Zhi
>Assignee: Chen Zhi
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> I found some cases where pre-processing statements dedicated to logging 
> calls are not guarded by existing logging guards. Most of them are easy to 
> fix, and fixing them improves both the performance and the maintainability 
> of these logging calls, so I created a PR.
> These issues were detected by a static analysis tool I wrote. The tool 
> extracts all the statements dedicated to each debug-logging call 
> (i.e., statements whose results are only used by debug-logging calls). 
> Debug logs incur overhead in production, such as string concatenation and 
> method calls in the parameters of logging calls, as well as pre-processing 
> statements, and I want to perform a systematic evaluation of the overhead 
> of debug-logging calls in production.






[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #2354: HADOOP-17281 Implement FileSystem.listStatusIterator() in S3AFileSystem

2020-10-07 Thread GitBox


hadoop-yetus removed a comment on pull request #2354:
URL: https://github.com/apache/hadoop/pull/2354#issuecomment-704489524


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  1s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |   5m 50s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  37m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  39m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  compile  |  33m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   4m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 48s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  31m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   3m 47s |  |  trunk passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 56s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   9m 40s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 36s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  37m 56s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javac  |  37m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  33m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  javac  |  33m 25s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   4m 50s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/4/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 2 new + 112 unchanged - 0 fixed = 114 total (was 
112)  |
   | +1 :green_heart: |  mvnsite  |   4m 57s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  22m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   2m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1  |
   | +1 :green_heart: |  javadoc  |   5m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01  |
   | +1 :green_heart: |  findbugs  |  12m 55s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m  0s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 10s |  |  hadoop-hdfs-client in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 29s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   1m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 323m 55s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2354 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle markdownlint |
   | uname | Linux c0562df480cc 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4347a5c9556 |
   | Default Java | Private Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.8+10-post-Ubuntu-0ubuntu118.04.1 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_265-8u265-b01-0ubuntu2~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2354/4/testReport/ |
   | Max. process+thread count | 3117 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 

[jira] [Work logged] (HADOOP-17281) Implement FileSystem.listStatusIterator() in S3AFileSystem

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17281?focusedWorklogId=496574&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496574
 ]

ASF GitHub Bot logged work on HADOOP-17281:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 12:57
Start Date: 07/Oct/20 12:57
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #2354:
URL: https://github.com/apache/hadoop/pull/2354#issuecomment-703751138









Issue Time Tracking
---

Worklog Id: (was: 496574)
Time Spent: 2.5h  (was: 2h 20m)

> Implement FileSystem.listStatusIterator() in S3AFileSystem
> --
>
> Key: HADOOP-17281
> URL: https://issues.apache.org/jira/browse/HADOOP-17281
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs/s3
>Affects Versions: 3.3.0
>Reporter: Mukund Thakur
>Assignee: Mukund Thakur
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Currently S3AFileSystem only implements the listStatus() API, which returns an 
> array. Once we implement listStatusIterator(), clients can benefit from 
> the async listing added recently in 
> https://issues.apache.org/jira/browse/HADOOP-17074 by performing tasks 
> on files while iterating over them.
>  
> CC [~stevel]
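The benefit described above — doing client-side work on early entries while later pages of the listing are still being fetched — can be sketched with a minimal stand-in for Hadoop's RemoteIterator. This is an illustration of the pattern only; the `PagedListing`/`drain` names below are invented, not S3A APIs:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Minimal stand-in for a paged RemoteIterator: pages are fetched lazily,
// so the caller can process early entries before later pages arrive.
public class PagedListing {
    private final List<List<String>> pages;   // simulated paged server responses
    private int nextPage = 0;
    private final Queue<String> buffer = new ArrayDeque<>();

    public PagedListing(List<List<String>> pages) {
        this.pages = pages;
    }

    // Fetch the next page only when the buffer runs dry (lazy paging).
    public boolean hasNext() {
        while (buffer.isEmpty() && nextPage < pages.size()) {
            // in S3A this would be a (possibly async) LIST call
            buffer.addAll(pages.get(nextPage++));
        }
        return !buffer.isEmpty();
    }

    public String next() {
        if (!hasNext()) throw new IllegalStateException("end of listing");
        return buffer.remove();
    }

    // Drain the iterator, processing each entry as it arrives; with an array
    // API, no entry could be processed until the whole listing completed.
    public static List<String> drain(PagedListing it) {
        List<String> out = new ArrayList<>();
        while (it.hasNext()) {
            out.add(it.next());
        }
        return out;
    }
}
```

With an array-returning listStatus(), the first entry is only available after every page has been fetched; the iterator form surfaces entries page by page.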



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?focusedWorklogId=496571&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496571
 ]

ASF GitHub Bot logged work on HADOOP-17125:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 12:56
Start Date: 07/Oct/20 12:56
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-704916354


   JIRA closed, added a release note.





Issue Time Tracking
---

Worklog Id: (was: 496571)
Time Spent: 24h 50m  (was: 24h 40m)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Assignee: DB Tsai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 24h 50m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires the native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. This approach is also 
> platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance costs 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It bundles native binaries for Linux, Mac, and IBM platforms in the 
> jar file, and it can automatically load them into the JVM from the jar 
> without any setup. If a native implementation cannot be found for a 
> platform, it falls back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].
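The load-native-or-fall-back behaviour described for snappy-java can be sketched in plain Java. This is a generic illustration of the pattern, not snappy-java's actual loader code, and the library name is deliberately bogus so the pure-Java fallback branch is exercised:

```java
// Sketch of the "try native, fall back to pure Java" loading pattern:
// attempt to load a bundled native library, and if the platform has none,
// switch to a pure-Java code path instead of failing.
public class NativeLoader {
    public enum Impl { NATIVE, PURE_JAVA }

    public static Impl load(String libraryName) {
        try {
            // In snappy-java the jar's bundled .so/.dylib/.dll would be
            // extracted and resolved here.
            System.loadLibrary(libraryName);
            return Impl.NATIVE;
        } catch (UnsatisfiedLinkError e) {
            // No native binary for this platform: degrade gracefully to the
            // pure-Java implementation rather than crashing.
            return Impl.PURE_JAVA;
        }
    }

    public static void main(String[] args) {
        // "no-such-native-codec" does not exist, so this takes the fallback path.
        System.out.println(load("no-such-native-codec"));
    }
}
```

The key point is that the caller never configures `java.library.path` or `LD_LIBRARY_PATH`; the codec picks the best available implementation at load time.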






[GitHub] [hadoop] steveloughran commented on pull request #2297: HADOOP-17125. Using snappy-java in SnappyCodec

2020-10-07 Thread GitBox


steveloughran commented on pull request #2297:
URL: https://github.com/apache/hadoop/pull/2297#issuecomment-704916354


   JIRA closed, added a release note.






[jira] [Updated] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-10-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17125:

Release Note: The SnappyCodec uses the snappy-java compression library, 
rather than explicitly referencing native binaries. It contains the native 
libraries for many operating systems and instruction sets, falling back to a 
pure-Java implementation. It does require the snappy-java.jar to be on the 
classpath. It can be found in hadoop-common/lib, and has already been present 
as part of the avro dependencies.  (was: The SnappyCodec uses the java-snappy 
codec, rather than the native one. This means it works across more platforms 
- just make sure the snappy-java.jar is on the classpath. It can be found in 
hadoop-common/lib.)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Assignee: DB Tsai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 24h 40m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires the native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. This approach is also 
> platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance costs 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It bundles native binaries for Linux, Mac, and IBM platforms in the 
> jar file, and it can automatically load them into the JVM from the jar 
> without any setup. If a native implementation cannot be found for a 
> platform, it falls back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].






[jira] [Updated] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-10-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17125:

Release Note: The SnappyCodec uses the java-snappy codec, rather than the 
native one. This means it works across more platforms - just make sure the 
snappy-java.jar is on the classpath. It can be found in hadoop-common/lib.  
(was: The SnappyCodec uses the java-snappy codec, rather than the native one. 
This means it works across platforms - just make sure the snappy-java.jar is on 
the classpath. It can be found in hadoop-common/lib)

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Assignee: DB Tsai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 24h 40m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires the native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. This approach is also 
> platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance costs 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It bundles native binaries for Linux, Mac, and IBM platforms in the 
> jar file, and it can automatically load them into the JVM from the jar 
> without any setup. If a native implementation cannot be found for a 
> platform, it falls back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].






[jira] [Updated] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-10-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17125:

Release Note: The SnappyCodec uses the java-snappy codec, rather than the 
native one. This means it works across platforms - just make sure the 
snappy-java.jar is on the classpath. It can be found in hadoop-common/lib.

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Assignee: DB Tsai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 24h 40m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires the native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. This approach is also 
> platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance costs 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It bundles native binaries for Linux, Mac, and IBM platforms in the 
> jar file, and it can automatically load them into the JVM from the jar 
> without any setup. If a native implementation cannot be found for a 
> platform, it falls back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].






[jira] [Assigned] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-10-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-17125:
---

Assignee: DB Tsai

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Assignee: DB Tsai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 24h 40m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires the native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. This approach is also 
> platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance costs 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It bundles native binaries for Linux, Mac, and IBM platforms in the 
> jar file, and it can automatically load them into the JVM from the jar 
> without any setup. If a native implementation cannot be found for a 
> platform, it falls back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].






[jira] [Resolved] (HADOOP-17125) Using snappy-java in SnappyCodec

2020-10-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17125.
-
Fix Version/s: 3.3.1
   Resolution: Fixed

> Using snappy-java in SnappyCodec
> 
>
> Key: HADOOP-17125
> URL: https://issues.apache.org/jira/browse/HADOOP-17125
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.3.0
>Reporter: DB Tsai
>Assignee: DB Tsai
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 24h 40m
>  Remaining Estimate: 0h
>
> In Hadoop, we use native libs for the snappy codec, which has several 
> disadvantages:
>  * It requires the native *libhadoop* and *libsnappy* to be installed in the system 
> *LD_LIBRARY_PATH*, and they have to be installed separately on each node of 
> the clusters, container images, or local test environments, which adds huge 
> complexity from a deployment point of view. In some environments, it requires 
> compiling the natives from source, which is non-trivial. This approach is also 
> platform dependent; the binary may not work on a different platform, so it 
> requires recompilation.
>  * It requires extra configuration of *java.library.path* to load the 
> natives, and it results in higher application deployment and maintenance costs 
> for users.
> Projects such as *Spark* and *Parquet* use 
> [snappy-java|https://github.com/xerial/snappy-java], which is a JNI-based 
> implementation. It bundles native binaries for Linux, Mac, and IBM platforms in the 
> jar file, and it can automatically load them into the JVM from the jar 
> without any setup. If a native implementation cannot be found for a 
> platform, it falls back to a pure-Java implementation of snappy based on 
> [aircompressor|https://github.com/airlift/aircompressor/tree/master/src/main/java/io/airlift/compress/snappy].






[jira] [Work logged] (HADOOP-17038) Support positional read in AbfsInputStream

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17038?focusedWorklogId=496560&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496560
 ]

ASF GitHub Bot logged work on HADOOP-17038:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 12:47
Start Date: 07/Oct/20 12:47
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2206:
URL: https://github.com/apache/hadoop/pull/2206#issuecomment-704911520


   why the close? I know it didn't quite work as is, but we should see what 
could be lifted/merged, in particular: tests





Issue Time Tracking
---

Worklog Id: (was: 496560)
Time Spent: 0.5h  (was: 20m)

> Support positional read in AbfsInputStream
> --
>
> Key: HADOOP-17038
> URL: https://issues.apache.org/jira/browse/HADOOP-17038
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Major
>  Labels: HBase, abfsactive, pull-request-available
> Attachments: HBase Perf Test Report.xlsx, screenshot-1.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Right now it will do a seek to the position, read, and then seek back to the 
> old position (as per the implementation in the super class).
> In HBase-style workloads we rely mostly on short preads (64 KB 
> by default), so it would be ideal to support a pure positional-read API which 
> does not even keep the data in a buffer but only reads the data 
> asked for by the caller (not reading ahead more data as per the read-size 
> config).
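The contrast between the default seek/read/seek-back behaviour and a pure positional read can be illustrated with the JDK's FileChannel, whose `read(ByteBuffer, position)` variant reads at an absolute offset without touching the channel's current position. This is a stdlib analogue of the pread idea, not the AbfsInputStream code:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// A true positional read: absolute offset, no mutation of shared stream
// state. The default seek/read/seek-back pattern instead moves the stream
// position twice, which is hostile to concurrent preads.
public class PositionalRead {
    static String pread(FileChannel ch, long pos, int len) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(len);
        ch.read(buf, pos);            // does NOT move ch.position()
        buf.flip();
        return StandardCharsets.UTF_8.decode(buf).toString();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("pread", ".txt");
        Files.writeString(tmp, "hello world");
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            long before = ch.position();
            String s = pread(ch, 6, 5);   // read "world" at offset 6
            // position unchanged: no seek-back needed
            System.out.println(s + " " + (ch.position() == before));
        } finally {
            Files.delete(tmp);
        }
    }
}
```

Because no shared position moves, multiple threads can issue preads against the same handle without coordinating seeks, which is what short HBase-style preads want.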






[GitHub] [hadoop] steveloughran commented on pull request #2206: HADOOP-17038 Support positional read in AbfsInputStream

2020-10-07 Thread GitBox


steveloughran commented on pull request #2206:
URL: https://github.com/apache/hadoop/pull/2206#issuecomment-704911520


   why the close? I know it didn't quite work as is, but we should see what 
could be lifted/merged, in particular: tests






[jira] [Work logged] (HADOOP-17038) Support positional read in AbfsInputStream

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17038?focusedWorklogId=496490&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496490
 ]

ASF GitHub Bot logged work on HADOOP-17038:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 10:54
Start Date: 07/Oct/20 10:54
Worklog Time Spent: 10m 
  Work Description: anoopsjohn closed pull request #2206:
URL: https://github.com/apache/hadoop/pull/2206


   





Issue Time Tracking
---

Worklog Id: (was: 496490)
Time Spent: 20m  (was: 10m)

> Support positional read in AbfsInputStream
> --
>
> Key: HADOOP-17038
> URL: https://issues.apache.org/jira/browse/HADOOP-17038
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Major
>  Labels: HBase, abfsactive, pull-request-available
> Attachments: HBase Perf Test Report.xlsx, screenshot-1.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Right now it will do a seek to the position, read, and then seek back to the 
> old position (as per the implementation in the super class).
> In HBase-style workloads we rely mostly on short preads (64 KB 
> by default), so it would be ideal to support a pure positional-read API which 
> does not even keep the data in a buffer but only reads the data 
> asked for by the caller (not reading ahead more data as per the read-size 
> config).






[GitHub] [hadoop] anoopsjohn closed pull request #2206: HADOOP-17038 Support positional read in AbfsInputStream

2020-10-07 Thread GitBox


anoopsjohn closed pull request #2206:
URL: https://github.com/apache/hadoop/pull/2206


   






[jira] [Assigned] (HADOOP-17295) Move dedicated pre-logging statements into existing logging guards

2020-10-07 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-17295:
---

Assignee: Chen Zhi

> Move dedicated pre-logging statements into existing logging guards
> --
>
> Key: HADOOP-17295
> URL: https://issues.apache.org/jira/browse/HADOOP-17295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chen Zhi
>Assignee: Chen Zhi
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I found some cases where pre-processing statements dedicated to logging 
> calls are not guarded by existing logging guards. Most of them are easy to 
> fix, and the performance and maintainability of these logging calls can be 
> improved to some extent, so I created a PR to fix them.
> These issues were detected by a static analysis tool I wrote. The 
> tool extracts all the statements dedicated to each debug-logging call 
> (i.e., statements whose results are only used by debug-logging calls). 
> I did this because debug logs incur overhead in production, such as 
> string concatenation and method calls in the parameters of logging calls as 
> well as pre-processing statements, and I want to perform a systematic 
> evaluation of the overhead of debug-logging calls in production.
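The guard pattern the PR applies can be sketched with java.util.logging as a stdlib stand-in for SLF4J's `LOG.isDebugEnabled()` (the method and field names below are illustrative, not Hadoop code):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Work done only to feed a debug message (here, joining block IDs into a
// string) is moved inside the isLoggable/isDebugEnabled check, so runs with
// debug logging disabled skip the work entirely.
public class LogGuard {
    private static final Logger LOG = Logger.getLogger(LogGuard.class.getName());
    static int expensiveCalls = 0;  // instrumentation to show the guard working

    static String expensiveSummary(int[] blockIds) {
        expensiveCalls++;
        StringBuilder sb = new StringBuilder();
        for (int id : blockIds) sb.append(id).append(',');
        return sb.toString();
    }

    static void report(int[] blockIds) {
        // Unguarded code would call expensiveSummary() unconditionally.
        if (LOG.isLoggable(Level.FINE)) {           // ~ LOG.isDebugEnabled()
            LOG.fine("cache report: " + expensiveSummary(blockIds));
        }
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);                   // debug (FINE) disabled
        report(new int[] {1, 2, 3});
        System.out.println(expensiveCalls);         // 0: guard skipped the work
    }
}
```

SLF4J's `{}` placeholders already defer string formatting, but they cannot defer a *statement* computed before the call; moving that statement inside the guard (or into a lambda-style supplier) is what this issue targets.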






[jira] [Work logged] (HADOOP-17295) Move dedicated pre-logging statements into existing logging guards

2020-10-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17295?focusedWorklogId=496465&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-496465
 ]

ASF GitHub Bot logged work on HADOOP-17295:
---

Author: ASF GitHub Bot
Created on: 07/Oct/20 10:26
Start Date: 07/Oct/20 10:26
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#2358:
URL: https://github.com/apache/hadoop/pull/2358#discussion_r500899509



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
##
@@ -566,11 +566,10 @@ public void writeLocalWrapperScript(Path launchDst, Path 
pidFile,
   @Override
   public boolean signalContainer(ContainerSignalContext ctx)
   throws IOException {
-String user = ctx.getUser();
 String pid = ctx.getPid();
 Signal signal = ctx.getSignal();
 LOG.debug("Sending signal {} to pid {} as user {}",

Review comment:
   unless the debug is guarded, same cost

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
##
@@ -486,12 +486,11 @@ DatanodeCommand cacheReport() throws IOException {
 
   cmd = bpNamenode.cacheReport(bpRegistration, bpid, blockIds);
   long sendTime = monotonicNow();
-  long createCost = createTime - startTime;
   long sendCost = sendTime - createTime;
   dn.getMetrics().addCacheReport(sendCost);
   if (LOG.isDebugEnabled()) {
 LOG.debug("CacheReport of " + blockIds.size()
-+ " block(s) took " + createCost + " msecs to generate and "
++ " block(s) took " + (createTime - startTime) + " msecs to 
generate and "

Review comment:
   the cost of a subtraction is marginal. Moving that monotonicNow() would 
be more significant, as if it is using the x86 `RDTSC` instruction, it's a barrier 
which blocks all speculative execution until complete







Issue Time Tracking
---

Worklog Id: (was: 496465)
Time Spent: 20m  (was: 10m)

> Move dedicated pre-logging statements into existing logging guards
> --
>
> Key: HADOOP-17295
> URL: https://issues.apache.org/jira/browse/HADOOP-17295
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chen Zhi
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I found some cases where pre-processing statements dedicated to logging 
> calls are not guarded by existing logging guards. Most of them are easy to 
> fix, and the performance and maintainability of these logging calls can be 
> improved to some extent, so I created a PR to fix them.
> These issues were detected by a static analysis tool I wrote. The 
> tool extracts all the statements dedicated to each debug-logging call 
> (i.e., statements whose results are only used by debug-logging calls). 
> I did this because debug logs incur overhead in production, such as 
> string concatenation and method calls in the parameters of logging calls as 
> well as pre-processing statements, and I want to perform a systematic 
> evaluation of the overhead of debug-logging calls in production.






[GitHub] [hadoop] steveloughran commented on a change in pull request #2358: HADOOP-17295 Move dedicated pre-logging statements into existing logg…

2020-10-07 Thread GitBox


steveloughran commented on a change in pull request #2358:
URL: https://github.com/apache/hadoop/pull/2358#discussion_r500899509



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/DefaultContainerExecutor.java
##
@@ -566,11 +566,10 @@ public void writeLocalWrapperScript(Path launchDst, Path 
pidFile,
   @Override
   public boolean signalContainer(ContainerSignalContext ctx)
   throws IOException {
-String user = ctx.getUser();
 String pid = ctx.getPid();
 Signal signal = ctx.getSignal();
 LOG.debug("Sending signal {} to pid {} as user {}",

Review comment:
   unless the debug is guarded, same cost

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java
##
@@ -486,12 +486,11 @@ DatanodeCommand cacheReport() throws IOException {
 
   cmd = bpNamenode.cacheReport(bpRegistration, bpid, blockIds);
   long sendTime = monotonicNow();
-  long createCost = createTime - startTime;
   long sendCost = sendTime - createTime;
   dn.getMetrics().addCacheReport(sendCost);
   if (LOG.isDebugEnabled()) {
 LOG.debug("CacheReport of " + blockIds.size()
-+ " block(s) took " + createCost + " msecs to generate and "
++ " block(s) took " + (createTime - startTime) + " msecs to 
generate and "

Review comment:
   the cost of a subtraction is marginal. Moving that monotonicNow() would 
be more significant, as if it is using the x86 `RDTSC` instruction, it's a barrier 
which blocks all speculative execution until complete







